text
stringlengths
0
316k
year
stringclasses
50 values
No
stringclasses
911 values
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4547–4554 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4547 DSCORER: A Fast Evaluation Metric for Discourse Representation Structure Parsing Jiangming Liu Shay B. Cohen Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh [email protected], {scohen,mlap}@inf.ed.ac.uk Abstract Discourse representation structures (DRSs) are scoped semantic representations for texts of arbitrary length. Evaluation of the accuracy of predicted DRSs plays a key role in developing semantic parsers and improving their performance. DRSs are typically visualized as nested boxes, in a way that is not straightforward to process automatically. COUNTER, an evaluation algorithm for DRSs, transforms them to clauses and measures clause overlap by searching for variable mappings between two DRSs. Unfortunately, COUNTER is computationally costly (with respect to memory and CPU time) and does not scale with longer texts. We introduce DSCORER, an efficient new metric which converts box-style DRSs to graphs and then measures the overlap of n-grams in the graphs. Experiments show that DSCORER computes accuracy scores that correlate with scores from COUNTER at a fraction of the time. 1 Introduction Discourse Representation Theory (DRT) is a popular theory of meaning representation (Kamp, 1981; Kamp and Reyle, 2013; Asher, 1993; Asher et al., 2003) designed to account for a variety of linguistic phenomena within and across sentences. The basic meaning-carrying units in DRT are Discourse Representation Structures (DRSs). They consist of discourse referents (e.g., x1, x2) representing entities in the discourse and conditions (e.g., male.n.02(x1), Agent(e1, x1)) representing information about discourse referents. Every variable and condition are bounded by a box label (e.g., b1) which implies that the variable or condition are interpreted in that box. DRSs are constructed recursively. An example of a DRS in boxstyle notation is shown in Figure 1(a). DRS parsing differs from related parsing tasks (e.g., Banarescu et al. 2013) in that it can create representations that go beyond individual sentences. Despite the large amount of recently developed DRS parsing models (van Noord et al., 2018b; van Noord, 2019; Evang, 2019; Liu et al., 2019b; Fancellu et al., 2019; Le et al., 2019), the automatic evaluation of DRSs is not straightforward due to the non-standard DRS format shown in Figure 1(a). It is neither a tree (although a DRS-to-tree conversion exists; see Liu et al. 2018, 2019a for details) nor a graph. Evaluation so far relied on COUNTER (van Noord et al., 2018a) which converts DRSs to clauses shown in Figure 1(b). Given two DRSs with n and m (n ≥m) variables each, COUNTER has to consider n! (n−m)! possible variable mappings in order to find an optimal one for evaluation. The problem of finding this alignment is NP-complete, similar to other metrics such as SMATCH (Cai and Knight, 2013a) for Abstract Meaning Representation. COUNTER uses a greedy hill-climbing algorithm to obtain one-to-one variable mappings, and then computes precision, recall, and F1 scores according to the overlap of clauses between two DRSs. To get around the problem of search errors, the hill-climbing search implementation applies several random restarts. This incurs unacceptable runtime, especially when evaluating document-level DRSs with a large number of variables. Another problem with the current evaluation is that COUNTER only considers local clauses without taking larger window sizes into account. For example, it considers “b4 sing e2” and “b3 NOT b4” as separate semantic units. However, it would also make sense to assess “ b3 NOT b4 sing e2” as a whole without breaking it down into smaller parts. By considering higher-order chains, it is possible to observe more global differences in DRSs which are important when assessing entire documents. In order to address the above issues, we propose DSCORER, a highly efficient metric for the evalu4548 ation of DRS parsing on texts of arbitrary length. DSCORER converts DRSs (predicted and gold) to graphs from which it extracts n-grams, and then computes precision, recall and F1 scores between them. The algorithm operates over n-grams in a fashion similar to BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), which are metrics widely used for evaluating the output of machine translation and summarization systems. While BLEU only calculates precision with a brevity penalty (it is not straightforward to define recall given the wide range of possible translations for a given input), ROUGE is a recall-oriented metric since the summary length is typically constrained by a prespecified budget.1 However, in DRS parsing, there is a single correct semantic representation (goldstandard reference) and no limit on the maximum size of DRSs. Our proposed metric, DSCORER, converts box-style DRSs to a graph format used for evaluation and computes F1 with high efficiency (7,000 times faster compared to COUNTER). We release our code, implementing the metric, at https: //github.com/LeonCrashCode/DRSScorer. 2 DSCORER The proposed metric converts two box-style DRSs into graphs, extracts n-grams from these graphs, and then computes precision, recall, and F1 score based on the n-gram overlap. 2.1 Graph Induction Following the work of van Noord et al. (2018a), box-style DRSs can be converted to clauses as shown in Figure 1(b). For example, box b1 is in a contrast relationship to box b4 within box b0 which corresponds to the clause b0 CONTRAST b1 b4; variable b2 : x1 is converted to clause b2 REF x1, and the condition b1 : t1 < “now” is converted to b1 TPR t1 “now”.2 We now explain how we convert DRSs to graphs. There are two types of clauses depending on the number of arguments: 2-argument clauses (e.g., b2 male.n.02 x1) and 3-argument ones (e.g., b1 Agent e1 x1). The two types of clauses can be formatted as node edge −−−→node and node edge −−−→node edge −−−→node, respectively. For example, clause “b2 male.n.02 x1” is rendered as 1See https://github.com/tensorflow/ tensor2tensor for computing ROUGE F1. 2REF and TPR are operators abbreviating “referent” and “temporally precedes”, respectively; see https://pmb. let.rug.nl/drs.php for more detail. He didn’t play the piano. But she sang. b0 b0 : ¬ b2 : x1, b3 : x2, b1 : e1, b1 : t1 b1 b2 : male.n.02(x1) b1 : time.n.08(t1) b1 : t1 < “now” b1 : play.v.03(e1) b1 : Time(e1, t1) b1 : Theme(e1, x2) b1 : Agent(e1, x1) b3 : piano.n.01(x2) b0 : b5 : x3, b4 : e2, b4 : t2 b4 b5 : female.n.02(x3) b4 : time.n.08(t2) b4 : t2 < “now” b4 : sing.v.01(e2) b4 : Time(e2, t2) b4 : Agent(e2, x3) CONTRAST(b1, b4) (a) b0 CONTRAST b1 b4 b3 REF x2 b0 NOT b1 b3 piano “n.01” x2 b2 REF x1 b5 REF x3 b2 male “n.02” x1 b5 female “n.02” x3 b1 REF e1 b4 REF e2 b1 REF t1 b4 REF t2 b1 Agent e1 x1 b4 Agent e2 x3 b1 TPR t1 “now” b4 TPR t2 “now” b1 Theme e1 x2 b4 Time e2 t2 b1 Time e1 t1 b4 sing “v.01” e2 b1 play “v.03” e1 b4 time “n.08” t2 b1 time “n.08” t1 (b) b0(B) b1(B) b4(B) CONTRAST-A1 CONTRAST-A2 NOT b2(B) b3(B) x1(X) x2(X) e1(E) male.n.02 piano.n.01 play.v.03 Agent-A2 Theme-A2 Agent-A1 Theme-A1 e2(E) x3(X) b5(B) female.n.02 sing.v.01 Agent-A1 Agent-A2 (c) Figure 1: (a) Box-style DRS for the text “He didn’t play the piano but she sang.”; (b) Clause-style DRS format for COUNTER; (c) Proposed graph-style DRS format (abridged version shown; complete graphs can be found in the Appendix). b2 male.n.02 −−−−−−→x1, and clause “b1 Agent e1 x1” as b1 Agent-A1 −−−−−−→e1 Agent-A2 −−−−−−→x1. Same nodes are further merged to a single node. For example, x1 nodes in b2 male.n.02 −−−−−−→x1 and e1 Agent-A2 −−−−−−→x1 are merged to a single node x1. The induced graph is directed and yields the chain b1 Agent-A1 −−−−−−→ e1 Agent-A2 −−−−−−→x1. In order to capture interactions between chains, (e.g., chain b2 male.n.02 −−−−−−→x1, assigns x1 as a predicate “male.n.02” but x1 is also 4549 an agent), we make edges bidirectional (red in Figure 1(c)) if they do not connect the two b nodes. Next, we rewrite the nodes, keeping their type3 (e.g., B, X, E, S, P, and T) but not their indices and the resulting graph is shown in Figure 1(c). In addition to being typed, variables can be distinguished by their neighboring nodes and connecting edges. For example, the two E nodes are different. One is on the path B play.v.03 −−−−−→E Theme-A2 −−−−−−→ X piano.n.01 −−−−−−→B showing that the Theme of the predicate play is piano, and the other is on the path B sing.v.01 −−−−−→E Agent-A2 −−−−−−→X female.n.02 −−−−−−−→B showing that the Agent of the predicate sing is female. To compare two graphs, we compute the overlap between extracted paths instead of searching for best node mappings, which saves computational resources (i.e., CPU memory and time). 2.2 Evaluation Based on n-grams An n-gram in our case is an Euler path4 on a graph with n edges. For example, B Theme-A1 −−−−−−→E is a 1-gram as it contains a single edge, B Theme-A1 −−−−−−→ E Theme-A2 −−−−−−→X piano.n.01 −−−−−−→B is a 3-gram since it has three edges, and a single node is a 0-gram. We extract the n-grams for each node in a graph. Due to the high sparsity of graphs typical for DRSs, the number of n-grams does not explode as the size of graphs increases, |G| = |N| + |E|, where |N| and |E| are the number of nodes and edges in graph G, respectively. Given the n-grams of predicted and gold DRS graphs, we compute precision pk and recall rk as: pk = |k-gramspred ∩k-gramsgold| |k-gramspred| (1) rk = |k-gramspred ∩k-gramsgold| |k-gramsgold| (2) where k-gramspred and k-gramsgold are k-grams on predicted and gold DRS graphs, respectively, and fk = 2pkrk pk+rk , where p0 = r0 = f0 = min(|Npred|,|Ngold|) max(|Npred|,|Ngold|). DSCORER calculates precision, recall, and F1 as: DSCORERnF = exp n X k=1 wk log Fk ! (3) 3B refers to box labels, X to entities, E to events, S refers to states, P to propositions, and T to time. 4An Euler path is a path that visits every edge of a graph exactly once (allowing for revisiting nodes). 0 10 20 0 0.5 1 1.5 |G| (102) # of n-gram (106) (a) 2 4 6 8 0 1 2 |G| (101) # of n-gram (103) (b) Figure 2: Number of n-grams in (a) GMB and (b) PMB. Red points are 4-grams, blue points are 3-grams, green points are 2-grams and black points are 1-grams. where wk is a fixed weight for k-gram (0 ≤k ≤n) counts, and F ∈{p, r, f}. 3 Experiments In our experiments, we investigate the correlation between DSCORER and COUNTER, and the efficiency of the two metrics. We present results on two datasets, namely the Groningen Meaning Bank (GMB; Bos et al. 2017) and the Parallel Meaning Bank (PMB; Abzianidze et al. 2017). We compare two published systems on the GMB: DRTS-sent which is a sentence-level parser (Liu et al., 2018) and DRTS-doc which is a documentlevel parser (Liu et al., 2019a). On the PMB, we compare seven systems: Boxer, a CCG-based parser (Bos, 2015), AMR2DRS, a rule-based parser that converts AMRs to DRSs, SIM-SPAR giving the DRS in the training set most similar to the current DRS, SPAR giving a fixed DRS for each sentence, seq2seq-char, a character-based sequence-tosequence clause parser (van Noord et al., 2018b), seq2seq-word, a word-based sequence-to-sequence clause parser, and a transformer-based clause parser (Liu et al., 2019b). 3.1 Metric Settings COUNTER takes 100 hill-climbing restarts to search for the best variable mappings on PMB and 10 restarts on GMB. Both DSCORER and COUNTER are computed on one CPU (2.10GHz). The weight w0 is set to 0.1 and the weights wk (1 ≤k ≤n) in DSCORER are set to 0.9/n, where n = 4. 3.2 Analysis We analyze the number of n-grams extracted by DSCORER; we also report the values obtained by 4550 Systems COUNTER DSCORER P R F1 PMB SPAR 39.7 6.5 19.7 9.2 AMR2DRS 43.2 17.5 23.3 19.7 SIM-SPAR 56.8 41.8 39.2 40.2 Boxer 74.3 56.7 58.4 57.6 seq2seq-word 83.1 72.4 75.1 73.7 seq2seq-char 83.6 71.9 75.3 73.5 transformer 87.4 79.8 82.1 80.9 GMB DRTS-sent 77.9 66.7 65.3 65.9 DRTS-doc 66.7 60.0 62.9 61.4 Table 1: System evaluation according to COUNTER and DSCORER which runs on 4-grams. dataset |G| |NG| COUNTER DSCORER PMB 39.93 7.83 0.006 0.004 GMB-sent 122.07 20.28 3.03 0.14 GMB-doc 801.87 120.86 14428.68 2.35 Table 2: Average runtime (secs) for a pair of DRSs, where |G| is the average graph size and |NG| is the average number of nodes in a graph. DSCORER and COUNTER on the two datasets, their correlation, and efficiency. Number of n-grams Figure 2(a) shows the number of n-grams across graphs in GMB where the largest size of 4-grams extracted on one graph is 1.47 × 106. Figure 2(b) shows the number of n-grams across graphs in PMB where the largest size of 4-grams extracted on one graph is 2.27×103. The number of n-grams will increase exponentially with n or as the size of the graph increases. Nevertheless, the number of 4-grams remains manageable. We set k = 4 for computing our metric (see Equations (1) and (2)) as 4-grams are detailed enough to capture differences between meaning representations whilst avoiding overly strict matching (which would render the similarity between predicted and gold DRSs unncessarily low and not very useful). Metric Values Table 1 shows the various scores assigned by DSCORER and COUNTER to the different systems. We observe similar trends for both metrics; DSCORER penalizes more harshly SPAR and SIM-SPAR, which output random DRSs without any parsing algorithm. Generally speaking, the two metrics are highly correlated; across systems and datatasets, Pearson’s correlation coefficient r is 0.93 on 1-grams, 0.94 on 2-grams, 0.91 on 3-grams, and 0.88 on 4-grams, with 2-grams being most correlated. This is not surprising, 2-grams 0 0.2 0.4 0.6 0.8 1 0 0.5 1 y = x3 scores given by COUNTER scores given by DSCORER Figure 3: Pearson’s r between DSCORER (on 4-grams) and COUNTER (across systems and datasets). in DSCORER are most similar to COUNTER which only considers predicates with at most two arguments. Figure 3 shows the 4-gram correlation between COUNTER and DSCORER. We found most points are around the curve of y = x3, which means that considering high-order grams renders the two metrics less similar, but nevertheless allows to more faithfully capture similarities or discrepancies between DRSs. Efficiency Table 2 shows the average run-time for COUNTER and DSCORER on a pair of DRSs. Both metrics have similar run-times on PMB which mostly consists of small graphs. However, in GMB, which consists of larger graphs with many nodes, the run-time of COUNTER explodes (more than 4 hours per graph), while DSCORER evaluates DRSs within an acceptable time frame (2.35 seconds per graph). In GMB-doc, DSCORER runs seven thousand times faster than COUNTER, showing it is very efficient at comparing large graphs. 3.3 Case Study We further conducted a case study in order to analyze what the two metrics measure. Figure 4 shows two different sentences in their clause-style DRS format used by COUNTER and graph-style DRS format used by DSCORER. Note that the two sentences have totally different meanings (distinguished using various meaning constructs in the corresponding DRSs). Using COUNTER to compare the two sentences yields an F1 of 47.06, which drops to 16.11 when employing DSCORER on 4-grams. Note that DSCORER on 1-grams obtains an F1 of 46.42 which is close to COUNTER. COUNTER takes matching clauses into account 4551 Tom is putting the children to bed . He smiled . b1 REF x1 b3 Agent e1 x1 b1 Name x1 “tom” b3 Theme e1 x2 b1 male “n.02” x1 b3 put “v.01” e1 b3 Time e1 t1 b2 REF x2 b4 REF t1 b2 child “n.01” x2 b4 EQU t1 “now” b3 Destination e1 x3 b4 time “n.08” t1 b3 REF x3 b3 REF e1 b3 bed “n.01” x3 b1 REF x1 b2 Agent e1 x1 b1 male “n.02” x1 b2 REF e1 b3 REF t1 b2 Time e1 t1 b3 TPR t1 “now” b2 smile “v.01” e1 b3 time “n.08” t1 b3(B) e1(E) t1(T) x1(X) x3(X) x2(X) b1(B) “tom” b4 “now” b2 put.v.01 Agent-A1 Agent-A2 Theme-A1 Theme-A2 Time-A1 Time-A2 Destination-A1 Destination-A2 bed.n.01 Name-A1 Name-A2 EQU-A1 EQU-A2 time.n.08 child.n.01 b2(B) e1(E) t1(T) x1(X) b1(B) b3 “now” smile.v.01 Agent-A1 Agent-A2 Time-A1 Time-A2 time.n.08 TPR-A1 TPR-A2 smile.v.01 (a) (b) Figure 4: (a) DRS for the sentence “Tom is putting the children to bed.”; (b) DRS for the sentence “He smiled.”; we omit the “REF” relation from the graph for the sake of clarity. (marked as red in Figure 4), which might inflate the similarity between two sentences without actually measuring their core meaning. For example, the common relation “b3 Time e1 t1” is matched to “b2 Time e1 t1” without considering what e1 and t1 are. Instead, DSCORER aims to find matches for paths B Time−A1 −−−−−−→e1 Time−A2 −−−−−−→t1 and B smile.v.01 −−−−−−→ e1 Time−A2 −−−−−−→t1 as well. And the mismatch of the second path reduces the final score. 4 Related Work The metric SEMBLEU (Song and Gildea, 2019) is most closely related to ours. It evaluates AMR graphs by calculating precision based on n-gram overlap. SEMBLEU yields scores more consistent with human evaluation than SMATCH (Cai and Knight, 2013b), an AMR metric which is the basis of COUNTER. SEMBLEU cannot be directly used on DRS graphs due to the large amount of indexed variables and the fact that the graphs are not explicitly given; moreover, our metric outputs F1 scores instead of precision only. Opitz et al. (2020) propose a set of principles for AMR-related metrics, showing the advantages and drawbacks of alignment- and BLEU-based AMR metrics. However, efficiency of the metric is crucial for the development of document-level models of semantic parsing. Basile and Bos (2013) propose to represent DRSs via Discourse Representation Graphs (DRGs) which are acyclic and directed. However, DRGs are similar to flattened trees, and not able to capture clause-level information (e.g., b1 Agent e1 x1) required for evaluation (van Noord et al., 2018a). 5 Conclusions In this work we proposed DSCORER, as a DRS evaluation metric alternative to COUNTER. Our metric is significantly more efficient than COUNTER and considers high-order DRSs. DSCORER allows to speed up model selection and development removing the bottleneck of evaluation time. Acknowledgments We thank the anonymous reviewers for their feedback. We gratefully acknowledge the support of the European Research Council (Lapata, Liu; award number 681760), the EU H2020 project SUMMA (Cohen, Liu; grant agreement 688139) and Bloomberg (Cohen, Liu). 4552 References Lasha Abzianidze, Johannes Bjerva, Kilian Evang, Hessel Haagsma, Rik van Noord, Pierre Ludmann, Duc-Duy Nguyen, and Johan Bos. 2017. The parallel meaning bank: Towards a multilingual corpus of translations annotated with compositional meaning representations. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 242–247, Valencia, Spain. Association for Computational Linguistics. Nicholas Asher. 1993. Reference to abstract objects in english. Nicholas Asher, Nicholas Michael Asher, and Alex Lascarides. 2003. Logics of conversation. Cambridge University Press. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics. Valerio Basile and Johan Bos. 2013. Aligning formal meaning representations with surface strings for wide-coverage text generation. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 1–9. Johan Bos. 2015. Open-domain semantic parsing with boxer. In Proceedings of the 20th nordic conference of computational linguistics (NODALIDA 2015), pages 301–304. Johan Bos, Valerio Basile, Kilian Evang, Noortje Venhuizen, and Johannes Bjerva. 2017. The groningen meaning bank. In Nancy Ide and James Pustejovsky, editors, Handbook of Linguistic Annotation, volume 2, pages 463–496. Springer. Shu Cai and Kevin Knight. 2013a. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752. Shu Cai and Kevin Knight. 2013b. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics. Kilian Evang. 2019. Transition-based DRS parsing using stack-LSTMs. In Proceedings of the IWCS Shared Task on Semantic Parsing, Gothenburg, Sweden. Association for Computational Linguistics. Federico Fancellu, Sorcha Gilroy, Adam Lopez, and Mirella Lapata. 2019. Semantic graph parsing with recurrent neural network DAG grammars. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2769– 2778, Hong Kong, China. Association for Computational Linguistics. Hans Kamp. 1981. A theory of truth and semantic representation. Formal semantics-the essential readings, pages 189–222. Hans Kamp and Uwe Reyle. 2013. From discourse to logic: Introduction to model theoretic semantics of natural language, formal logic and discourse representation theory, volume 42. Springer Science & Business Media. Ngoc Luyen Le, Yannis Haralambous, and Philippe Lenca. 2019. Towards a drs parsing framework for french. In Advances in Natural Language Processing, Grannada, Spain. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2018. Discourse representation structure parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 429–439, Melbourne, Australia. Association for Computational Linguistics. Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2019a. Discourse representation parsing for sentences and documents. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6248–6262, Florence, Italy. Association for Computational Linguistics. Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2019b. Discourse representation structure parsing with recurrent neural networks and the transformer model. In Proceedings of the IWCS Shared Task on Semantic Parsing, Gothenburg, Sweden. Association for Computational Linguistics. Rik van Noord. 2019. Neural boxer at the IWCS shared task on DRS parsing. In Proceedings of the IWCS Shared Task on Semantic Parsing, Gothenburg, Sweden. Association for Computational Linguistics. Rik van Noord, Lasha Abzianidze, Hessel Haagsma, and Johan Bos. 2018a. Evaluating scoped meaning representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). Rik van Noord, Lasha Abzianidze, Antonio Toral, and Johan Bos. 2018b. Exploring neural methods for parsing discourse representation structures. Transactions of the Association for Computational Linguistics, 6:619–633. 4553 Juri Opitz, Letitia Parcalabescu, and Anette Frank. 2020. Amr similarity metrics from principles. arXiv preprint arXiv:2001.10929. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Linfeng Song and Daniel Gildea. 2019. SemBleu: A robust metric for AMR parsing evaluation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4547– 4552, Florence, Italy. Association for Computational Linguistics. 4554 A Appendix Figure 5 shows the complete graph for Figure 1(c). b0 b1 b4 CONTRAST-A1 CONTRAST-A2 NOT b2 b3 x1 x2 e1 t1 “Now” REF REF REF REF male.n.02 piano.n.01 play.v.03 Agent-A2 Theme-A2 Agent-A1 Theme-A1 time.n.08 TPR-A2 Time-A1 Time-A2 TPR-A1 e2 x3 b5 t2 “Now” REF REF REF female.n.02 sing.v.01 Agent-A1 Agent-A2 time.n.08 TPR-A1 TPR-A2 Time-A1 Time-A2 Figure 5: The complete DRS graph for Figure 1(c)
2020
416
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4555–4567 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4555 ParaCrawl: Web-Scale Acquisition of Parallel Corpora Marta Ba˜n´on†, Pinzhen Chen‡, Barry Haddow‡, Kenneth Heafield‡, Hieu Hoang‡ Miquel Espl`a-Gomis⋆, Mikel Forcada⋆, Amir Kamran♦, Faheem Kirefu‡ Philipp Koehn§, Sergio Ortiz-Rojas†, Leopoldo Pla⋆, Gema Ram´ırez-S´anchez† Elsa Sarr´ıas⋆, Marek Strelec‡, Brian Thompson§, William Waites‡, Dion Wiggins▲ Jaume Zaragoza† †Prompsit, ‡University of Edinburgh, ⋆University of Alicante §Johns Hopkins University, ♦TAUS, ▲Omniscien Technologies Abstract We report on methods to create the largest publicly available parallel corpora by crawling the web, using open source software. We empirically compare alternative methods and publish benchmark data sets for sentence alignment and sentence pair filtering. We also describe the parallel corpora released and evaluate their quality and their usefulness to create machine translation systems. 1 Introduction Parallel corpora are essential for building highquality machine translation systems and have found uses in many other natural language applications, such as learning paraphrases (Bannard and Callison-Burch, 2005; Hu et al., 2019) or cross-lingual projection of language tools (Yarowsky et al., 2001). We report on work to create the largest publicly available parallel corpora by crawling hundreds of thousands of web sites, using open source tools. The processing pipeline consists of the steps: crawling, text extraction, document alignment, sentence alignment, and sentence pair filtering. We describe these steps in detail in Sections 4–8. For some of these steps we evaluate several methods empirically in terms of their impact on machine translation quality. We provide the data resources used in these evaluations as benchmarks for future research. As part of these effort, several open source components have been developed. These are integrated into the open-source tool Bitextor,1 a highly modular pipeline that allows harvesting parallel corpora from multilingual websites or from preexisting or historical web crawls such as the one available as part of the Internet Archive.2 1https://github.com/bitextor/bitextor 2https://archive.org/ The execution of the pipeline has focused on official European Union languages, but also targeted Russian, Sinhala, Nepali, Tagalog, Swahili, and Somali. We show that the obtained parallel corpora improve state-of-the-art results on common benchmarks, such as the WMT Shared Task on News Translation. 2 Related Work While the idea of mining the web for parallel data has been already pursued in the 20th century (Resnik, 1999), the most serious efforts have been limited to large companies such as Google (Uszkoreit et al., 2010) and Microsoft (Rarrick et al., 2011), or targeted efforts on specific domains such as the Canadian Hansards and Europarl (Koehn, 2005). The book Bitext Alignment (Tiedemann, 2011) describes some of the challenges in greater detail. 2.1 Acquisition Efforts Most publicly available parallel corpora are the result of targeted efforts to extract the translations from a specific source. The French–English Canadian Hansards3 were used in the earliest work on statistical machine translation. A similar popular corpus is Europarl (Koehn, 2005), used throughout the WMT evaluation campaign. Multi-lingual web sites are attractive targets. Rafalovitch and Dale (2009); Ziemski et al. (2015) extract data from the United Nations, T¨ager (2011) from European Patents, Lison and Tiedemann (2016) from a collection of TV and movie subtitles. Cettolo et al. (2012) explain the creation of a multilingual parallel corpus of subtitles from the TED Talks website which is popular due to its use in the IWSLT evaluation campaign. 3https://www.isi.edu/natural-language/ download/hansard/ 4556 There are also various efforts targeted at a single language pair. Martin et al. (2003) build a parallel corpus for Inuktitut–English. Utiyama and Isahara (2003); Fukushima et al. (2006) worked on creating Japanese–English corpora. Uchiyama and Isahara (2007) report on the efforts to build a Japanese–English patent corpus and Macken et al. (2007) on efforts on a broad-based Dutch–English corpus. Li and Liu (2008) mine the web for a Chinese–English corpus. A large Czech–English corpus from various sources was collected (Bojar et al., 2010), linguistically annotated (Bojar et al., 2012), and has been continuously extended to over 300 million words (Bojar et al., 2016). All these efforts rely on methods and implementations that are quite specific for each use case, not documented in great detail, and not publicly available. A discussion of the pitfalls during the construction of parallel corpora is given by Kaalep and Veskis (2007). A large collection of corpora is maintained at the OPUS web site4 (Tiedemann, 2012). 2.2 Document Alignment Document alignment can be defined as a matching task that takes a pair of documents and computes a score that reflects the likelihood that they are translations of each others. The task is typically limited to a single web domain (all web pages from www.aaa.com and aaa.com, possibly aaa.de but not bbb.com) for efficiency. Matching may take the HTML structure into account, or purely rely on the textual content. Examples of structural matching is the use of editdistance between linearized documents (Resnik and Smith, 2003) and probability of a probabilistic DOM-tree alignment model (Shi et al., 2006). Using the URL for matching is a very powerful indicator for some domains, typically by using a predefined set of patterns for language marking or simple Levenshtein distance (Le et al., 2016). Content matching requires crossing the language barrier at some point, typically by using bilingual dictionaries or translating one of the documents into the other document’s language (Uszkoreit et al., 2010). Documents may be represented by vectors over word frequencies, typically td-idf-weighted. Vectors may also be constructed over bigrams (Dara and Lin, 2016) or even higher order n-grams 4http://opus.lingfil.uu.se/ (Uszkoreit et al., 2010). The vectors are then typically matched with cosine similarity (Buck and Koehn, 2016a). The raw vectors may be recentered around the mean vector for a web domain (Germann, 2016) Document alignment quality can be improved with additional features such ratio of shared links, similarity of link URLs, ratio of shared images, binary feature indicating if the documents are linked, DOM structure similarity (Espl`a-Gomis et al., 2016), same numbers (Papavassiliou et al., 2016), or same named entities (Lohar et al., 2016). Guo et al. (2019) introduce the use of document embeddings, constructed from sentence embeddings, to the document alignment task. 2.3 Sentence Alignment Early sentence aligners (Brown et al., 1991; Gale and Church, 1993) use scoring functions based only on the number of words or characters in each sentence and alignment algorithms based on dynamic programming. Europarl, for example, used metadata to align paragraphs, typically consisting of 2-5 sentences, and using Gale and Church (1993)’s method to align sentences within corresponding paragraphs. Later work added lexical features and heuristics to speed up search, such as limiting the search space to be near the diagonal (Moore, 2002; Varga et al., 2005). More recent work introduced scoring methods that use MT to get both documents into the same language (Sennrich and Volk, 2010) or use pruned phrase tables from a statistical MT system (Gomes and Lopes, 2016). Both methods “anchor” highprobability 1–1 alignments in the search space and then fill in and refine alignments. They later propose an extension (Sennrich and Volk, 2011) in which an SMT system is bootstrapped from an initial alignment and then used in Bleualign. Vecalign (Thompson and Koehn, 2019) is a sentence alignment method that relies on bilingual sentence embeddings and achieves linear run time with a coarse-to-fine dynamic programming algorithm. 2.4 Sentence Pair Filtering Parallel corpora that have been crawled from unverified web sites and processed by error-prone extraction and alignment methods are likely to contain noise, such as random text fragments, text in the wrong language, translations produced by machine translation tools or bad translators, and 4557 misaligned sentence pairs. Such noise is specially harmful for neural machine translation (Khayrallah and Koehn, 2018), so filtering it out is an essential processing step. There is a robust body of work on filtering out noise in parallel data but most recently this topic has gained a lot of momentum, partly due to the lack of robustness of neural models and fostered by recent shared tasks on parallel corpus filtering under high-resource (Koehn et al., 2018) and lowresource data conditions (Koehn et al., 2019). Most participants in these shared tasks used three components: pre-filtering rules, scoring functions for sentence pairs, and a classifier that learned weights for feature functions. Pre-filtering rules. Some of the training data can be discarded based on simple deterministic filtering rules. This may remove over 80% of the data (Kurfalı and ¨Ostling, 2019; Soares and Costa-juss`a, 2019). Such rules remove too short or too long sentences, sentences that have too few words (tokens with letters instead of just special characters), either absolute or relative to the total number of tokens, sentences whose average token length is too short or too long, sentence pairs with mismatched lengths in terms of number of tokens, sentence pairs where names, numbers, dates, email addresses, URLs do not match between both sides, sentence pairs that are too similar, indicating simple copying instead of translating, and sentences where language identifier do not detect the required language. Scoring functions. Sentence pairs that pass the pre-filtering stage are assessed with scoring functions which provide scores that hopefully correlate with quality of sentence pairs. Participants used a variety of such scoring functions, including n-gram or neural language models on clean data (Rossenbach et al., 2018), language models trained on the provided raw data as contrast, neural translation models (JunczysDowmunt, 2018), bag-of-words lexical translation probabilities (Gonz´alez-Rubio, 2019), or even existing off-the-shelf tools like Zipporah and Bicleaner (Chaudhary et al., 2019). Learning weights for scoring functions. Given a large number of scoring functions, simply averaging their resulting scores may be inadequate. Learning weights to optimize machine translation system quality is computationally intractable due to the high cost of training these systems to evaluate different weight settings. A few participants used instead a classifier that learns how to distinguish between good and bad sentence pairs (where bad sentence pairs are either synthesized by scrambling good sentence pairs or selected from the raw crawled data). A novel method that was central to the bestperforming submission in WMT 2019 was the use of cross-lingual sentence embeddings that were directly trained from parallel sentence pairs (Chaudhary et al., 2019). Other submissions used monolingual word embeddings (Soares and Costajuss`a, 2019; Kurfalı and ¨Ostling, 2019; BernierColborne and Lo, 2019). Another approach is to first train a translation system on the clean data, then use it to translate the non-English side into English and use monolingual matching methods to compare it against the English side of the parallel corpus. Different matching metrics were used: METEOR (Erdmann and Gwinnup, 2019), Levenshtein distance (Sen et al., 2019), or BLEU (Parcheta et al., 2019), As Rarrick et al. (2011) point out, one type of noise in parallel corpora extracted from the web are translations that have been created by machine translation. Venugopal et al. (2011) propose a method to watermark the output of machine translation systems to aid this distinction, with a negligible loss of quality. Antonova and Misyurev (2011) report that rule-based machine translation output can be detected due to certain word choices, and statistical machine translation output can be detected due to lack of reordering. Rarrick et al. (2011) train a classifier to learn the distinction and show that removing such data leads to better translation quality. 2.5 Comparable Corpus Mining Our work exploits web sites that provide roughly the same content in multiple languages, leading us to the assumption to find pairs of web pages which are translations of each other, with translated sentences following the same order. This assumption does not hold in less consistently translated web content such as Wikipedia, or accidental parallel sentence found in news stories about the same subject matter written in multiple languages. There have been increasing efforts to mine sentence pairs from large pools of multi-lingual text, which are treated as unstructured bags of sen4558 tences. Munteanu and Marcu (2005) use document retrieval and a maximum entropy classifier to identify parallel sentence pairs in a multi-lingual collection of news stories. Bilingual sentence embeddings (Guo et al., 2018) and multilingual sentence embeddings (Artetxe and Schwenk, 2018) were tested on their ability to reconstruct parallel corpora. This lead to work to construct WikiMatrix, a large corpus of parallel sentences from Wikipedia (Schwenk et al., 2019) based on cosine distance of their crosslingual sentence embeddings. 3 Identifying Multi-Lingual Web Sites Since the start of the collection effort in 2015, we identified potential web sites to crawl in various ways, but mainly by exploiting statistics from CommonCrawl. By splitting this large collection of crawled web pages by web domain and running text extraction and language identification (Buck et al., 2014), we can extract statistics on what language content exists on each of them. Web domains with sufficient content in a targeted language and English are selected for crawling. The thresholds of what constitutes sufficient content varied depending on language. Typically, we require minimum amounts of content in the targeted language and English (measured in bytes of text), and consider the ratio between the two. For instance, we identified 19,616 web domains with at least 100KB of content in German and English (max ratio 10), but only 438 web domains with at least 20KB of content in Maltese and English (max ratio 10). It is worth noting that by targeted crawling of web sites we are able to collect many more web pages than present in CommonCrawl. In an exploratory study, only 5% of a collection of web pages with useful content were found in CommonCrawl. This may have improved with recent more extensive crawls by CommonCrawl but there is still a strong argument for targeted crawling. 4 Crawling Crawling is the initial step of the pipeline. It entails downloading documents from a number of websites and looking for any documents that contain text. These documents are stored as single or multi-domain Web ARChive (WARC) files. WARC is an archiving format for crawled data originally proposed by the Internet Archive Figure 1: Workflow diagram of Bitextor and developed by a consortium of libraries and archives into the ISO 28500:2009 standard (ISO, 2009). It consists of a list of gzip-compressed records, each comprising a header with metadata and a crawled document. Four different crawling tools are currently supported in Bitextor: HTTrack5 Well-known multi-platform tool for crawling. It has been for long time in Bitextor, even though it is now deprecated as the support for the tool is discontinued. Heritrix6 Internet Archive’s web crawler; it is fully compatible with WARC format and supports 5https://www.httrack.com/ 6https://github.com/internetarchive/ heritrix3 4559 a variety of options that make it one of the most suitable options for large scale data crawling. Creepy7 Python library with basic resources for crawling. A crawler has been implemented on top of it, and is currently experimental. Wget One of the most popular tools for retrieving files through HTTP and HTTPS in Unix systems. It is fully compatible with WARC format. Most of our crawling in ParaCrawl has been done using HTTrack. To deal with the I/Ointensive process of writing small files with high frequency, data is first stored on local SSD drives and then transferred to a network file system for subsequent processing. 5 Text Extraction After crawling, all documents are pre-processed to extract and normalize the text and identify their language. The resulting cleaned and sorted text is the input for the subsequent steps of document and segment alignment (see Sections 6 and 7). Conversion to HTML WARC files contain one web-crawled document per record. The documents can be in a variety of formats that contain text: plain text, HTML, Open Document Format8 (”.odt”), Office Open XML9 (”.docx”) or PDF files containing text. With the exception of the small number of documents that are already in plain text format, the bitextor-warc2htmlwarc.py module converts any of these formats to HTML (see fig. 1) and produces WARC files containing only HTML or plain text documents. Text extraction from HTML Given WARC files containing HTML, we extract the text content. We preserve sentence breaks indicated by HTML tags such as <p> or <br> (paragraph and line break), but remove formatting tags such as <b> (for bold text) without a trace. Language identification with cld2 and text extraction are currently performed by Python module bitextor-warc2preprocess.py; as text extraction is a rather intensive operation, an alternative workflow uses an experimental module written in the Go language, giawarc. 7https://github.com/aitjcize/creepy 8https://www.oasis-open.org/standards# opendocumentv1.2 9http://www.ecma-international.org/ publications/standards/Ecma-376.htm 6 Document Alignment There are two main workflows for document alignment. Using bilingual lexica The traditional workflow in Bitextor until version 5 used bilingual lexica. Module bitextor-buildidx.py builds indexes of documents containing, for each word in the lexicon for each language, the documents containing it. Then bitextor-idx2ridx uses the bilingual lexica to translate these words and build reverse indexes where each document is paired to a list of documents and bag-of-words-based overlap scores in the other language. A series of modules (bitextor-urlscomparison.py, bitextor-urlsetoverlap.py, bitextorimagestooverlap.py, etc.), compute a series of features for each language direction based on mutual linking and the comparison of document URLs, the set of outgoing URLs, HTML structure and image content; these features are integrated by bitextor-rank.py into two new reverse-index file with new scores, which are used to obtain the final document alignment. Using machine translation This workflow uses machine translation to decide whether two documents have to be aligned, and is the one that has been used for the parallel data releases of the project (Buck and Koehn, 2016b). After extract-lett.py extracts plain-text documents in each language, a machine translation system translates each document from language A to B. We then generate a (sparse) matrix of tf-idf scores between machine translated versions of documents in language A and documents in language B. These scores are used by compute_matches.py to compute a list of document pairs (score, source URL, target URL). Document pairs are stored in a file in which each line contains the URLs of both documents and their plain-text content encoded in base64. 7 Sentence Alignment During the ParaCrawl project, we made use of a few sentence alignment tools. In this paper, we compare their performance on five language pairs. The sentence aligners are: Hunalign (Varga et al., 2005) is a widely used tool that relies on a bilingual dictionary that we 4560 Language Web Document English Domains Pairs Tokens German 21,806 17,109,018 10,788,923,009 Czech 12,179 6,661,650 4,089,806,440 Hungarian 5,560 2,770,432 1,504,698,348 Estonian 5,129 2,301,309 1,427,328,440 Maltese 933 303,198 134,232,546 Table 1: Corpus statistics for data used in the sentence alignment evaluation. Number of English tokens is computed with the Unix command wc. generated from the Europarl corpus or other available parallel corpora. Bleualign (Sennrich and Volk, 2010) aligns an English translation of the foreign sentences and the English sentences based on their similarity, as measured by a variant of the BLEU score. We implemented a faster version of Bleualign in C++. Vecalign (Thompson and Koehn, 2019) is a new sentence aligner based on sentence embeddings, using an efficient coarse-to-fine algorithm with linear run time. We used pre-trained LASER embeddings10 which cover all the languages of ParaCrawl, except for Irish. We compared the quality of the sentence pairs extracted from document pairs for these tools. To our knowledge, this is the first evaluation of sentence aligners on large-scale real-world webcrawled data. We selected five languages, ranging from low resource (Maltese) over mid-resource (Estonian, Hungarian) to high-resource (Czech, German). We selected a subset of web domains, for details see Table 1. The data is provided as document pairs from the usual upstream ParaCrawl processing. The text of web pages needs to be further split into sentences, and then aligned using the different sentence aligners. The resulting sentence pairs are deduplicated are assessed for quality using Bicleaner (more on sentence pair filtering in the next section). Since different sentence aligners generate different amounts of data (for instance, Bleualign filters quite aggressively for noise), we selected differently sized subsets of the data for evaluation by selecting the best sentence pairs according to Bicleaner quality scores. We built neural machine translation models on these subsets using 10https://engineering.fb.com/ai-research/ laser-multilingual-sentence-embeddings/ Language Hunalign Vecalign Bleualign German 35.1 (100m) 35.8 (150m) 35.0 (100m) Czech 21.0 (50m) 21.2 (50m) 21.0 (50m) Hungarian 16.5 (30m) 16.8 (30m) 16.6 (15m) Estonian 21.8 (20m) 21.6 (20m) 21.4 (20m) Maltese 33.5 (5m) 34.1 (7m) 30.3 (2m) Table 2: BLEU scores for systems trained on corpora generated by different sentence aligners. Different subsets are selected based on Bicleaner scores, size of the subsets is given in number of million English tokens. Fairseq and evaluated them on test sets drawn from the WMT news translation task (newstest2018 for German, Czech, Estonian; newstest2009 for Hungarian) and the EU Bookshop11 corpus (Maltese). See Table 2 for the BLEU scores and corpus sizes for the best-performing subsets for each sentence aligner and language. Vecalign gives the best results for 4 of the languages, and is slightly behind Hunalign for Estonian. We published the document pairs to be aligned, as well as the testing environment12 to promote the evaluation of novel sentence alignment methods. 8 Sentence Pair Filtering Our processing pipeline is aimed at high recall at the cost of precision, thus creating large but very noisy corpora. So, as a last processing step, we aim to filter out sentence pairs that are not useful as training data for machine translation or any other purpose. This is especially important since training on noisy corpora is a challenge for neural machine translation which motivated the organization of two shared tasks in 2018 and 2019, on the high resource language German–English and the low resource languages Sinhala and Nepali, respectively. Here, we extend this evaluation to European languages with medium sized resources. Building on the data sets generated by the sentence alignment evaluation of the previous section, we compared three sentence pair filtering methods used in the ParaCrawl effort: Zipporah (Xu and Koehn, 2017), Bicleaner (S´anchez-Cartagena et al., 2018), and LASER (Chaudhary et al., 2019). We carried out the evaluation (see Table 3) in the same fashion, as in the previous section. Filtering by LASER scores gives the best results except for Maltese (for which the publicly available 11http://opus.nlpl.eu/EUbookshop.php 12http://www.statmt.org/ paracrawl-benchmarks/ 4561 Setup Zipporah Bicleaner LASER de, Hunalign 34.4 (100m) 35.1 (100m) 36.0 (100m) de, Vecalign 34.6 (100m) 35.8 (100m) 36.3 (50m) cs, Hunalign 19.1 (15m) 21.0 (50m) 22.2 (30m) cs, Vecalign 21.4 (30m) 21.2 (50m) 22.2 (30m) hu, Hunalign 16.2 (10m) 16.5 (30m) 17.2 (10m) hu, Vecalign 16.9 (15m) 16.8 (30m) 17.2 (15m) et, Hunalign 21.2 (15m) 21.8 (20m) 22.1 (15m) et, Vecalign 21.3 (20m) 21.6 (20m) 22.9 (20m) mt, Hunalign 32.8 (5m) 33.5 (7m) 32.6 (7m) mt, Vecalign 33.8 (5m) 34.1 (5m) 30.2 (7m) Table 3: BLEU scores for systems trained on subsets of the data selected by different sentence pair filtering methods. The size of the subsets in millions of English words is also reported. LASER model has not been trained). Moreover, in almost all settings, we achieve better results with Bicleaner than Zipporah. 9 Released Corpora Overall, the ParaCrawl corpus release v5.0 contains a total of 223 million filtered13, unique sentence pairs from around 150k website domains and across 23 EU languages with English (see Table 5). However, the data release is highly imbalanced with 73% of sentence pairs comprising of just five languages: French, German, Spanish, Italian and Portuguese. The average (untokenised) English sentence length (over all languages) is 22.9 words, with some notable anomalies. For example, the low-resourced Irish-English pair (27.6 words) has over 50% of sentence pairs originating from the legal domain, where sentences are longer than usual. Furthermore, we noticed that filtered sentences which had been aligned using Hunalign were significantly shorter than those aligned by Bleualign (26.1 and 20.1 words respectively), although we are unsure of the exact reason for this discrepancy. Our main motivation for creating the ParaCrawl corpus is to improve the quality of machine translation systems. To test this, we trained neural machine translation models where we added the corpus to existing data sets for language pairs that were tackled in the shared task on news translation at the Conference on Machine Translation (WMT) — which we consider a strong baseline. 13Sentence pairs with a Bicleaner score of less than 0.7 were discarded, but remain in the RAW release. 14sacreBLEU signatures: BLEU+case.mixed+lang.*-*+numrefs.1+smooth.exp+ tok.13a+version.1.4.2 Pair BLEU 14 BLEU WMT WMT+ParaCrawl-5 en-cs 19.0 (52m) 19.8 (52m+5.3m) cs-en 25.0 (52m) 25.7 (52m+5.3m) en-de 26.2 (5.8m) 27.7 (5.8m+37m) de-en 31.2 (5.8m) 34.0 (5.8m+37m) en-fi 19.9 (2.6m) 23.3 (2.6m+3.0m) fi-en 24.2 (2.6m) 29.9 (2.6m+3.0m) en-lv 12.8 (4.5m) 16.2 (4.5m+1.0m) lv-en 16.2 (4.5m) 20.2 (4.5m+1.0m) en-ro 26.5 (0.6m) 28.6 (0.6m+2.8m) ro-en 30.2 (0.6m) 35.7 (0.6m+2.8m) Table 4: BLEU scores for machine translation systems trained with WMT data adding ParaCrawl release v5.0 data. All the training and test sets are from WMT17 except for Romanian, taken from WMT16. The systems are transformer base trained with Marian using SentencePiece. Sentences are reported in millions. We trained Transformer-Base models with Marian using SentencePiece. See Table 4 for results. For most language pairs, we see gains of several BLEU points (up to 6 BLEU points for English– Romanian). We even see gains for English–Czech, were ParaCrawl is quite a bit smaller than existing data sets (+0.7 BLEU when adding 5.3m sentence pairs to the existing set of 52m sentence pairs). 10 Computational Costs Concerns Several of the steps involved in producing and evaluating the ParaCrawl corpora are computationally expensive. Even as some of the steps are embarrassingly parallel and amenable processing in a high-performance computing setting, even pre-processing of 100TB of source data to produce candidate documents consumes on the order of 50,000 CPU-hours equivalent to an estimated15 720kWh of power. Training of a neural network model for translating one of the more resource-rich languages such as German may take a week on a dozen GPUs again consuming about 750kWh. Translating 500 million German sentences to English for evaluation consumed roughly 7MWh. In practice, these computations are not simply performed once, they are performed many times as parameters are changed and different strategies tried. This energy cost is significant. The Typical Domestic Consumption Values published by 15The datasheet of an Intel E5-2695 processor says that it uses 115W of power or about 9.5W/core. This estimate includes a 50% margin for main board power and other overhead. 4562 Language Pair Web domains Raw Corpus Clean Corpus Sentence Pairs English Words Sentence Pairs English Words Bulgarian–English 4,762 248,555,951 1,564,051,100 2,586,277 55,725,444 Croatian–English 8,889 273,330,006 1,738,164,401 1,861,590 43,464,197 Czech–English 14,335 665,535,115 4,025,512,842 5,280,149 117,385,158 Danish–English 19,776 447,743,455 3,347,135,236 4,606,183 106,565,546 Dutch–English 17,887 1,101,087,006 6,792,400,704 10,596,717 233,087,345 Estonian–English 9,522 168,091,382 915,074,587 1,387,869 30,858,140 Finnish–English 11,028 460,181,215 2,731,068,033 3,097,223 66,385,933 French–English 48,498 4,273,819,421 24,983,683,983 51,316,168 1,178,317,233 German–English 67,977 5,038,103,659 27,994,213,177 36,936,714 929,818,868 Greek–English 11,343 640,502,801 3,768,712,672 3,830,643 88,669,279 Hungarian–English 9,522 461,181,772 3,208,285,083 4,187,051 104,292,635 Irish–English 1,283 64,628,733 667,211,260 782,769 21,909,039 Italian–English 31,518 2,251,771,798 13,150,606,108 22,100,078 533,512,632 Latvian–English 3,557 176,113,669 1,069,218,155 1,019,003 23,656,140 Lithuanian–English 4,678 198,101,611 963,384,230 1,270,933 27,214,054 Maltese–English 672 3,693,930 38,492,028 177,244 4,252,814 Polish–English 13,357 723,052,912 4,123,972,411 6,382,371 145,802,939 Portuguese–English 18,887 1,068,161,866 6,537,298,891 13,860,663 299,634,135 Romanian–English 9,335 510,209,923 3,034,045,929 2,870,687 62,189,306 Slovak–English 7,980 269,067,288 1,416,750,646 2,365,339 45,636,383 Slovenian–English 5,016 175,682,959 1,003,867,134 1,406,645 31,855,427 Spanish–English 36,211 2,674,900,280 16,598,620,402 38,971,348 897,891,704 Swedish–English 13,616 620,338,561 3,496,650,816 6,079,175 138,264,978 Russian–English 14,035 1,078,819,759 12,061,155 157,061,045 Dutch–French 7,700 38,164,560 Dutch: 770,141,393 2,687,331 Dutch: 60,504,313 French: 817,973,481 French: 64,650,034 Polish–German 5,549 11,060,105 Polish: 202,765,359 916,522 Polish: 18,883,576 German: 198,442,547 German: 20,271,637 Table 5: Size of corpus release 5. The corpus is released in two versions: Raw is very noisy data before the sentence pair filtering step. Clean has been proven to be useful for training machine translation systems. We release the raw corpus to allow use of other filtering methods, or different thresholds for quality cutoffs. Ofgem16, the UK energy regulator, say that a highconsuming household with electric heating is expected to consume 7.1MWh/year. Does an increase of one or two BLEU points justify this cost? For ParaCrawl, we argue that yes, it does, because we are producing an enabling data set whose cost will, we hope, be amortised across many future experiments. But there is a more general point to be made here: it is not currently the practice in the machine translation community to publish figures about the cost involved in achieving an increase in performance as measured with the standard metrics. It is not straightforward to evaluate when or if we, as a community, have reached a point of diminishing returns where small changes to a family of methods consume an ever-increasing amount of resources yielding only marginal improvements. We therefore suggest adopting a practice of disclosing energy use for experiments in machine translation alongside BLEU scores to make the 16https://www.ofgem.gov.uk/electricity/ retail-market/monitoring-data-and-statistics/ typical-domestic-consumption-values cost-benefit trade-off explicit. 11 Conclusions We released the largest publicly available parallel corpora for many language pairs and demonstrated their benefit to train machine translation systems. Going beyond providing data, the goals of this project include the creation of publicly available infrastructure to explore new research directions on parallel corpus mining by releasing open source code for the entire pipeline and public benchmarks for individual processing steps. Each of the processing steps we describe here still have great potential for improvement, and we hope that our work contributes to the development of novel methods both in terms of better processing of raw parallel data sources, but also increasing the robustness of neural machine translation training when faced with noisy data. We are especially interested in further extending this work into low resource languages where resources tend to be noisier and underlying models to support data mining less reliable. 4563 Acknowledgement This work has been supported in part by three projects funded by the Connecting Europe Facility of the European Union (paracrawl.eu), two Google Faculty Research Awards to Philipp Koehn, a Mozilla research grant to Kenneth Heafield, and a donation from eBay to Kenneth Heafield. Hosting is provided by the AWS Public Dataset Program. This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (http://www.csd3.cam.ac.uk/), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). This paper is the authors’ opinion and not necessarily that of the funders. References Alexandra Antonova and Alexey Misyurev. 2011. Building a web-based parallel corpus and filtering out machine-translated text. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pages 136–144, Portland, Oregon. Association for Computational Linguistics. Mikel Artetxe and Holger Schwenk. 2018. Marginbased parallel corpus mining with multilingual sentence embeddings. CoRR, abs/1811.01136. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 597–604, Ann Arbor, Michigan. Association for Computational Linguistics. Gabriel Bernier-Colborne and Chi-kiu Lo. 2019. NRC parallel corpus filtering system for WMT 2019. In Proceedings of the Fourth Conference on Machine Translation (WMT). Ondˇrej Bojar, Ondˇrej Duˇsek, Tom Kocmi, Jindˇrich Libovick´y, Michal Nov´ak, Martin Popel, Roman Sudarikov, and Duˇsan Variˇs. 2016. CzEng 1.6: Enlarged Czech-English Parallel Corpus with Processing Tools Dockered. In Text, Speech, and Dialogue: 19th International Conference, TSD 2016, number 9924 in Lecture Notes in Computer Science, pages 231–238, Cham / Heidelberg / New York / Dordrecht / London. Masaryk University, Springer International Publishing. Ondˇrej Bojar, Adam Liˇska, and Zdenˇek ˇZabokrtsk´y. 2010. Evaluating utility of data sources in a large parallel Czech-English corpus CzEng 0.9. In Proceedings of LREC2010. Ondˇrej Bojar, Zdenˇek ˇZabokrtsk´y, Ondˇrej Duˇsek, Petra Galuˇsˇc´akov´a, Martin Majliˇs, David Mareˇcek, Jiˇr´ı Marˇs´ık, Michal Nov´ak, Martin Popel, and Aleˇs Tamchyna. 2012. The Joy of Parallelism with CzEng 1.0. In Proceedings of LREC2012, Istanbul, Turkey. ELRA, European Language Resources Association. Peter F. Brown, Jennifer C. Lai, and Robert L. Mercer. 1991. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Meeting on Association for Computational Linguistics, ACL ’91, pages 169–176, Stroudsburg, PA, USA. Association for Computational Linguistics. Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram counts and language models from the common crawl. In Proceedings of the Language Resources and Evaluation Conference (LREC), Reykjav´ık, Iceland. Christian Buck and Philipp Koehn. 2016a. Findings of the wmt 2016 bilingual document alignment shared task. In Proceedings of the First Conference on Machine Translation, pages 554–563, Berlin, Germany. Association for Computational Linguistics. Christian Buck and Philipp Koehn. 2016b. Quick and reliable document alignment via tf/idf-weighted cosine distance. In Proceedings of the First Conference on Machine Translation, pages 672–678, Berlin, Germany. Association for Computational Linguistics. M. Cettolo, C. Girardi, and M. Federico. 2012. WIT3: Web inventory of transcribed and translated talks. In Proceedings of th 16th International Conference of the European Association for Machine Translation (EAMT), pages 261–268. Vishrav Chaudhary, Yuqing Tang, Francisco Guzm´an, Holger Schwenk, and Philipp Koehn. 2019. Lowresource corpus filtering using multilingual sentence embeddings. In Proceedings of the Fourth Conference on Machine Translation (WMT). Aswarth Abhilash Dara and Yiu-Chang Lin. 2016. Yoda system for wmt16 shared task: Bilingual document alignment. In Proceedings of the First Conference on Machine Translation. Grant Erdmann and Jeremy Gwinnup. 2019. Quality and coverage: The afrl submission to the wmt19 parallel corpus filtering for low-resource conditions task. In Proceedings of the Fourth Conference on Machine Translation (WMT). Miquel Espl`a-Gomis, Mikel Forcada, Sergio Ortiz Rojas, and Jorge Ferr´andez-Tordera. 2016. Bitextor’s participation in wmt’16: shared task on document alignment. In Proceedings of the First Conference on Machine Translation, pages 685–691, Berlin, 4564 Germany. Association for Computational Linguistics. Ken’ichi Fukushima, Kenjiro Taura, and Takashi Chikayama. 2006. A fast and accurate method for detecting English-Japanese parallel texts. In Proceedings of the Workshop on Multilingual Language Resources and Interoperability, pages 60–67, Sydney, Australia. Association for Computational Linguistics. William A Gale and Kenneth W Church. 1993. A program for aligning sentences in bilingual corpora. Computational linguistics, 19(1):75–102. Ulrich Germann. 2016. Bilingual document alignment with latent semantic indexing. In Proceedings of the First Conference on Machine Translation, pages 692–696, Berlin, Germany. Association for Computational Linguistics. Lu´ıs Gomes and Gabriel Pereira Lopes. 2016. First steps towards coverage-based sentence alignment. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 2228–2231, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Jes´us Gonz´alez-Rubio. 2019. Webinterpret submission to the wmt2019 shared task on parallel corpus filtering. In Proceedings of the Fourth Conference on Machine Translation (WMT). Mandy Guo, Qinlan Shen, Yinfei Yang, Heming Ge, Daniel Cer, Gustavo Hernand ez Abrego, Keith Stevens, Noah Constant, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Effective parallel corpus mining using bilingual sentence embeddings. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 165–176, Belgium, Brussels. Association for Computational Linguistics. Mandy Guo, Yinfei Yang, Keith Stevens, Daniel Cer, Heming Ge, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Hierarchical document encoder for parallel corpus mining. In Proceedings of the Fourth Conference on Machine Translation, pages 64–72, Florence, Italy. Association for Computational Linguistics. J. Edward Hu, Rachel Rudinger, Matt Post, and Benjamin Van Durme. 2019. Parabank: Monolingual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation. In Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19). International Organization for Standardization ISO. 2009. ISO 28500:2009 information and documentation-WARC file format. Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation, Belgium, Brussels. Association for Computational Linguistics. Heiki-Jaan Kaalep and Kaarel Veskis. 2007. Comparing parallel corpora and evaluating their quality. In Proceedings of the MT Summit XI. Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74–83. Association for Computational Linguistics. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the Tenth Machine Translation Summit (MT Summit X), Phuket, Thailand. Philipp Koehn, Francisco Guzm´an, Vishrav Chaudhary, and Juan Pino. 2019. Findings of the WMT 2019 shared task on parallel corpus filtering for low-resource conditions. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 54–72, Florence, Italy. Association for Computational Linguistics. Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L. Forcada. 2018. Findings of the wmt 2018 shared task on parallel corpus filtering. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 726–739, Belgium, Brussels. Association for Computational Linguistics. Murathan Kurfalı and Robert ¨Ostling. 2019. Noisy parallel corpus filtering through projected word embeddings. In Proceedings of the Fourth Conference on Machine Translation (WMT). Thanh Le, Hoa Trong Vu, Jonathan Oberl¨ander, and Ondˇrej Bojar. 2016. Using term position similarity and language modeling for bilingual document alignment. In Proceedings of the First Conference on Machine Translation, pages 710–716, Berlin, Germany. Association for Computational Linguistics. Bo Li and Juan Liu. 2008. Mining Chinese-English parallel corpora from the web. In Proceedings of the 3rd International Joint Conference on Natural Language Processing (IJCNLP). Pierre Lison and J¨org Tiedemann. 2016. Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Language Resources and Evaluation Conference (LREC). Pintu Lohar, Haithem Afli, Chao-Hong Liu, and Andy Way. 2016. The adapt bilingual document alignment system at wmt16. In Proceedings of the First Conference on Machine Translation, pages 717– 723, Berlin, Germany. Association for Computational Linguistics. Lieve Macken, Julia Trushkina, and Lidia Rura. 2007. Dutch parallel corpus: MT corpus and translator’s aid. In Proceedings of the MT Summit XI. 4565 Joel Martin, Howard Johnson, Benoit Farley, and Anna Maclachlan. 2003. Aligning and using an EnglishInuktitut parallel corpus. In HLT-NAACL 2003 Workshop: Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, Edmonton, Alberta, Canada. Association for Computational Linguistics. Robert C Moore. 2002. Fast and accurate sentence alignment of bilingual corpora. In Conference of the Association for Machine Translation in the Americas, pages 135–144. Springer. Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4). Vassilis Papavassiliou, Prokopis Prokopidis, and Stelios Piperidis. 2016. The ilsp/arc submission to the wmt 2016 bilingual document alignment shared task. In Proceedings of the First Conference on Machine Translation, pages 733–739, Berlin, Germany. Association for Computational Linguistics. Zuzanna Parcheta, Germ´an Sanchis-Trilles, and Francisco Casacuberta. 2019. Filtering of noisy parallel corpora based on hypothesis generation. In Proceedings of the Fourth Conference on Machine Translation (WMT). Alexandre Rafalovitch and Robert Dale. 2009. United Nations General Assembly resolutions: A sixlanguage parallel corpus. In Proceedings of the Twelfth Machine Translation Summit (MT Summit XII). International Association for Machine Translation. Spencer Rarrick, Chris Quirk, and Will Lewis. 2011. MT detection in web-scraped parallel corpora. In Proceedings of the 13th Machine Translation Summit (MT Summit XIII), pages 422–430. International Association for Machine Translation. Philip Resnik. 1999. Mining the web for bilingual text. In Proceedings of the 37th Annual Meeting of the Association of Computational Linguistics (ACL). Philip Resnik and Noah A Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349–380. Nick Rossenbach, Jan Rosendahl, Yunsu Kim, Miguel Grac¸a, Aman Gokrani, and Hermann Ney. 2018. The RWTH Aachen University filtering system for the WMT 2018 parallel corpus filtering task. In Proceedings of the Third Conference on Machine Translation, Belgium, Brussels. Association for Computational Linguistics. V´ıctor M. S´anchez-Cartagena, Marta Ba˜n´on, Sergio Ortiz Rojas, and Gema Ram´ırez. 2018. Prompsit’s submission to wmt 2018 parallel corpus filtering shared task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 955–962, Belgium, Brussels. Association for Computational Linguistics. Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm´an. 2019. Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia. CoRR, abs/1907.05791. Sukanta Sen, Asif Ekbal, and Pushpak Bhattacharyya. 2019. Parallel corpus filtering based on fuzzy string matching. In Proceedings of the Fourth Conference on Machine Translation (WMT). Rico Sennrich and Martin Volk. 2010. MT-based sentence alignment for OCR-generated parallel texts. In The Ninth Conference of the Association for Machine Translation in the Americas (AMTA 2010). Rico Sennrich and Martin Volk. 2011. Iterative, MTbased sentence alignment of parallel texts. In Proceedings of the 18th Nordic Conference of Computational Linguistics (NODALIDA 2011), pages 175– 182. Lei Shi, Cheng Niu, Ming Zhou, and Jianfeng Gao. 2006. A dom tree alignment model for mining parallel data from the web. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 489–496. Association for Computational Linguistics. Felipe Soares and Marta R. Costa-juss`a. 2019. Unsupervised corpus filtering and mining. In Proceedings of the Fourth Conference on Machine Translation (WMT). Wolfgang T¨ager. 2011. The sentence-aligned european patent corpus. In Proceedings of the 15th International Conference of the European Association for Machine Translation (EAMT), pages 177–184. Brian Thompson and Philipp Koehn. 2019. Vecalign: Improved sentence alignment in linear time and space. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Hong Kong. Association for Computational Linguistics. J¨org Tiedemann. 2011. Bitext Alignment. Synthesis Lectures on Human Language Technologies. Morgan & Claypool, San Rafael, CA. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2214–2218, Istanbul, Turkey. European Language Resources Association (ELRA). ACL Anthology Identifier: L121246. Masao Uchiyama and Hitoshi Isahara. 2007. A Japanese-English patent parallel corpus. In Proceedings of the MT Summit XI. Jakob Uszkoreit, Jay Ponte, Ashok Popat, and Moshe Dubiner. 2010. Large scale parallel document mining for machine translation. In Proceedings of the 4566 23rd International Conference on Computational Linguistics (Coling 2010), pages 1101–1109, Beijing, China. Coling 2010 Organizing Committee. Masao Utiyama and Hitoshi Isahara. 2003. Reliable measures for aligning Japanese-English news articles and sentences. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 72–79. D´aniel Varga, P´eter Hala´acsy, Andr´as Kornai, Voktor Nagy, L´aszl´o N´emeth, and Viktor Tr´on. 2005. Parallel corpora for medium density languages. In Proceedings of the RANLP 2005 Conference, pages 590–596. Ashish Venugopal, Jakob Uszkoreit, David Talbot, Franz Och, and Juri Ganitkevitch. 2011. Watermarking the outputs of structured prediction with an application in statistical machine translation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1363–1372, Edinburgh, Scotland, UK. Association for Computational Linguistics. Hainan Xu and Philipp Koehn. 2017. Zipporah: a fast and scalable data cleaning system for noisy webcrawled parallel corpora. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2935–2940. Association for Computational Linguistics. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research. Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2015. The united nations parallel corpus v1.0. In International Conference on Language Resources and Evaluation (LREC). 4567 Appendix: Detailed Sentence Alignment and Filtering Results German 10m 20m 50m 70m 100m 150m 200m Hunalign/Zipporah 29.9 32.1 33.8 34.3 34.4 34.1 33.6 Hunalign/Bicleaner 27.2 30.6 34.0 34.2 35.1 33.7 34.6 Hunalign/Laser 32.3 34.6 35.7 35.8 36.0 35.3 34.4 Vecalign/Zipporah 30.2 32.6 34.3 34.6 34.5 34.0 32.8 Vecalign/Bicleaner 28.1 31.7 34.3 35.0 35.4 35.8 35.1 Vecalign/Laser 32.4 34.4 36.3 36.1 36.1 35.9 34.7 Bleualign(NMT)/Bicleaner 27.9 30.9 34.5 34.7 35.0 34.6 33.1 Czech 10m 15m 20m 30m 50m 70m 100m Hunalign/Zipporah 18.5 19.1 19.0 18.6 17.8 15.8 14.3 Hunalign/Bicleaner 16.2 17.7 18.7 20.2 21.0 20.9 19.1 Hunalign/Laser 20.6 21.6 21.8 22.2 21.0 20.7 19.6 Vecalign/Zipporah 19.2 20.1 20.9 21.4 21.3 20.5 19.7 Vecalign/Bicleaner 16.5 18.1 19.3 20.3 21.2 21.1 19.8 Vecalign/Laser 21.1 21.6 21.9 22.2 21.8 20.9 20.0 Bleualign(NMT)/Bicleaner 18.0 19.3 20.5 21.0 20.5 18.3 17.6 Bleualign(SMT)/Bicleaner 13.2 14.5 15.4 16.3 18.0 19.0 19.6 Hungarian 5m 7m 10m 15m 20m 30m 50m Hunalign/Zipporah 15.4 15.9 16.2 15.3 15.0 13.9 12.8 Hunalign/Bicleaner 12.3 13.2 14.8 15.8 16.3 16.5 12.4 Hunalign/Laser 16.2 16.7 17.2 16.9 16.8 15.9 14.6 Vecalign/Zipporah 15.4 16.0 16.7 16.9 15.2 14.1 12.2 Vecalign/Bicleaner 12.4 13.8 14.0 16.1 16.8 16.8 13.4 Vecalign/Laser 16.3 16.9 17.0 17.2 17.1 16.7 15.6 Bleualign(NMT)/Bicleaner 14.0 15.2 16.2 16.6 16.2 14.6 14.7 Bleualign(SMT)/Bicleaner 7.3 9.0 10.1 11.9 13.1 14.2 14.2 Estonian 5m 7m 10m 15m 20m 30m 50m 70m Hunalign/Zipporah 18.3 19.4 20.6 21.2 21.0 20.6 18.4 15.6 Hunalign/Bicleaner 17.2 18.0 19.7 20.9 21.8 21.0 17.8 15.1 Hunalign/Laser 19.6 20.5 21.2 22.1 21.9 20.7 18.4 18.1 Vecalign/Zipporah 18.7 19.7 20.4 21.3 21.3 21.3 17.3 15.5 Vecalign/Bicleaner 17.1 18.3 19.8 20.9 21.6 21.5 18.3 15.6 Vecalign/Laser 19.5 20.6 21.7 22.4 22.9 21.6 18.6 18.5 Bleualign(NMT)/Bicleaner 17.2 19.0 19.8 21.3 21.4 19.4 19.4 19.3 Bleualign(SMT)/Bicleaner 15.5 16.5 18.1 19.9 19.5 15.0 11.9 11.0 Maltese 1m 1.5m 2m 3m 5m 7m 10m Hunalign/Zipporah 29.3 29.9 31.6 32.6 32.8 31.6 32.3 Hunalign/Bicleaner 29.0 30.1 30.1 31.8 32.7 33.5 31.3 Hunalign/Laserzero shot 29.0 30.2 30.7 31.9 32.6 32.6 32.1 Vecalign/Zipporah 27.0 31.9 32.5 33.5 33.8 33.0 32.0 Vecalign/Bicleaner 29.1 30.0 30.7 32.5 33.1 34.1 33.2 Vecalign/Laserzero shot 26.2 27.6 27.8 21.1 24.6 30.2 24.8 Bleualign(NMT)/Bicleaner 28.0 29.4 30.3 28.3 29.5 29.6 29.6 Bleualign(SMT)/Bicleaner 27.5 28.9 30.1 30.3 30.4 29.0 28.5
2020
417
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4568–4595 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4568 Toward Gender-Inclusive Coreference Resolution Yang Trista Cao University of Maryland [email protected] Hal Daum´e III University of Maryland Microsoft Research [email protected] Abstract Correctly resolving textual mentions of people fundamentally entails making inferences about those people. Such inferences raise the risk of systemic biases in coreference resolution systems, including biases that can harm binary and non-binary trans and cis stakeholders. To better understand such biases, we foreground nuanced conceptualizations of gender from sociology and sociolinguistics, and develop two new datasets for interrogating bias in crowd annotations and in existing coreference resolution systems. Through these studies, conducted on English text, we confirm that without acknowledging and building systems that recognize the complexity of gender, we build systems that lead to many potential harms. 1 Introduction Coreference resolution—the task of determining which textual references resolve to the same realworld entity—requires making inferences about those entities. Especially when those entities are people, coreference resolution systems run the risk of making unlicensed inferences, possibly resulting in harms either to individuals or groups of people. Embedded in coreference inferences are varied aspects of gender, both because gender can show up explicitly (e.g., pronouns in English, morphology in Arabic) and because societal expectations and stereotypes around gender roles may be explicitly or implicitly assumed by speakers or listeners. This can lead to significant biases in coreference resolution systems: cases where systems “systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others” (Friedman and Nissenbaum, 1996, p. 332). Gender bias in coreference resolution can manifest in many ways; work by Rudinger et al. (2018), Zhao et al. (2018a), and Webster et al. (2018) focused largely on the case of binary gender discrimination in trained coreference systems, showing that current systems over-rely on social stereotypes when resolving HE and SHE pronouns1 (see §2). Contemporaneously, critical work in HumanComputer Interaction has complicated discussions around gender in other fields, such as computer vision (Keyes, 2018; Hamidi et al., 2018). Building on both lines of work, and inspired by Keyes’s (2018) study of vision-based automatic gender recognition systems, we consider gender bias from a broader conceptual frame than the binary “folk” model. We investigate ways in which folk notions of gender—namely that there are two genders, assigned at birth, immutable, and in perfect correspondence to gendered linguistic forms— lead to the development of technology that is exclusionary and harmful of binary and non-binary trans and cis people.2 Addressing such issues is critical not just to improve the quality of our systems, but more pointedly to minimize the harms caused by our systems by reinforcing existing unjust social hierarchies (Lambert and Packer, 2019). There are several stakeholder groups who may easily face harms when coreference systems is used (Blodgett et al., 2020). Those harms includes several possible harms, both allocational and representation harms (Barocas et al., 2017), including quality of service, erasure, and stereotyping harms. Following Bender’s (2019) taxonomy of stakehold1Throughout, we avoid mapping pronouns to a “gender” label, preferring to use the pronoun directly, include (in English) SHE, HE, the non-binary use of singular THEY, and neopronouns (e.g., ZE/HIR, XEY/XEM), which have been in usage since at least the 1970s (Bustillos, 2011; Merriam-Webster, 2016; Bradley et al., 2019; Hord, 2016; Spivak, 1997). 2Following GLAAD (2007), transgender individuals are those whose gender differs from the sex they were assigned at birth. This is in opposition to cisgender individuals, whose assigned sex at birth happens to correspond to their gender. Transgender individuals can either be binary (those whose gender falls in the “male/female” dichotomy) or non-binary (those for which the relationship is more complex). 4569 ers and Barocas et al.’s (2017) taxonomy of harms, there are several ways in which trans exclusionary coreference resolution systems can cause harm: ⋄Indirect: subject of query. If a person is the subject of a web query, pages about xem may be missed if “multiple mentions of query” is a ranking feature, and the system cannot resolve xyr pronouns ⇒quality of service, erasure. ⋄Direct: by choice. If a grammar checker uses coreference, it may insist that an author writing hir third-person autobiography is repeatedly making errors when referring to hirself ⇒quality of service, stereotyping, denigration. ⋄Direct: not by choice. If an information extraction system run on r´esum´es relies on cisnormative assumptions, job experiences by a candidate who has transitioned and changed his pronouns may be missed ⇒allocative, erasure. ⋄Many stakeholders. If a machine translation system uses discourse context to generate pronouns, then errors can results in directly misgendering subjects of the document being translated ⇒ quality of service, denigration, erasure. To address such harms as well as understand where and how they arise, we need to complicate (a) what “gender” means and (b) how harms can enter into natural language processing (NLP) systems. Toward (a), we begin with a unifying analysis (§3) of how gender is socially constructed, and how social conditions in the world impose expectations around people’s gender. Of particular interest is how gender is reflected in language, and how that both matches and potentially mismatches the way people experience their gender in the world. Then, in order to understand social biases around gender, we find it necessary to consider the different ways in which gender can be realized linguistically, breaking down what previously have been considered “gendered words” in NLP papers into finer-grained categories that have been identified in the sociolinguistics literature of lexical, referential, grammatical, and social gender. Toward (b), we focus on how bias can enter into two stages of machine learning systems: data annotation (§ 4) and model definition (§ 5). We construct two new datasets: (1) MAP (a similar dataset to GAP (Webster et al., 2018) but without binary gender constraints) on which we can perform counterfactual manipulations and (2) GICoref (a fully annotated coreference resolution dataset written by and about trans people).3 In all cases, we focus largely on harms due to over- and underrepresentation (Kay et al., 2015), replicating stereotypes (Sweeney, 2013; Caliskan et al., 2017) (particular those that are cisnormative and/or heteronormative), and quality of service differentials (Buolamwini and Gebru, 2018). The primary contributions of this paper are: (1) Connecting existing work on gender bias in NLP to sociological and sociolinguistic conceptions of gender to provide a scaffolding for future work on analyzing “gender bias in NLP” (§3). (2) Developing an ablation technique for measuring gender bias in coreference resolution annotations, focusing on the human bias that can enter into annotation tasks (§4). (3) Constructing a new dataset, the Gender Inclusive Coreference dataset (GICOREF), for testing performance of coreference resolution systems on texts that discuss non-binary and binary transgender people (§5). 2 Related Work There are four recent papers that consider gender bias in coreference resolution systems. Rudinger et al. (2018) evaluates coreference systems for evidence of occupational stereotyping, by constructing Winograd-esque (Levesque et al., 2012) test examples. They find that humans can reliably resolve these examples, but systems largely fail at them, typically in a gender-stereotypical way. In contemporaneous work, Zhao et al. (2018a) proposed a very similar, also Winograd-esque scheme, also for measuring gender-based occupational stereotypes. In addition to reaching similar conclusions to Rudinger et al. (2018), this work also used a similar “counterfactual” data process as we use in §4.1 in order to provide additional training data to a coreference resolution system. Webster et al. (2018) produced the GAP dataset for evaluating coreference systems, by specifically seeking examples where “gender” (left underspecified) could not be used to help coreference. They found that coreference systems struggle in these cases, also pointing to the fact that some success of current coreference systems is due to reliance on (binary) gender stereotypes. Finally, Ackerman (2019) presents an alternative breakdown of gender than we use (§ 3), and proposes matching criteria for model3Both datasets are released under a BSD license at github.com/TristaCao/into inclusivecoref with corresponding datasheets (Gebru et al., 2018). 4570 ing coreference resolution linguistically, taking a trans-inclusive perspective on gender. Gender bias in NLP has been considered more broadly than just in coreference resolution, including, natural language inference (Rudinger et al., 2017), word embeddings (e.g., Bolukbasi et al., 2016; Romanov et al., 2019; Gonen and Goldberg, 2019), sentiment analysis (Kiritchenko and Mohammad, 2018), machine translation (Font and Costa-juss`a, 2019; Prates et al., 2019; Dryer, 2013; Frank et al., 2004; Wandruszka, 1969; Nissen, 2002; Doleschal and Schmid, 2001), among many others (Blodgett et al., 2020, inter alia). Gender is also an object of study in gender recognition systems (Hamidi et al., 2018). Much of this work has focused on gender bias with a (usually implicit) binary lens, an issue which was also called out recently by Larson (2017b) and May (2019). 3 Linguistic & Social Gender The concept of gender is complex and contested, covering (at least) aspects of a person’s internal experience, how they express this to the world, how social conditions in the world impose expectations on them (including expectations around their sexuality), and how they are perceived and accepted (or not). When this complex concept is realized in language, the situation becomes even more complex: linguistic categories of gender do not even remotely map one-to-one to social categories. As observed by Bucholtz (1999): “Attempts to read linguistic structure directly for information about social gender are often misguided.” For instance, when working in a language like English which formally marks gender on pronouns, it is all too easy to equate “recognizing the pronoun that corefers with this name” with “recognizing the real-world gender of referent of that name.” Furthermore, despite the impossibility of a perfect alignment with linguistic gender, it is generally clear that an incorrectly gendered reference to a person (whether through pronominalization or otherwise) can be highly problematic (Johnson et al., 2019; McLemore, 2015). This process of misgendering is problematic for both trans and cis individuals to the extent that transgender historian Stryker (2008) writes: “[o]ne’s gender identity could perhaps best be described as how one feels about being referred to by a particular pronoun.” 3.1 Sociological Gender Many modern trans-inclusive models of gender recognize that gender encompasses many different aspects. These aspects include the experience that one has of gender (or lack thereof), the way that one expresses one’s gender to the world, and the way that normative social conditions impose gender norms, typically as a dichotomy between masculine and feminine roles or traits (Kramarae and Treichler, 1985; West and Zimmerman, 1987; Butler, 1990; Risman, 2009; Serano, 2007). Gender selfdetermination, on the other hand, holds that each person is the “ultimate authority” on their own gender identity (Zimman, 2019; Stanley, 2014), with Zimman (2019) further arguing the importance of the role language plays in that determination. Such trans-inclusive models deconflate anatomical and biological traits and the sex that a person had assigned to them at birth from one’s gendered position in society; this includes intersex people, whose anatomical/biological factors do not match the usual designational criteria for either sex. Transinclusive views typically recognize that gender exists beyond the regressive “female”/“male” binary4; additionally, one’s gender may shift by time or context (often “genderfluid”), and some people do not experience gender at all (often “agender”) (Kessler and McKenna, 1978; Schilt and Westbrook, 2009; Darwin, 2017; Richards et al., 2017). In §5 we analyze the degree to which NLP papers make transinclusive or trans-exclusive assumptions. Social gender refers to the imposition of gender roles or traits based on normative social conditions (Kramarae and Treichler, 1985), which often includes imposing a dichotomy between feminine and masculine (in behavior, dress, speech, occupation, societal roles, etc.). Ackerman (2019) highlights a highly overlapping concept, “bio-social gender”, which consists of gender role, gender expression, and gender identity. Taking gender role as an example, upon learning that a nurse is coming to their hospital room, a patient may form expectations that this person is likely to be “female,” and may generate expectations around how their face or body may look, how they are likely to be dressed, how and where hair may appear, how to refer to them, and so on. This process, often referred to as gendering (Serano, 2007) occurs both in real world 4Some authors use female/male for sex and woman/man for gender; we do not need this distinction (which is itself contestable) and use female/male for gender. 4571 interactions, as well as in purely linguistic settings (e.g., reading a newspaper), in which readers may use social gender clues to assign gender(s) to the real world people being discussed. 3.2 Linguistic Gender Our discussion of linguistic gender largely follows (Corbett, 1991; Ochs, 1992; Craig, 1994; Corbett, 2013; Hellinger and Motschenbacher, 2015; Fuertes-Olivera, 2007), departing from earlier characterizations that postulate a direct mapping from language to gender (Lakoff, 1975; Silverstein, 1979). Our taxonomy is related but not identical to (Ackerman, 2019), which we discuss in §2. Grammatical gender, similarly defined in Ackerman (2019), is nothing more than a classification of nouns based on a principle of grammatical agreement. In “gender languages” there are typically two or three grammatical genders that have, for animate or personal references, considerable correspondence between a FEM (resp. MASC) grammatical gender and referents with female- (resp. male-)5 social gender. In comparison, “noun class languages” have no such correspondence, and typically many more classes. Some languages have no grammatical gender at all; English is generally seen as one (Nissen, 2002; Baron, 1971) (though this is contested (Bjorkman, 2017)). Referential gender (similar, but not identical to Ackerman’s (2019) “conceptual gender”) relates linguistic expressions to extra-linguistic reality, typically identifying referents as “female,” “male,” or “gender-indefinite.” Fundamentally, referential gender only exists when there is an entity being referred to, and their gender (or sex) is realized linguistically. The most obvious examples in English are gendered third person pronouns (SHE, HE), including neopronouns (ZE, EM) and singular THEY6, but also includes cases like “policeman” when the intended referent of this noun has social gender “male” (though not when “policeman” is used non-referentially, as in “every policeman needs to hold others accountable”). Lexical gender refers to an extra-linguistic properties of female-ness or male-ness in a nonreferential way, as in terms like “mother” as well 5One difficulty in this discussion is that linguistic gender and social gender use the terms “feminine” and “masculine” differently; to avoid confusion, when referring to the linguistic properties, we use FEM and MASC. 6People’s mental acceptability of singular THEY is still relatively low even with its increased usage (Prasad and Morris, 2020), and depends on context (Conrod, 2018). as gendered terms of address like “Mrs.” Importantly, lexical gender is a property of the linguistic unit, not a property of its referent in the real world, which may or may not exist. For instance, in “Every son loves his parents”, there is no real world referent of “son” (and therefore no referential gender), yet it still (likely) takes HIS as a pronoun anaphor because “son” has lexical gender MASC. 3.3 Social and Linguistic Gender Interplays The relationship between these aspects of gender is complex, and none is one-to-one. The referential gender of an individual (e.g., pronouns in English) may or may not match their social gender and this may change by context. This can happen in the case of people whose everyday life experience of their gender fluctuates over time (at any interval), as well as in the case of drag performers (e.g., some men who perform drag are addressed as SHE while performing, and HE when not (for Transgender Equality, 2017)). The other linguistic forms of gender (grammatical, lexical) also need not match each other, nor match referential gender (Hellinger and Motschenbacher, 2015). Social gender (societal expectations, in particular) captures the observation that upon hearing “My cousin is a librarian”, many speakers will infer “female” for “cousin”, because of either an entailment of “librarian” or some sort of probabilistic inference (Lyons, 1977), but not based on either grammatical gender (which does not exist in English) or lexical gender. We focus on English, which has no grammatical gender, but does have lexical gender. English also marks referential gender on singular third person pronouns. Below, we use this more nuanced notion of different types of gender to inspect how bias play out in coreference resolution systems. These biases may arise in the context of any of these notions of gender, and we encourage future work to extend care over and be explicit about what notions of gender are being utilized and when. 4 Bias in Human Annotation A possible source of bias in coreference systems comes from human annotations on the data used to train them. Such biases can arise from a combination of (possibly) underspecified annotations guidelines and the positionality of annotators themselves. In this section, we study how different aspects of linguistic notions impact an annotator’s 4572 Mrs. (d) −−→/0 Rebekah Johnson Bobbitt (b) −−→M. Booth was the younger sister (c) −→sibling of Lyndon B. Johnson (b) −−→T. Schneider, 36th President of the United States. Born in 1910 in Stonewall, Texas, she (a) −−→they worked in the cataloging department of the Library of Congress in the 1930s before her (a) −−→their brother (c) −→sibling entered politics. Figure 1: Example of applying all ablation substitutions for an example context in the MAP corpus. Each substitution type is marked over the arrow and separately color-coded. judgments of anaphora. This parallels Ackerman (2019) linguistic analysis, in which a Broad Matching Criterion is proposed, which posits that “matching gender requires at least one level of the mental representation of gender to be identical to the candidate antecedent in order to match.” Our study can be seen as evaluating which conceptual properties of gender are most salient in human judgments. We start with natural text in which we can cast the coreference task as a binary classification problem (“which of these two names does this pronoun refer to?”) inspired by Webster et al. (2018). We then generate “counterfactual augmentations” of this dataset by ablating the various notions of linguistic gender described in §3.2, similar to Zmigrod et al. (2019). We finally evaluate the impact of these ablations on human annotation behavior to answer the question: which forms of linguistic knowledge are most essential for human annotators to make consistent judgments. See Appendix A for examples of how linguistic gender may be used to infer social gender. 4.1 Ablation Methodology In order to determine which cues annotators are using and the degree to which they use them, we construct an ablation study in which we hide various aspects of gender and evaluate how this impacts annotators’ judgments of anaphoricity. We construct binary classification examples taken from Wikipedia pages, in which a single pronoun is selected, and two possible antecedent names are given, and the annotator must select which one. We cannot use Webster et al.’s GAP dataset directly, because their data is constrained that the “gender” of the two possible antecedents is “the same”7; for us, we are specifically interested in how annotators make decisions even when additional gender information is available. Thus, we construct a dataset called Maybe Ambiguous Pronoun (MAP) follow7It is unclear from the GAP dataset what notion of “gender” is used, nor how it was determined to be “the same.” ing Webster et al.’s approach, but we do not restrict the two names to match gender. In ablating gender information, one challenge is that removing social gender cues (e.g., “nurse” tending female) is not possible because they can exist anywhere. Likewise, it is not possible to remove syntactic cues in a non-circular manner. For example in (1), syntactic structure strongly suggests the antecedent of “herself” is “Liang”, making it less likely that “He” corefers with Liang later (though it is possible, and such cases exist in natural data due either to genderfluidity or misgendering). (1) Liang saw herself in the mirror. . .He. . . Fortunately, it is possible to enumerate a high coverage list of English terms that signal lexical gender: terms of address (Mrs., Mr.) and semantically gendered nouns (mother).8 We assembled a list by taking many online lists (mostly targeted at English language learners), merging them, and manual filtering. The assembling process and the final list is published with the MAP dataset and its datasheet. To execute the “hiding” of various aspects of gender, we use the following substitutions: (a) ¬PRO: Replace third person pronouns with gender neutral variants (THEY, XEY, ZE). (b) ¬NAME: Replace names by random names with only a first initial and last name. (c) ¬SEM: Replace semantically gendered nouns with gender-indefinite variants. (d) ¬ADDR: Remove terms of address.9 See Figure 1 for an example of all substitutions. We perform two sets of experiments, one following a “forward selection” type ablation (start with everything removed and add each back in oneat-a-time) and one following “backward selection” (remove each separately). Forward selection is necessary in order to de-conflate syntactic cues from 8These are, however, sometimes complex. For instance, “actress” signals lexical gender of female, while “actor” may signal social gender of male and, in certain varieties of English, may also signal lexical gender of male. 9An alternative suggested by Cassidy Henry that we did not explore would be to replace all with Mx. or Dr. 4573 stereotypes; while backward selection gives a sense of how much impact each type of gender cue has in the context of all the others. We begin with ZERO, in which we apply all four substitutions. Since this also removes gender cues from the pronouns themselves, an annotator cannot substantially rely on social gender to perform these resolutions. We next consider adding back in the original pronouns (always HE or SHE here), yielding ¬NAME ¬SEM ¬ADDR. Any difference in annotation behavior between ZERO and ¬NAME ¬SEM ¬ADDR can only be due to social gender stereotypes. The next setting, ¬SEM ¬ADDR removes both forms of lexical gender (semantically gendered nouns and terms of address); differences between ¬SEM ¬ADDR and ¬NAME ¬SEM ¬ADDR show how much names are relied on for annotation. Similarly, ¬NAME ¬ADDR removes names and terms of address, showing the impact of semantically gendered nouns, and ¬NAME ¬SEM removes names and semantically gendered nouns, showing the impact of terms of address. In the backward selection case, we begin with ORIG, which is the unmodified original text. To this, we can apply the pronoun filter to get ¬PRO; differences in annotation between ORIG and ¬PRO give a measure of how much any sort of genderbased inference is used. Similarly, we get ¬NAME by only removing names, which gives a measure of how much names are used (in the context of all other cues); we get ¬SEM by only removing semantically gendered words; and ¬ADDR by only removing terms of address. 4.2 Annotation Results We construct examples using the methodology defined above. We then conduct annotation experiments using crowdworkers on Amazon Mechanical Turk following the methodology by which the original GAP corpus was created10. Because we wanted to also capture uncertainty, we ask the crowdworkers how sure they are in their choices, between “definitely” sure, “probably” sure and “unsure.” Figure 2 shows the human annotation results as binary classification accuracy for resolving the pronoun to the antecedent. We can see that removing pronouns leads to significant drop in accuracy. This indicates that gender-based inferences, especially social gender stereotypes, play the most significant 10Our study was approved by the Microsoft Research Ethics Board. Workers were paid $1 to annotate ten contexts (the average annotation time was seven minutes). Figure 2: Human annotation results for the ablation study on MAP dataset. Each column is a different ablation, and the y-axis is the degree of accuracy with 95% significance intervals. Bottom bar plots are annotator certainties as how sure they are in their choices. role when annotators resolve coreferences. This confirms the findings of Rudinger et al. (2018) and Zhao et al. (2018a) that human annotated data incorporates bias from stereotypes. Moreover, if we compare ORIG with columns left to it, we see that name is another significant cue for annotator judgments, while lexical gender cues do not have significant impacts on human annotation accuracies. This is likely in part due to the low appearance frequency of lexical gender cues in our dataset. Every example has pronouns and names, whereas 49% of the examples have semantically gendered nouns but only 3% of the examples include terms of address. We also note that if we compare ¬NAME ¬SEM ¬ADDR to ¬SEM ¬ADDR and ¬NAME ¬ADDR, accuracy drops when removing gender cues. Though the differences are not statistically significant, we did not expect the accuracy drop. Finally, we find annotators’ certainty values follow the same trend as the accuracy: annotators have a reasonable sense of when they are unsure. We also note that accuracy score are essentially the same for ZERO and ¬PRO, which suggests that once explicit binary gender is gone from pronouns, the impact of any other form of linguistic gender in annotator’s decisions is also removed. 5 Bias in Model Specifications In addition to biases that can arise from the data that a system is trained on, as studied in the previ4574 ous section, bias can also come from how models are structured. For instance, a system may fail to recognize anything other than a dictionary of fixed pronouns as possible referents to entities. Here, we analyze prior work in models for coreference resolution in three ways. First, we do a literature study to quantify how NLP papers discuss gender. Second, similar to Zhao et al. (2018a) and Rudinger et al. (2018), we evaluate five freely available systems on the ablated data from §4. Third, we evaluate these systems on the dataset we created: Gender Inclusive Coreference (GICOREF). 5.1 Cis-normativity in published NLP papers In our first study, we adapt the approach Keyes (2018) took for analyzing the degree to which computer vision papers encoded trans-exclusive models of gender. In particular, we began with a random sample of ∼150 papers from the ACL anthology that mention the word “gender” and coded them according to the following questions: • Does the paper discuss coreference resolution? • Does the paper study English? • L.G: Does the paper deal with linguistic gender (grammatical gender or gendered pronouns)? • S.G: Does the paper deal with social gender? • L.G̸=S.G: (If yes to L.G and S.G:) Does the paper distinguish linguistic from social gender? • S.G Binary: (If yes to S.G:) Does the paper explicitly or implicitly assume that social gender is binary? • S.G Immutable: (If yes to S.G:) Does the paper explicitly or implicitly assume social gender is immutable? • They/Neo: (If yes to S.G and to English:) Does the paper explicitly consider uses of definite singular “they” or neopronouns? The results of this coding are in Table 1 (the full annotation is in Appendix B). We see out of the 22 coreference papers analyzed, the vast majority conform to a “folk” theory of language: ⋄Only 5.5% distinguish social from linguistic gender (despite it being relevant); ⋄Only 5.6% explicitly model gender as inclusive of non-binary identities; ⋄No papers treat gender as anything other than completely immutable;11 11The most common ways in which papers implicitly assume that social gender is immutable is either 1) by relying on external knowledge bases that map names to “gender”; or 2) by scraping a history of a user’s social media posts or emails and assuming that their “gender” today matches the gender of All Papers Coref Papers L.G? 52.6% (of 150) 95.4% (of 22) S.G? 58.0% (of 150) 86.3% (of 22) L.G̸=S.G? 11.1% (of 27) 5.5% (of 18) S.G Binary? 92.8% (of 84) 94.4% (of 18) S.G Immutable? 94.5% (of 74) 100.0% (of 14) They/Neo? 3.5% (of 56) 7.1% (of 14) Table 1: Analysis of a corpus of 150 NLP papers that mention “gender” along the lines of what assumptions around gender are implicitly or explicitly made. ⋄Only 7.1% (one paper!) considers neopronouns and/or specific singular THEY. The situation for papers not specifically about coreference is similar (the majority of these papers are either purely linguistic papers about grammatical gender in languages other than English, or papers that do “gender recognition” of authors based on their writing; May (2019) discusses the (re)production of gender in automated gender recognition in NLP in much more detail). Overall, the situation more broadly is equally troubling, and generally also fails to escape from the folk theory of gender. In particular, none of the differences are significant at a p = 0.05 level except for the first two questions, due to the small sample size (according to an n−1 chi-squared test). The result is that although we do not know exactly what decisions are baked in to all systems, the vast majority in our study (including two papers by one of the authors (Daum´e and Marcu, 2005; Orita et al., 2015)) come with strong gender binary assumptions, and exist within a broader sphere of literature which erases non-binary and binary trans identities. 5.2 System performance on MAP Next, we analyze the effect that our different ablation mechanisms have on existing coreference resolutions systems. In particular, we run five coreference resolution systems on our ablated data: the AI2 system (AI2; Gardner et al., 2017), hugging face (HF; Wolf, 2017), which is a neural system based on spacy, and the Stanford deterministic (SfdD; Raghunathan et al., 2010), statistical (SfdS; Clark and Manning, 2015) and neural (SfdN; Clark and Manning, 2016) systems. Figure 3 shows the results. We can see that the system accuracies mostly follow the same pattern as human accuracy scores, though all are significantly lower than human results. Accuracy scores for systems drop that historical record. 4575 Figure 3: Coreference resolution systems results for the ablation study on MAP dataset. The y-axis is the degree of accuracy with 95% significance intervals. dramatically when we ablate out referential gender in pronouns. This reveals that those coreference resolution systems reply heavily on gender-based inferences. In terms of each systems, HF and SfdN systems have similar results and outperform other systems in most cases. SfdD accuracy drops significantly once names are ablated. These results echo and extend previous observations made by Zhao et al. (2018a), who focus on detecting stereotypes within occupations. They detect gender bias by checking if the system accuracies are the same for cases that can be resolved by syntactic cues and cases that cannot, with original data and reversed-gender data. Similarly, Rudinger et al. (2018) focus on detecting stereotypes within occupations as well. They construct dataset without any gender cues other than stereotypes, and check how systems perform with different pronouns – THEY, SHE, HE. Ideally, they should all perform the same because there is not any gender cues in the sentence. However, they find that systems do not work on “they” and perform better on “he” than “she”. Our analysis breaks this stereotyping down further to detect which aspects of gender signals are most leveraged by current systems. 5.3 System behavior on gender-inclusive data Finally, in order to evaluate current coreference resolution models in gender inclusive contexts we introduce a new dataset, GICOREF. Here we focused on naturally occurring data, but sampled specifically to surface more gender-related phenomena than may be found in, say, the Wall Street Journal. Our new GICOREF dataset consists of 95 docPrecision Recall F1 AI2 40.4% 29.2% 33.9% HF 68.8% 22.3% 33.6% SfdD 50.8% 23.9% 32.5% SfdS 59.8% 24.1% 34.3% SfdN 59.4% 24.0% 34.2% Table 2: LEA scores on GICOREF (incorrect reference excluded) with various coreference resolution systems. Rows are different systems while columns are precision, recall, and F1 scores. When evaluate, we only count exact matches of pronouns and name entities. uments from three types of sources: articles from English Wikipedia about people with non-binary gender identities, articles from LGBTQ periodicals, and fan-fiction stories from Archive Of Our Own (with the respective author’s permission)12. These documents were each annotated by both of the authors and adjudicated.13 This data includes many examples of people who use pronouns other than SHE or HE (the dataset contains 27% HE, 20% SHE, 35% THEY, and 18% neopronouns, people who are genderfluid and whose names or pronouns change through the article, people who are misgendered, and people in relationships that are not heteronormative. In addition, incorrect references (misgendering and deadnaming14) are explicitly annotated.15 Two example annotated documents, one from Wikipedia, and one from Archive of Our Own, are provided in Appendix C and Appendix D. We run the same systems as before on this dataset. Table 2 reports results according the standard coreference resolution evaluation metric LEA (Moosavi and Strube, 2016). Since no systems are implemented to explicitly mark incorrect references, and no current evaluation metrics address this case, we perform the same evaluation twice. One with incorrect references included as regular references in the ground truth; and other with incorrect references excluded. Due to the limited number of incorrect references in the dataset, the 12See https://archiveofourown.org; thanks to Os Keyes for this suggestion. 13We evaluate inter-annotator agreement by treating one annotation as gold standard and the other as system output and computing the LEA metric; the resulting F1-score is 92%. During the adjudication process we found that most of the disagreement are due to one of the authors missing/overlooking mentions, and rarely due to true “disagreement.” 14According to Clements (2017) deadnaming occurs when someone, intentionally or not, refers to a person who’s transgender by the name they used before they transitioned. 15Thanks to an anonymous reader of a draft version of this paper for this suggestion. 4576 difference of the results are not significant. Here we only report the latter. The first observation is that there is still plenty room for coreference systems to improve; the best performing system achieves as F1 score of 34%, but the Stanford neural system’s F1 score on CoNLL2012 test set reaches 60% (Moosavi, 2020). Additionally, we can see system precision dominates recall. This is likely partially due to poor recall of pronouns other than HE and SHE. To analyze this, we compute the recall of each system for finding referential pronouns at all, regardless of whether they are correctly linked to their antecedents. We find that all systems achieve a recall of at least 95% for binary pronouns, a recall of around 90% on average for THEY, and a recall of around a paltry 13% for neopronouns (two systems—Stanford deterministic and Stanford neural—never identify any neopronouns at all). 6 Discussion and Moving Forward Our goal in this paper was to analyze how gender bias exist in coreference resolution annotations and models, with a particular focus on how it may fail to adequately process text involving binary and non-binary trans referents. We thus created two datasets: MAP and GICOREF. Both datasets show significant gaps in system performance, but perhaps moreso, show that taking crowdworker judgments as “gold standard” can be problematic. It may be the case that to truly build gender inclusive datasets and systems, we need to hire or consult experiential experts (Patton et al., 2019; Young et al., 2019). Moreover, although we studied crowdworkers on Mechanical Turk (because they are often employed as annotators for NLP resources), if other populations are used for annotation, it becomes important to consider their positionality and how that may impact annotations. This echoes a related finding in annotation of hate-speech that annotator positionality matters (Olteanu et al., 2019). More broadly, we found that trans-exclusionary assumptions around gender in NLP papers is made commonly (and implicitly), a practice that we hope to see change in the future because it fundamentally limits the applicability of NLP systems. The primary limitation of our study and analysis is that it is limited to English. This is particularly limiting because English lacks a grammatical gender system, and some extensions of our work to languages with grammatical gender are non-trivial. We also emphasize that while we endeavored to be inclusive, our own positionality has undoubtedly led to other biases. One in particular is a largely Western bias, both in terms of what models of gender we use and also in terms of the data we annotated. We have attempted to partially compensate for this bias by intentionally including documents with non-Western non-binary expressions of gender in the GICoref dataset16, but the dataset nonetheless remains Western-dominant. Additionally, our ability to collect naturally occurring data was limited because many sources simply do not yet permit (or have only recently permitted) the use of gender inclusive language in their articles. This led us to counterfactual text manipulation, which, while useful, is essentially impossible to do flawlessly. Moreover, our ability to evaluate coreference systems with data that includes incorrect references was limited as well, because current systems do not mark any forms of misgendering or deadnaming explicitly, and current metrics do not take this into account. Finally, because the social construct of gender is fundamentally contested, some of our results may apply only under some frameworks. We hope this paper can serve as a roadmap for future studies. In particular, the gender taxonomy we presented, while not novel, is (to our knowledge) previously unattested in discussions around gender bias in NLP systems; we hope future work in this area can draw on these ideas. We also hope that developers of datasets or systems can use some of our analysis as inspiration for how one can attempt to measure—and then root out—different forms of bias in coreference resolution systems and NLP systems more broadly. Acknowledgments The authors are grateful to a number of people who have provided pointers, edits, suggestions, and annotation facilities to improve this work: Lauren Ackerman, Cassidy Henry, Os Keyes, Chandler May, Hanyu Wang, and Marion Zepf, all contributed to various aspects of this work, including suggestions for data sources for the GI Coref dataset. We also thank the CLIP lab at the University of Maryland for comments on previous drafts. 16We endeavored to represent some non-Western gender identies that do not fall into the male/female binary, including people who identify as hijra (Indian subcontinent), phuying (Thailand, sometimes referred to as kathoey), muxe (Oaxaca), two-spirit (Americas), fa’afafine (Samoa) and m¯ah¯u (Hawaii). 4577 References Saleem Abuleil, Khalid Alsamara, and Martha Evens. 2002. Acquisition system for Arabic noun morphology. In Proceedings of the ACL-02 Workshop on Computational Approaches to Semitic Languages, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Lauren Ackerman. 2019. Syntactic and cognitive issues in investigating gendered coreference. Glossa: a journal of general linguistics, 4. Apoorv Agarwal, Jiehan Zheng, Shruti Kamath, Sriramkumar Balasubramanian, and Shirin Ann Dey. 2015. Key female characters in film have more to talk about besides men: Automating the Bechdel test. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 830–840, Denver, Colorado. Association for Computational Linguistics. Tafseer Ahmed Khan. 2014. Automatic acquisition of Urdu nouns (along with gender and irregular plurals). In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 2846–2850, Reykjavik, Iceland. European Language Resources Association (ELRA). Sarah Alkuhlani and Nizar Habash. 2011. A corpus for modeling morpho-syntactic agreement in Arabic: Gender, number and rationality. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 357–362, Portland, Oregon, USA. Association for Computational Linguistics. Sarah Alkuhlani and Nizar Habash. 2012. Identifying broken plurals, irregular gender, and rationality in Arabic text. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 675–685, Avignon, France. Association for Computational Linguistics. Tania Avgustinova and Hans Uszkoreit. 2000. An ontology of systematic relations for a shared grammar of Slavic. In COLING 2000 Volume 1: The 18th International Conference on Computational Linguistics. Bogdan Babych, Jonathan Geiger, Mireia Ginest´ı Rosell, and Kurt Eberle. 2014. Deriving de/het gender classification for Dutch nouns for rule-based MT generation tasks. In Proceedings of the 3rd Workshop on Hybrid Approaches to Machine Translation (HyTra), pages 75–81, Gothenburg, Sweden. Association for Computational Linguistics. Ibrahim Badr, Rabih Zbib, and James Glass. 2008. Segmentation for English-to-Arabic statistical machine translation. In Proceedings of ACL-08: HLT, Short Papers, pages 153–156, Columbus, Ohio. Association for Computational Linguistics. Ibrahim Badr, Rabih Zbib, and James Glass. 2009. Syntactic phrase reordering for English-to-Arabic statistical machine translation. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 86–93, Athens, Greece. Association for Computational Linguistics. R. I. Bainbridge. 1985. Montagovian definite clause grammar. In Second Conference of the European Chapter of the Association for Computational Linguistics, Geneva, Switzerland. Association for Computational Linguistics. Janet Baker, Larry Gillick, and Robert Roth. 1994. Research in large vocabulary continuous speech recognition. In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994. Murali Raghu Babu Balusu, Taha Merghani, and Jacob Eisenstein. 2018. Stylistic variation in social media part-of-speech tagging. In Proceedings of the Second Workshop on Stylistic Variation, pages 11–19, New Orleans. Association for Computational Linguistics. Francesco Barbieri and Jose Camacho-Collados. 2018. How gender and skin tone modifiers affect emoji semantics in twitter. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 101–106, New Orleans, Louisiana. Association for Computational Linguistics. Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The Problem With Bias: Allocative Versus Representational Harms in Machine Learning. In Proceedings of SIGCIS. Naomi S. Baron. 1971. A reanalysis of english grammatical gender. Lingua, 27:113–140. Emily M. Bender. 2019. A typology of ethical risks in language technology with an eye towards where transparent documentation can help. The Future of Artificial Intelligence: Language, Ethics, Technology. Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 33–40, Sydney, Australia. Association for Computational Linguistics. Shane Bergsma, Dekang Lin, and Randy Goebel. 2009. Glen, glenda or glendale: Unsupervised and semisupervised learning of English noun gender. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 120–128, Boulder, Colorado. Association for Computational Linguistics. 4578 Shane Bergsma, Matt Post, and David Yarowsky. 2012. Stylometric analysis of scientific articles. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 327–337, Montr´eal, Canada. Association for Computational Linguistics. Bronwyn M. Bjorkman. 2017. Singular they and the syntactic representation of gender in english. Glossa: A Journal of General Linguistics, 2(1):80. Su Lin Blodgett, Solon Barocas, Hal Daum´e, III, and Hanna Wallach. 2020. Language (technology) is power: The need to be explicit about NLP harms. In Proceedings of the Conference of the Association for Computational Linguistics (ACL). Ondˇrej Bojar, Rudolf Rosa, and Aleˇs Tamchyna. 2013. Chimera – three heads for English-to-Czech translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 92–98, Sofia, Bulgaria. Association for Computational Linguistics. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Proceedings of NeurIPS. Constantinos Boulis and Mari Ostendorf. 2005. A quantitative analysis of lexical differences between genders in telephone conversations. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 435– 442, Ann Arbor, Michigan. Association for Computational Linguistics. Evan Bradley, Julia Salkind, Ally Moore, and SofiTeitsort. 2019. Singular ‘they’ and novel pronouns: gender-neutral, nonbinary, or both? Proceedings of the Linguistic Society of America, 4(1):36–1–7. Mary Bucholtz. 1999. Gender. Journal of Linguistic Anthropology. Special issue: Lexicon for the New Millennium, ed. Alessandro Duranti. Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on twitter. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1301–1309, Edinburgh, Scotland, UK. Association for Computational Linguistics. Felix Burkhardt, Martin Eckert, Wiebke Johannsen, and Joachim Stegmann. 2010. A database of age and gender annotated telephone speech. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta. European Language Resources Association (ELRA). Maria Bustillos. 2011. Our desperate, 250-year-long search for a gender-neutral pronoun. permalink. Judith Butler. 1990. Gender Trouble. Routledge. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334). Michael Carl, Sandrine Garnier, Johann Haller, Anne Altmayer, and B¨arbel Miemietz. 2004. Controlling gender equality with shallow NLP techniques. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 820–826, Geneva, Switzerland. COLING. Sophia Chan and Alona Fyshe. 2018. Social and emotional correlates of capitalization on twitter. In Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media, pages 10–15, New Orleans, Louisiana, USA. Association for Computational Linguistics. Songsak Channarukul, Susan W. McRoy, and Syed S. Ali. 2000. Enriching partially-specified representations for text realization using an attribute grammar. In INLG’2000 Proceedings of the First International Conference on Natural Language Generation, pages 163–170, Mitzpe Ramon, Israel. Association for Computational Linguistics. Eric Charton and Michel Gagnon. 2011. Poly-co: a multilayer perceptron approach for coreference detection. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 97–101, Portland, Oregon, USA. Association for Computational Linguistics. Chen Chen and Vincent Ng. 2014. Chinese zero pronoun resolution: An unsupervised probabilistic model rivaling supervised resolvers. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 763– 774, Doha, Qatar. Association for Computational Linguistics. Morgane Ciot, Morgan Sonderegger, and Derek Ruths. 2013. Gender inference of Twitter users in nonEnglish contexts. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1136–1145, Seattle, Washington, USA. Association for Computational Linguistics. Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In ACL. Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In EMNLP. KC Clements. 2017. What is deadnaming? Blog post. Kirby Conrod. 2018. Changes in singular they. In Cascadia Workshop in Sociolinguistics. 4579 Greville G. Corbett. 1991. Gender. Cambridge University Press. Greville G. Corbett. 2013. Number of genders. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. Marta R. Costa-juss`a. 2017. Why Catalan-Spanish neural machine translation? analysis, comparison and combination with standard rule and phrase-based technologies. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 55–62, Valencia, Spain. Association for Computational Linguistics. Colette G. Craig. 1994. Classifier languages. The encyclopedia of language and linguistics, 2:565–569. Silviu Cucerzan and David Yarowsky. 2003. Minimally supervised induction of grammatical gender. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 40–47. Ali Dada. 2007. Implementation of the Arabic numerals and their syntax in GF. In Proceedings of the 2007 Workshop on Computational Approaches to Semitic Languages: Common Issues and Resources, pages 9–16, Prague, Czech Republic. Association for Computational Linguistics. Laurence Danlos and Fiametta Namer. 1988. Morphology and cross dependencies in the synthesis of personal pronouns in romance languages. In Coling Budapest 1988 Volume 1: International Conference on Computational Linguistics. Helana Darwin. 2017. Doing gender beyond the binary: A virtual ethnography. Symbolic Interaction, 40(3):317–334. Kareem Darwish, Ahmed Abdelali, and Hamdy Mubarak. 2014. Using stem-templates to improve Arabic POS and gender/number tagging. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 2926–2931, Reykjavik, Iceland. European Language Resources Association (ELRA). Hal Daum´e, III and Daniel Marcu. 2005. A large-scale exploration of effective global features for a joint entity detection and tracking model. In HLT/EMNLP, pages 97–104. Łukasz Debowski. 2003. A reconfigurable stochastic tagger for languages with complex tag structure. In Proceedings of the 2003 EACL Workshop on Morphological Processing of Slavic Languages, pages 63–70, Budapest, Hungary. Association for Computational Linguistics. Thierry Declerck, Nikolina Koleva, and Hans-Ulrich Krieger. 2012. Ontology-based incremental annotation of characters in folktales. In Proceedings of the 6th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 30–34, Avignon, France. Association for Computational Linguistics. Liviu P. Dinu, Vlad Niculae, and Octavia-Maria S¸ulea. 2012. The Romanian neuter examined through a two-gender n-gram classification system. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), pages 907–910, Istanbul, Turkey. European Language Resources Association (ELRA). Ursula Doleschal and Sonja Schmid. 2001. Doing gender in Russian. Gender Across Languages. The linguistic representation of women and men, 1:253– 282. Michael Dorna, Anette Frank, Josef van Genabith, and Martin C. Emele. 1998. Syntactic and semantic transfer with f-structures. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 341–347, Montreal, Quebec, Canada. Association for Computational Linguistics. Matthew S. Dryer. 2013. Expression of pronominal subjects. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. Esin Durmus and Claire Cardie. 2018. Understanding the effect of gender and stance in opinion expression in debates on “abortion”. In Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media, pages 69–75, New Orleans, Louisiana, USA. Association for Computational Linguistics. Jason Eisner and Damianos Karakos. 2005. Bootstrapping without the boot. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 395–402, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Ahmed El Kholy and Nizar Habash. 2012. Rich morphology generation using statistical machine translation. In INLG 2012 Proceedings of the Seventh International Natural Language Generation Conference, pages 90–94, Utica, IL. Association for Computational Linguistics. Katja Filippova. 2012. User demographics and language in an implicit social network. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1478–1488, Jeju Island, Korea. Association for Computational Linguistics. 4580 Lucie Flekova, Jordan Carpenter, Salvatore Giorgi, Lyle Ungar, and Daniel Preot¸iuc-Pietro. 2016. Analyzing biases in human perception of user age and gender from text. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 843– 854, Berlin, Germany. Association for Computational Linguistics. Joel Escud´e Font and Marta R. Costa-juss`a. 2019. Equalizing gender biases in neural machine translation with word embeddings techniques. In Proceedings of the 1st ACL Workshop on Gender Bias for Natural Language Processing. Anke Frank, Chr Hoffmann, Maria Strobel, et al. 2004. Gender issues in machine translation. Univ. Bremen. Batya Friedman and Helen Nissenbaum. 1996. Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3):330–347. Pedro A. Fuertes-Olivera. 2007. A corpus-based view of lexical gender in written business english. English for Specific Purposes, 26(2):219–234. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform. arXiv:1803.07640. Nikesh Garera and David Yarowsky. 2009. Modeling latent biographic attributes in conversational genres. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 710–718, Suntec, Singapore. Association for Computational Linguistics. Aparna Garimella and Rada Mihalcea. 2016. Zooming in on gender differences in social media. In Proceedings of the Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media (PEOPLES), pages 1–10, Osaka, Japan. The COLING 2016 Organizing Committee. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daum´e, III, and Kate Crawford. 2018. Datasheets for datasets. arXiv:1803.09010. Damien Genthial, Jacques Courtin, and Jacques Menezo. 1994. Towards a more user-friendly correction. In COLING 1994 Volume 2: The 15th International Conference on Computational Linguistics. Ona de Gibert, Naiara Perez, Aitor Garc´ıa-Pablos, and Montse Cuadros. 2018. Hate speech dataset from a white supremacy forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 11–20, Brussels, Belgium. Association for Computational Linguistics. GLAAD. 2007. Media reference guide–transgender. permalink. Goran Glavaˇs, Damir Korenˇci´c, and Jan ˇSnajder. 2013. Aspect-oriented opinion mining from user reviews in Croatian. In Proceedings of the 4th Biennial International Workshop on Balto-Slavic Natural Language Processing, pages 18–23, Sofia, Bulgaria. Association for Computational Linguistics. Yoav Goldberg and Michael Elhadad. 2013. Word segmentation, unknown-word resolution, and morphological agreement in a Hebrew parsing system. Computational Linguistics, 39(1):121–160. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. In Proceedings of NAACL-HLT. Rob van der Goot, Nikola Ljubeˇsi´c, Ian Matroos, Malvina Nissim, and Barbara Plank. 2018. Bleaching text: Abstract features for cross-lingual gender prediction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 383–389, Melbourne, Australia. Association for Computational Linguistics. Liane Guillou. 2012. Improving pronoun translation for statistical machine translation. In Proceedings of the Student Research Workshop at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 1–10, Avignon, France. Association for Computational Linguistics. Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M. Branham. 2018. Gender recognition or gender reductionism?: The social implications of embedded gender recognition systems. In CHI, page 8. ACM. Sanda M. Harabagiu and Steven J. Maiorano. 1999. Knowledge-lean coreference resolution and its relation to textual cohesion and coherence. In The Relation of Discourse/Dialogue Structure and Reference. Marlis Hellinger and Heiko Motschenbacher. 2015. Gender across languages, volume 4. John Benjamins Publishing Company. Tom´aˇs Holan, Vladislav Kuboˇn, and Martin Pl´atek. 1997. A prototype of a grammar checker for Czech. In Fifth Conference on Applied Natural Language Processing, pages 147–154, Washington, DC, USA. Association for Computational Linguistics. Levi C. R. Hord. 2016. Bucking the linguistic binary: Gender neutral language in english, swedish, french, and german. Dirk Hovy. 2015. Demographic factors improve classification performance. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 752–762, Beijing, China. Association for Computational Linguistics. 4581 Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2007. Extracting social networks and biographical facts from conversational speech transcripts. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 1040– 1047, Prague, Czech Republic. Association for Computational Linguistics. Anders Johannsen, Dirk Hovy, and Anders Søgaard. 2015. Cross-lingual syntactic variation over age and gender. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 103–112, Beijing, China. Association for Computational Linguistics. Kelly Johnson, Colette Auerswald, Allen J. LeBlanc, and Walter O. Bockting. 2019. 7. invalidation experiences and protective factors among non-binary adolescents. Journal of Adolescent Health, 64(2, Supplement):S4. Megumi Kameyama. 1986. A property-sharing constraint in centering. In 24th Annual Meeting of the Association for Computational Linguistics, pages 200–206, New York, New York, USA. Association for Computational Linguistics. Sweta Karlekar, Tong Niu, and Mohit Bansal. 2018. Detecting linguistic characteristics of Alzheimer’s dementia by interpreting neural models. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 701–707, New Orleans, Louisiana. Association for Computational Linguistics. Matthew Kay, Cynthia Matuszek, and Sean A. Munson. 2015. Unequal representation and gender stereotypes in image search results for occupations. In CHI. Suzanne J. Kessler and Wendy McKenna. 1978. Gender: An ethnomethodological approach. University of Chicago Press. Mike Kestemont. 2014. Function words in authorship attribution. from black magic to theory? In Proceedings of the 3rd Workshop on Computational Linguistics for Literature (CLFL), pages 59–66, Gothenburg, Sweden. Association for Computational Linguistics. Os Keyes. 2018. The misgendering machines: Trans/HCI implications of automatic gender recognition. CHI. Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43–53, New Orleans, Louisiana. Association for Computational Linguistics. Bennett Kleinberg, Maximilian Mozes, and Isabelle van der Vegt. 2018. Identifying the sentiment styles of YouTube’s vloggers. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3581–3590, Brussels, Belgium. Association for Computational Linguistics. Dimitrios Kokkinakis, Ann Ighe, and Mats Malm. 2015. Gender-based vocation identification in Swedish 19th century prose fiction using linguistic patterns, NER and CRF learning. In Proceedings of the Fourth Workshop on Computational Linguistics for Literature, pages 89–97, Denver, Colorado, USA. Association for Computational Linguistics. Corina Koolen and Andreas van Cranenburgh. 2017. These are not the stereotypes you are looking for: Bias and fairness in authorial gender attribution. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 12–22, Valencia, Spain. Association for Computational Linguistics. Cheris Kramarae and Paula A. Treichler. 1985. A feminist dictionary. Pandora Press. Matthias Kraus, Johannes Kraus, Martin Baumann, and Wolfgang Minker. 2018. Effects of gender stereotypes on trust and likability in spoken human-robot interaction. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Robin Lakoff. 1975. Language and woman’s place. New York ao: Harper and Row. Max Lambert and Melina Packer. 2019. How gendered language leads scientists astray. New York Times. Brian Larson. 2017a. Gender as a variable in naturallanguage processing: Ethical considerations. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 1–11, Valencia, Spain. Association for Computational Linguistics. Brian N. Larson. 2017b. Gender as a variable in natural-language processing: Ethical considerations. In ACL Workshop on Ethics in NLP. Ronan Le Nagard and Philipp Koehn. 2010. Aiding pronoun translation with co-reference resolution. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 252–261, Uppsala, Sweden. Association for Computational Linguistics. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. Moshe Levinger, Uzzi Ornan, and Alon Itai. 1995. Learning morpho-lexical probabilities from an untagged corpus with an application to Hebrew. Computational Linguistics, 21(3):383–404. 4582 Rivka Levitan. 2013. Entrainment in spoken dialogue systems: Adopting, predicting and influencing user behavior. In Proceedings of the 2013 NAACL HLT Student Research Workshop, pages 84–90, Atlanta, Georgia. Association for Computational Linguistics. Sarah Ita Levitan, Yocheved Levitan, Guozhen An, Michelle Levine, Rivka Levitan, Andrew Rosenberg, and Julia Hirschberg. 2016. Identifying individual differences in gender, ethnicity, and personality from dialogue for deception detection. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection, pages 40–44, San Diego, California. Association for Computational Linguistics. Sarah Ita Levitan, Angel Maredia, and Julia Hirschberg. 2018. Linguistic cues to deception and perceived deception in interview dialogues. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1941–1950, New Orleans, Louisiana. Association for Computational Linguistics. Dingcheng Li, Tim Miller, and William Schuler. 2011. A pronoun anaphora resolution system based on factorial hidden Markov models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1169–1178, Portland, Oregon, USA. Association for Computational Linguistics. Shoushan Li, Bin Dai, Zhengxian Gong, and Guodong Zhou. 2016. Semi-supervised gender classification with joint textual and social modeling. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2092–2100, Osaka, Japan. The COLING 2016 Organizing Committee. Wen Li and Markus Dickinson. 2017. Gender prediction for Chinese social media data. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 438–445, Varna, Bulgaria. INCOMA Ltd. Olga Litvinova, Pavel Seredin, Tatiana Litvinova, and John Lyell. 2017. Deception detection in Russian texts. In Proceedings of the Student Research Workshop at the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 43–52, Valencia, Spain. Association for Computational Linguistics. Yuanchao Liu, Ming Liu, Xiaolong Wang, Limin Wang, and Jingjing Li. 2013. PAL: A chatterbot system for answering domain-specific questions. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 67–72, Sofia, Bulgaria. Association for Computational Linguistics. Nikola Ljubeˇsi´c, Darja Fiˇser, and Tomaˇz Erjavec. 2017. Language-independent gender prediction on twitter. In Proceedings of the Second Workshop on NLP and Computational Social Science, pages 1–6, Vancouver, Canada. Association for Computational Linguistics. Ver´onica L´opez-Lude˜na, Rub´en San-Segundo, Syaheerah Lufti, Juan Manuel Lucas-Cuesta, Juli´an David Echevarry, and Beatriz Mart´ınezGonz´alez. 2011. Source language categorization for improving a speech into sign language translation system. In Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies, pages 84–93, Edinburgh, Scotland, UK. Association for Computational Linguistics. John Lyons. 1977. Semantics. Cambridge University Press. Justina Mandravickait˙e and Tomas Krilaviˇcius. 2017. Stylometric analysis of parliamentary speeches: Gender dimension. In Proceedings of the 6th Workshop on Balto-Slavic Natural Language Processing, pages 102–107, Valencia, Spain. Association for Computational Linguistics. Inderjeet Mani, T. Richard Macmillan, Susann Luperfoy, Elaine Lusher, and Sharon Laskowski. 1993. Identifying unknown proper names in newswire text. In Acquisition of Lexical Knowledge from Text. Harmony Marchal, Benoˆıt Lemaire, Maryse Bianco, and Philippe Dessus. 2008. A MDL-based model of gender knowledge acquisition. In CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 73–80, Manchester, England. Coling 2008 Organizing Committee. David Mareˇcek, Rudolf Rosa, Petra Galuˇsˇc´akov´a, and Ondˇrej Bojar. 2011. Two-step translation with grammatical post-processing. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 426–432, Edinburgh, Scotland. Association for Computational Linguistics. Matej Martinc and Senja Pollak. 2018. Reusable workflows for gender prediction. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Yuval Marton, Nizar Habash, and Owen Rambow. 2010. Improving Arabic dependency parsing with lexical and inflectional morphological features. In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 13–21, Los Angeles, CA, USA. Association for Computational Linguistics. Yuval Marton, Nizar Habash, and Owen Rambow. 2013. Dependency parsing of modern standard Arabic with lexical and inflectional features. Computational Linguistics, 39(1):161–194. 4583 Austin Matthews, Waleed Ammar, Archna Bhatia, Weston Feely, Greg Hanneman, Eva Schlinger, Swabha Swayamdipta, Yulia Tsvetkov, Alon Lavie, and Chris Dyer. 2014. The CMU machine translation systems at WMT 2014. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 142–149, Baltimore, Maryland, USA. Association for Computational Linguistics. Chandler May. 2019. Neurips2019. Kevin A. McLemore. 2015. Experiences with misgendering: Identity misclassification of transgender spectrum individuals. Self and Identity, 14(1):51– 74. C. S. Mellish. 1988. Implementing systemic classification by unification. Computational Linguistics, 14(1). Merriam-Webster. 2016. Words we’re watching: Singular ’they’. permalink. Timothee Mickus, Olivier Bonami, and Denis Paperno. 2019. Distributional effects of gender contrasts across categories. In Proceedings of the Society for Computation in Linguistics (SCiL) 2019, pages 174– 184. Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1–17, New Orleans, Louisiana. Association for Computational Linguistics. Saif Mohammad and Tony Yang. 2011. Tracking sentiment in mail: How genders differ on emotional axes. In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011), pages 70–79, Portland, Oregon. Association for Computational Linguistics. Sridhar Moorthy, Ruth Pogacar, Samin Khan, and Yang Xu. 2018. Is Nike female? exploring the role of sound symbolism in predicting brand name gender. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1128–1132, Brussels, Belgium. Association for Computational Linguistics. Nafise Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? a proposal for a link-based entity aware metric. pages 632–642. Nafise Sadat Moosavi. 2020. Robustness in Coreference Resolution. PhD dissertation, University of Heidelberg. Cristina Mota, Paula Carvalho, and Elisabete Ranchhod. 2004. Multiword lexical acquisition and dictionary formalization. In Proceedings of the Workshop on Enhancing and Using Electronic Dictionaries, pages 73–76, Geneva, Switzerland. COLING. Arjun Mukherjee and Bing Liu. 2010. Improving gender classification of blog authors. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 207–217, Cambridge, MA. Association for Computational Linguistics. Smruthi Mukund, Debanjan Ghosh, and Rohini Srihari. 2011. Using sequence kernels to identify opinion entities in Urdu. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 58–67, Portland, Oregon, USA. Association for Computational Linguistics. Hidetsugu Nanba, Haruka Taguma, Takahiro Ozaki, Daisuke Kobayashi, Aya Ishino, and Toshiyuki Takezawa. 2009. Automatic compilation of travel information from automatically identified travel blogs. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 205–208, Suntec, Singapore. Association for Computational Linguistics. Ajit Narayanan and Lama Hashem. 1993. On abstract finite-state morphology. In Sixth Conference of the European Chapter of the Association for Computational Linguistics, Utrecht, The Netherlands. Association for Computational Linguistics. Vivi Nastase and Marius Popescu. 2009. What’s in a name? In some languages, grammatical gender. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1368–1377, Singapore. Association for Computational Linguistics. Costanza Navarretta. 2004. An algorithm for resolving individual and abstract anaphora in Danish texts and dialogues. In Proceedings of the Conference on Reference Resolution and Its Applications, pages 95–102, Barcelona, Spain. Association for Computational Linguistics. Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1396–1411, Uppsala, Sweden. Association for Computational Linguistics. Dong Nguyen, Dolf Trieschnigg, A. Seza Do˘gru¨oz, Rilana Gravel, Mari¨et Theune, Theo Meder, and Franciska de Jong. 2014a. Why gender and age prediction from tweets is hard: Lessons from a crowdsourcing experiment. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1950– 1961, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Dong Nguyen, Dolf Trieschnigg, and Theo Meder. 2014b. TweetGenie: Development, evaluation, and lessons learned. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations, pages 62–66, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. 4584 Uwe Kjær Nissen. 2002. Aspects of translating gender. Linguistik online, 11(2):02. Michal Nov´ak and Zdenˇek ˇZabokrtsk´y. 2014. Crosslingual coreference resolution of pronouns. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 14–24, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Elinor Ochs. 1992. Indexing gender. Rethinking context: Language as an interactive phenomenon, 11:335. Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kiciman. 2019. Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2:13. Naho Orita, Eliana Vornov, Naomi H. Feldman, and Hal Daum´e, III. 2015. Why discourse affects speakers’ choice of referring expressions. In Proceedings of the Conference of the Association for Computational Linguistics (ACL). Serguei V. Pakhomov, James Buntrock, and Christopher G. Chute. 2003. Identification of patients with congestive heart failure using a binary classifier: A case study. In Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine, pages 89–96, Sapporo, Japan. Association for Computational Linguistics. Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2799–2804, Brussels, Belgium. Association for Computational Linguistics. Desmond Patton, Philipp Blandfort, William Frey, Michael Gaskell, and Svebor Karaman. 2019. Annotating twitter data from vulnerable populations: Evaluating disagreement between domain experts and graduate student annotators. Carlos P´erez Estruch, Roberto Paredes Palacios, and Paolo Rosso. 2017. Learning multimodal gender profile using neural networks. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 577– 582, Varna, Bulgaria. INCOMA Ltd. Ver´onica P´erez-Rosas, Quincy Davenport, Anna Mengdan Dai, Mohamed Abouelenien, and Rada Mihalcea. 2017. Identity deception detection. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 885–894, Taipei, Taiwan. Asian Federation of Natural Language Processing. Wido van Peursen. 2009. How to establish a verbal paradigm on the basis of ancient Syriac manuscripts. In Proceedings of the EACL 2009 Workshop on Computational Approaches to Semitic Languages, pages 1–9, Athens, Greece. Association for Computational Linguistics. Barbara Plank. 2018. Predicting authorship and author traits from keystroke dynamics. In Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media, pages 98–104, New Orleans, Louisiana, USA. Association for Computational Linguistics. Fred Popowich. 1989. Tree unification grammar. In 27th Annual Meeting of the Association for Computational Linguistics, pages 228–236, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Vinodkumar Prabhakaran, Emily E. Reid, and Owen Rambow. 2014. Gender and power: How gender and gender environment affect manifestations of power. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1965–1976, Doha, Qatar. Association for Computational Linguistics. Grusha Prasad and Joanna Morris. 2020. The p600 for singular “they”: How the brain reacts when john decides to treat themselves to sushi. PsyArXiv. Marcelo Prates, Pedro Avelar, and Luis C. Lamb. 2019. Assessing gender bias in machine translation – a case study with google translate. Neural Computing and Applications. Daniel Preot¸iuc-Pietro, Johannes Eichstaedt, Gregory Park, Maarten Sap, Laura Smith, Victoria Tobolsky, H. Andrew Schwartz, and Lyle Ungar. 2015. The role of personality, age, and gender in tweeting about mental illness. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 21–30, Denver, Colorado. Association for Computational Linguistics. Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016. Investigating language universal and specific properties in word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1478–1488, Berlin, Germany. Association for Computational Linguistics. J. Joachim Quantz. 1994. An HPSG parser based on description logics. In COLING 1994 Volume 1: The 15th International Conference on Computational Linguistics. Chris Quirk and Simon Corston-Oliver. 2006. The impact of parse quality on syntactically-informed statistical machine translation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 62–69, Sydney, Australia. Association for Computational Linguistics. 4585 Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1074–1084, Valencia, Spain. Association for Computational Linguistics. Karthik Raghunathan, Heeyoung Lee, Sudarshan Rangarajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multipass sieve for coreference resolution. In EMNLP. Anil Ramakrishna, Nikolaos Malandrakis, Elizabeth Staruk, and Shrikanth Narayanan. 2015. A quantitative analysis of gender differences in movies using psycholinguistic normatives. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1996–2001, Lisbon, Portugal. Association for Computational Linguistics. Sravana Reddy and Kevin Knight. 2016. Obfuscating gender in social media writing. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 17–26, Austin, Texas. Association for Computational Linguistics. Christina Richards, Walter Pierre Bouman, and MegJohn Barker. 2017. Genderqueer and Non-Binary Genders. Springer. Barbara J. Risman. 2009. From doing to undoing: Gender as we know it. Gender & Society, 23(1). Livio Robaldo and Jurij Di Carlo. 2009. Disambiguating quantifier scope in DTS. In Proceedings of the Eight International Conference on Computational Semantics, pages 195–209, Tilburg, The Netherlands. Association for Computational Linguistics. Lina Maria Rojas-Barahona, Thierry Bazillon, Matthieu Quignard, and Fabrice Lefevre. 2011. Using MMIL for the high level semantic annotation of the French MEDIA dialogue corpus. In Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011). Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, and Adam Tauman Kalai. 2019. What’s in a name? reducing bias in bios without access to protected attributes. In NAACL. Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In ACL Workshop on Ethics in NLP, pages 74–79. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics. Maarten Sap, Gregory Park, Johannes Eichstaedt, Margaret Kern, David Stillwell, Michal Kosinski, Lyle Ungar, and Hansen Andrew Schwartz. 2014. Developing age and gender predictive lexica over social media. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1146–1151, Doha, Qatar. Association for Computational Linguistics. Maarten Sap, Marcella Cindy Prasettio, Ari Holtzman, Hannah Rashkin, and Yejin Choi. 2017. Connotation frames of power and agency in modern films. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2329–2334, Copenhagen, Denmark. Association for Computational Linguistics. Emili Sapena, Llu´ıs Padr´o, and Jordi Turmo. 2011. RelaxCor participation in CoNLL shared task on coreference resolution. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 35–39, Portland, Oregon, USA. Association for Computational Linguistics. Ruchita Sarawgi, Kailash Gajulapalli, and Yejin Choi. 2011. Gender attribution: Tracing stylometric evidence beyond topic and genre. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 78–86, Portland, Oregon, USA. Association for Computational Linguistics. Kristen Schilt and Laurel Westbrook. 2009. Doing gender, doing heteronormativity. Gender & Society, 23(4). Alexandra Schofield and Leo Mehr. 2016. Genderdistinguishing features in film dialogue. In Proceedings of the Fifth Workshop on Computational Linguistics for Literature, pages 32–39, San Diego, California, USA. Association for Computational Linguistics. H. Andrew Schwartz, Gregory Park, Maarten Sap, Evan Weingarten, Johannes Eichstaedt, Margaret Kern, David Stillwell, Michal Kosinski, Jonah Berger, Martin Seligman, and Lyle Ungar. 2015. Extracting human temporal orientation from Facebook language. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 409–419, Denver, Colorado. Association for Computational Linguistics. Julia Serano. 2007. Whipping Girl: A Transsexual Woman on Sexism and the Scapegoating of Femininity. Seal Press. Candace L. Sidner. 1981. Focusing for interpretation of pronouns. American Journal of Computational Linguistics, 7(4):217–231. 4586 Maxim Sidorov, Stefan Ultes, and Alexander Schmitt. 2014. Comparison of gender- and speaker-adaptive emotion recognition. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 3476–3480, Reykjavik, Iceland. European Language Resources Association (ELRA). Michael Silverstein. 1979. Language structure and linguistic ideology. The elements: A parasession on linguistic units and levels, pages 193–247. Noah A. Smith, David A. Smith, and Roy W. Tromble. 2005. Context-based morphological disambiguation with random fields. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 475–482, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Juan Soler-Company and Leo Wanner. 2014. How to use less features and reach better performance in author gender identification. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 1315– 1319, Reykjavik, Iceland. European Language Resources Association (ELRA). Juan Soler-Company and Leo Wanner. 2017. On the relevance of syntactic and discourse features for author profiling and identification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 681–687, Valencia, Spain. Association for Computational Linguistics. Danny Soloman and Mary McGee Wood. 1994. Learning a radically lexical grammar. In The Balancing Act: Combining Symbolic and Statistical Approaches to Language. Michael Spivak. 1997. The Joy of TEX: A Gourmet Guide to Typesetting with the AMS-TEX Macro Package, 1st edition. American Mathematical Society, USA. E. Stanley. 2014. Gender self-determination. TSQ: Transgender Studies Quarterly, 1:89–91. Ian Stewart. 2014. Now we stronger than ever: AfricanAmerican English syntax in twitter. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 31– 37, Gothenburg, Sweden. Association for Computational Linguistics. Oliver Streiter, Leonhard Voltmer, and Yoann Goudin. 2007. From tombstones to corpora: TSML for research on language, culture, identity and gender differences. In Proceedings of the 21st Pacific Asia Conference on Language, Information and Computation, pages 450–458, Seoul National University, Seoul, Korea. The Korean Society for Language and Information (KSLI). Susan Stryker. 2008. Transgender history. Seal Press. Latanya Sweeney. 2013. Discrimination in online ad delivery. ACM Queue. Marko Tadi´c and Sanja Fulgosi. 2003. Building the Croatian morphological lexicon. In Proceedings of the 2003 EACL Workshop on Morphological Processing of Slavic Languages, pages 41–45, Budapest, Hungary. Association for Computational Linguistics. Tomoki Taniguchi, Shigeyuki Sakaki, Ryosuke Shigenaka, Yukihiro Tsuboshita, and Tomoko Ohkuma. 2015. A weighted combination of text and image classifiers for user gender inference. In Proceedings of the Fourth Workshop on Vision and Language, pages 87–93, Lisbon, Portugal. Association for Computational Linguistics. Rachael Tatman. 2017. Gender and dialect bias in YouTube’s automatic captions. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 53–59, Valencia, Spain. Association for Computational Linguistics. Trang Tran and Mari Ostendorf. 2016. Characterizing the language of online communities and its relation to community reception. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1030–1035, Austin, Texas. Association for Computational Linguistics. National Center for Transgender Equality. 2017. Understanding drag. Blog post. Ashwini Vaidya, Owen Rambow, and Martha Palmer. 2014. Light verb constructions with ‘do’ and ‘be’ in Hindi: A TAG analysis. In Proceedings of Workshop on Lexical and Grammatical Resources for Language Processing, pages 127–136, Dublin, Ireland. Association for Computational Linguistics and Dublin City University. Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003–3008, Brussels, Belgium. Association for Computational Linguistics. Ben Verhoeven, Iza ˇSkrjanec, and Senja Pollak. 2017. Gender profiling for Slovene twitter communication: the influence of gender marking, content and style. In Proceedings of the 6th Workshop on Balto-Slavic Natural Language Processing, pages 119–125, Valencia, Spain. Association for Computational Linguistics. Adam Vogel and Dan Jurafsky. 2012. He said, she said: Gender in the ACL anthology. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 33–41, Jeju Island, Korea. Association for Computational Linguistics. 4587 Thurid Vogt and Elisabeth Andr´e. 2006. Improving automatic emotion recognition from speech via gender differentiaion. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy. European Language Resources Association (ELRA). Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring demographic language variations to improve multilingual sentiment analysis in social media. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1815–1827, Seattle, Washington, USA. Association for Computational Linguistics. Mario Wandruszka. 1969. Sprachen: vergleichbar und unvergleichlich. R. Piper & Company. Zijian Wang and David Jurgens. 2018. It’s going to be okay: Measuring access to support in online communities. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 33–45, Brussels, Belgium. Association for Computational Linguistics. Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. Transactions of the Association for Computational Linguistics, 6:605–617. Marion Weller, Alexander Fraser, and Sabine Schulte im Walde. 2013. Using subcategorization knowledge to improve case prediction for translation to German. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 593–603, Sofia, Bulgaria. Association for Computational Linguistics. Candace West and Don H. Zimmerman. 1987. Doing gender. Gender & society, 1(2):125–151. Thomas Wolf. 2017. State-of-the-art neural coreference resolution for chatbots. Blog post. Zach Wood-Doughty, Nicholas Andrews, Rebecca Marvin, and Mark Dredze. 2018. Predicting twitter user demographics from names alone. In Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media, pages 105–111, New Orleans, Louisiana, USA. Association for Computational Linguistics. Kei Yoshimoto. 1988. Identifying zero pronouns in Japanese dialogue. In Coling Budapest 1988 Volume 2: International Conference on Computational Linguistics. Meg Young, Lassana Magassa, and Batya Friedman. 2019. Toward inclusive tech policy design: A method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology. Bei Yu. 2012. Function words for Chinese authorship attribution. In Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature, pages 45–53, Montr´eal, Canada. Association for Computational Linguistics. Wajdi Zaghouani and Anis Charfi. 2018. Arap-tweet: A large multi-dialect twitter corpus for gender, age and language variety identification. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Dong Zhang, Shoushan Li, Hongling Wang, and Guodong Zhou. 2016. User classification with multiple textual perspectives. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2112–2121, Osaka, Japan. The COLING 2016 Organizing Committee. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847–4853, Brussels, Belgium. Association for Computational Linguistics. Lal Zimman. 2019. Trans self-identification and the language of neoliberal selfhood: Agency, power, and the limits of monologic discourse. International Journal of the Sociology of Language, 2019:147– 175. Ran Zmigrod, Sebastian J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. arXiv preprint arXiv:1906.04571. Michael Zock, Gil Francopoulo, and Abdellatif Laroui. 1988. Language learning as problem solving. In Coling Budapest 1988 Volume 2: International Conference on Computational Linguistics. 4588 A Examples of Possible Bias in Data Annotation Bias can enter coreference resolution datasets, which we use to train our systems, through annotation phase. Annotators may use linguistic notions to infer social gender. For instance, consider (2) below, in which an annotator is likely to determine that “her” refers to “Mary” and not “John” due to assumptions on likely ways that names may map to pronouns (or possibly by not considering that SHE pronouns could refer to someone named “John”). While in (3), an annotator is likely to have difficulty making a determination because both “Sue” and “Mary” suggest “her”. In (4), an annotator lacking knowledge of name stereotypes on typical Chinese and Indian names (plus the fact that given names in Chinese — especially when romanized —generally do not signal gender strongly), respectively, will likewise have difficulty. (2) John and Mary visitedhermother. (3) Sue and Mary visitedhermother. (4) Liang and Aditya visitedhermother. In all these cases, the plausible rough inference is that a reader takes a name, uses it to infer the social gender of the extra-linguistic referent. Later the reader sees the SHE pronoun, infers the referential gender of that pronoun, and checks to see if they match. An equivalent inference happens not just for names, but also for lexical gender references (both gendered nouns (5) and terms of address (6)), grammatical gender references (in gender languages like Arabic (7)), and social gender references (8). The last of these ((8)) is the case in which the correct referent is likely to be least clear to most annotators, and also the case studied by Rudinger et al. (2018) and Zhao et al. (2018a). (5) My brother and niece visitedhermother. (6) Mr. Hashimoto and Mrs. Iwu visitedhermother. (7) والدتها شاهدا الممثلة و المطرب walidatuha shahadaa almomathela w almutreb her mother saw actor[f] and singer[m] walidatu-ha shahidanaan walidatuha w almutarab mother -hersaw actor[FEM] and singer[MASC] The singer[MASC] and actor[FEM] sawhermother. (8) The nurse and the actor visitedhermother. 4589 B Annotation of ACL Anthology Papers Below we list the complete set of annotations we did of the papers described in §5.1. For each of the papers considered, we annotate the following items: • Coref: Does the paper discuss coreference resolution? • L.G: Does the paper deal with linguistic gender (grammatical gender or gendered pronouns)? • S.G: Does the paper deal with social gender? • Eng: Does the paper study English? • L̸=G: (If yes to L.G and S.G:) Does the paper distinguish linguistic from social gender? • 0/1: (If yes to S.G:) Does the paper explicitly or implicitly assume that social gender is binary? • Imm: (If yes to S.G:) Does the paper explicitly or implicitly assume social gender is immutable? • Neo: (If yes to S.G and to English:) Does the paper explicitly consider uses of definite singular “they” or neopronouns? For each of these, we mark with [Y] if the answer is yes, [N] if the answer is no, and [-] if this question is not applicable (ie it doesn’t pass the conditional checks). Citation Coref L.G S.G Eng L̸=S 0/1 Imm Neo Sidner (1981) Y Y Y Y N Bainbridge (1985) Y Y N Y Kameyama (1986) Y Y Y Y N Y Y N Mellish (1988) N Y N Y Danlos and Namer (1988) N Y N N Yoshimoto (1988) N Y N N Zock et al. (1988) N Y N N Popowich (1989) N Y N Y Mani et al. (1993) Y N Y Y Y Narayanan and Hashem (1993) N Y N N Soloman and Wood (1994) N Y N Y Quantz (1994) N Y N Y Baker et al. (1994) Genthial et al. (1994) N Y N N Levinger et al. (1995) N Y N N Holan et al. (1997) N Y N N Dorna et al. (1998) N N N Y Harabagiu and Maiorano (1999) Y Y Y Y N Y Y N Avgustinova and Uszkoreit (2000) N Y N N Channarukul et al. (2000) N Y N Y Abuleil et al. (2002) N Y N N Cucerzan and Yarowsky (2003) N Y N N Pakhomov et al. (2003) N N Y Y Tadi´c and Fulgosi (2003) N Y N N Debowski (2003) N Y N N Navarretta (2004) Y Y Y N N Y Y Carl et al. (2004) Y Y Y N N Y Y Mota et al. (2004) N Y N Y Eisner and Karakos (2005) N Y N Y Boulis and Ostendorf (2005) N N Y Y Y Y N Smith et al. (2005) N Y N N Bergsma and Lin (2006) Y Y Y Y N Y Y N Vogt and Andr´e (2006) N N Y N Y Y Quirk and Corston-Oliver (2006) N Y N Y Dada (2007) N Y N N 4590 Citation Coref L.G S.G Eng L̸=S 0/1 Imm Neo Streiter et al. (2007) N N Y N Jing et al. (2007) Y Y Y Y N Y N Badr et al. (2008) N Y N N Marchal et al. (2008) N Y N N van Peursen (2009) N Y N N Badr et al. (2009) N Y N N Garera and Yarowsky (2009) N Y Y Y N Y Y N Bergsma et al. (2009) Y Y Y Y N Y Y N Nastase and Popescu (2009) N Y N N Nanba et al. (2009) N N N Y Robaldo and Di Carlo (2009) N N N Y Mukherjee and Liu (2010) N N Y Y Y Y Ng (2010) Y Y Y Y N Y Y N Burkhardt et al. (2010) N N Y N Y Y Marton et al. (2010) N Y N N Le Nagard and Koehn (2010) Y Y Y Y N Y Y N Rojas-Barahona et al. (2011) N Y N N Mukund et al. (2011) N Y N N Sarawgi et al. (2011) N N Y Y Y Y N Li et al. (2011) Y Y Y Y N Y Y N Burger et al. (2011) N N Y Y Y Y N Mohammad and Yang (2011) N N Y Y Y Y N Sapena et al. (2011) Y Y Y Y N Y Y N Charton and Gagnon (2011) Y Y Y Y N Y Y N Alkuhlani and Habash (2011) N Y N N Mareˇcek et al. (2011) N Y N N L´opez-Lude˜na et al. (2011) N Y N N Declerck et al. (2012) Y Y N Y Bergsma et al. (2012) N N Y Y Y Y N Alkuhlani and Habash (2012) N Y N N Filippova (2012) N N Y Y Y Dinu et al. (2012) N Y N N El Kholy and Habash (2012) N Y N N Yu (2012) N N N N Guillou (2012) Y Y Y Y Y Y Vogel and Jurafsky (2012) N N Y Y Y Y N Goldberg and Elhadad (2013) N Y N N Marton et al. (2013) N Y N N Weller et al. (2013) N Y N Y Ciot et al. (2013) N N Y N Y Y Volkova et al. (2013) N N Y Y Y Y N Levitan (2013) N N Y Y N N N Bojar et al. (2013) N Y N N Glavaˇs et al. (2013) N Y N N Liu et al. (2013) N N N N Kestemont (2014) N N N Y Nov´ak and ˇZabokrtsk´y (2014) Y Y N Y Babych et al. (2014) N Y N N Soler-Company and Wanner (2014) N N Y Y Y Y N Chen and Ng (2014) Y Y Y Y N Y Y N 4591 Citation Coref L.G S.G Eng L̸=S 0/1 Imm Neo Sap et al. (2014) N N Y Y Y Y Nguyen et al. (2014a) N N Y Y Y Y N Prabhakaran et al. (2014) N N Y Y Y Y N Sidorov et al. (2014) N N Y Y Y Y N Darwish et al. (2014) N Y N N Ahmed Khan (2014) N Y N N Nguyen et al. (2014b) N N Y N Y Y Stewart (2014) N N Y Y Y Y Matthews et al. (2014) N Y N N Vaidya et al. (2014) N Y N N Kokkinakis et al. (2015) N Y Y N N Y Johannsen et al. (2015) N N Y Y Y Y Schwartz et al. (2015) N N N Y Hovy (2015) N N Y Y Y Y N Agarwal et al. (2015) N Y Y Y N Y Y N Preot¸iuc-Pietro et al. (2015) N N Y Y N Y Y Ramakrishna et al. (2015) N Y Y Y N Y Y N Taniguchi et al. (2015) N N Y Y N Y N Schofield and Mehr (2016) N N Y Y Y Y N Levitan et al. (2016) N N Y Y Y Y N Flekova et al. (2016) N N Y Y Y Y N Tran and Ostendorf (2016) N N N Y Qian et al. (2016) N Y N Y Li et al. (2016) N N Y Y Y Y N Zhang et al. (2016) N N Y Y Y Y N Garimella and Mihalcea (2016) N N Y Y Y Y N Reddy and Knight (2016) N N Y Y Y Y N Li and Dickinson (2017) N N Y N Y Y P´erez Estruch et al. (2017) N N Y Y Y Y N P´erez-Rosas et al. (2017) N N Y Y Y Y N Rabinovich et al. (2017) N N Y N Y Y Costa-juss`a (2017) N Y N N Sap et al. (2017) N N Y Y Y Zhao et al. (2017) N N Y Y Y Y N Mandravickait˙e and Krilaviˇcius (2017) N N Y Y Y Y N Verhoeven et al. (2017) N N Y Y Y Y N Larson (2017a) N Y Y Y Y N N Y Koolen and van Cranenburgh (2017) N N Y N N Y Tatman (2017) N N Y Y Y Y N Soler-Company and Wanner (2017) N N Y Y Y Y N Ljubeˇsi´c et al. (2017) N N Y N Y Y Litvinova et al. (2017) N N Y N Y Y Mohammad et al. (2018) N N Y Y Y Wang and Jurgens (2018) N Y Y Y Y N N N Kraus et al. (2018) N N Y Y Y Martinc and Pollak (2018) N N Y Y Y Y N Chan and Fyshe (2018) N N Y Y Y Y N Durmus and Cardie (2018) N N N Y Zaghouani and Charfi(2018) N Y Y N N Y Y Plank (2018) N N Y Y Y Y N 4592 Citation Coref L.G S.G Eng L̸=S 0/1 Imm Neo Wood-Doughty et al. (2018) N N Y Y Y Y N Moorthy et al. (2018) N N Y Y Y Levitan et al. (2018) N N Y Y Y Y N Webster et al. (2018) Y Y Y Y N Y Y N Park et al. (2018) N Y Y Y N Y Y N Vanmassenhove et al. (2018) N Y Y N N Y Y Kleinberg et al. (2018) N N Y Y Y Y N Zhao et al. (2018b) N N Y Y Y Y N Balusu et al. (2018) N N N Y Rudinger et al. (2018) Y Y Y Y N N Y Zhao et al. (2018a) Y Y Y Y N Y Y N Kiritchenko and Mohammad (2018) Barbieri and Camacho-Collados (2018) N N Y Y Y N van der Goot et al. (2018) N N Y N Y Y Karlekar et al. (2018) N N Y Y Y Y N de Gibert et al. (2018) N N N Y Mickus et al. (2019) N Y N N 4593 C Example GICoref Document from Wikipedia: Dana Zzyym [[Source: https://en.wikipedia.org/wiki/Dana_Zzyym]] Dana Alix ZzyymA is an Intersex activist and former sailor who was the first military veteran in the United States to seek a non - binary gender U.S. passport , in a lawsuit ZzyymA v. PompeoC . Early life ZzyymA has expressed that theirA childhood as a military brat made it out of the question for themA to be associated with the queer community as a youth due to the prevalence of homophobia in the armed forces . TheirA parentsB hid ZzyymA ’s status as intersex from themA and ZzyymA discovered theirA identity and the surgeries theirA parentsB had approved for themA by themselvesB after theirA Navy service . In 1978 , ZzyymA joined the Navy as a machinist ’s mate . Activism ZzyymA has been an avid supporter of the Intersex Campaign for Equality . Legal case ZzyymA is the first veteran to seek a non - binary gender U.S. passport . In light of the State Department ’s continuing refusal to recognize an appropriate gender marker , on June 27 , 2017 a federal court granted Lambda Legal ’s motion to reopen the case . On September 19 , 2018 , the United States District Court for the District of Colorado enjoined the U.S. Department of State from relying upon its binary - only gender marker policy to withhold the requested passport . 4594 D Example GICoref Document from AO3: Scar Tissue [[Source: https://archiveofourown.org/works/14476524]] [[Author: cornheck]] Despite dreading theirA first true series of final exams , CronaA ’s relieved to have a particularly absorbative memory , lucky to recall all the material theyA ’d been required to catch up on . Half a semester of attendance , a whole year of course content . The only true moment of discomfort came when theyA ’d arrived at the essay portion . Thankful it was easy enough to answer , however , theirA subtle eye - roll stemmed entirely from just how much writing it asked of themA , hands already beginning to ache at the thought of scrawling out two pages on the origins , history , and importance of partnered and grouped soul resonance . By the end of it all , theirA neck , wrist , back , and ribs ached from the strain of theirA typical , hunched posture – a habit theyA defaulted to , and Miss MarieB silently wished theyA ’d be more mindful of . It was a relief , at least to themA , not to be the last one out of the lecture hall . Booklet turned in , theyA left the room as quietly as possible and lingered just outside , an air of hesitance settling upon themA as theyA considered what to do now that , it seemed , everything was over with . No more class , no more lessons , just ... students on break from their studies for the season . “ Kind of a breeze , was n’t it ? ” EvansC ’ voice echoes in the arched hall and CronaA ’s shoulders jump , theirA frame still a tense and anxious mess . “ Oh , ” theyA sigh , “ IA ... IA suppose so . It was n’t ... necessarily hard . ” CronaA answers , putting forth a vaguely forced smile . Smiling with the assumed purpose of making SoulC comfortable with the interaction . A defense mechanism . “ IA - IA guess , for a final , it was easier than IA expected ... everyone ... made it sound like it ’d be difficult . ” “ If by everyone , youA mean Black StarD , then yeah , ” SoulC chuckles , “ heD does n’t really do well on ‘ em ... bad test - taker . ” “ Ah , ” theirA facade falls just in time to be replaced by a much more genuine grin . Of the little theyA ’d spent talking to Black StarD , heD certainly had confidence and skill enough to make up for the lost exam points given hisD performance in every other grading category . “ That ... makes sense . ” “ MakaE ’s always the first one done when it comes to this stuff , sheE practically studies in herE sleep . IC ’m convinced sheE must be practicing clairvoyance the way sheE burns through essay questions , ” SoulC laughs , turning to the meek teenA who gives himC a simple nod in response . Determined not to let an impending awkward silence fall between themF , SoulC pipes up again , “ So , are youA staying here for break ? ” “ Ye - well , IA ... IA think so , ” theyA begin , stuttering , but encouraged to continue by a cock of SoulC ’s head ; a social cue even theyA could read , “ The professorH ... and Miss MarieB G asked if IA ’d like to come and stay with themG for the time being . ” “ Oh , huh , SteinH and MarieB G ? Nice , ” hisC brows lift , clearly some varying degree of happy for the otherA . The optimism is short - lived , observing as CronaA ’s expression falls back to its characteristic expressionless gaze . “ It seems like youA ’ve got a good thing going with those twoG . ” “ IA have n’t decided , yet , if IA should accept the invitation , ” theyA shift a bit where theyA stand . Never having been the best at reassuring others , even hisC own meisterA , SoulC kept hisC mouth shut to avoid stuttering while heC searched for the right words a web of thoughts . “ Y ’A know , IC think it ’s less of an invitation and more of an extended welcome . ” The otherA raises theirA head , taken aback , “ Oh , ” CronaA mutters , in a poignant tone , “ IA ... never considered something like that . ” SoulC does n’t leave much wiggle room for theirA mood to fall any further ( nothing past a flat - lipped frown ) , “ TheyG ’d probably love to have youA , IC bet theyG drive each other nuts sometimes all by themselvesG . ” Though EvansC wo n’t admit it , heC knows it ’s all too likely SteinH might actually put some more effort into taking care of himselfH if heH had someone else besides MarieB to look after . “ IA - IA see , ” theyA exhale with a nod , giving SoulC a hint of affirmation that heC ’d done something to boost the kidA ’s confidence . “ IC mean , it ’s got ta be lonely not to mention boring hanging here all summer ... and the weather , ” SoulC nearly gasps , dramatizing it for added effect , “ Oh , man , IC do n’t know how youA can stay cooped up in that room of yoursA when it ’s so nice out , ” heC grins . “ But ... meh . Different strokes . IC ca n’t judge . ” HisC comments comfort themA , an for a moment theyA forget how this came to be . The cathedral in Italy , Lady MedusaI ’s wrath , and the black blood that infected himC . Every moment theyA spent in the presence of Soul EvansC builds always up to this ; fixation on the memories of theirJ first encounters and all the pain theyA ’ve caused himC , the pain theyA ’ve caused heC and MakaE K both . As quickly as SoulC had lifted the swordsmanA ’s spirits , theyA ’d weighed themselvesA down once more . It seemed so normal , though . SoulC could n’t bring himselfC to feel any sense of accomplishment in the coaxing - out of CronaA ’s smile when the return of theirA self doubt was as certain as the sun in the sky . HisC own stubbornness could n’t let hisC diminished self worth lie . 4595 With another encouraging smile , rows of sharpened incisors appearing oddly charismatic , heC opens hisC mouth to speak – but finds himselfC cut off before heC can even squeeze a word in . “ SoulC , IA ’m sorry , ” the meisterA blurts . Having been pent - up for months , the apology comes forth without inhibition , rolling effortlessly off theirA tongue . “ Sorry ... ? For what ? ” EvansC quirks a brow , chuckling . HeC adjusts hisC stance to face CronaA with the whole of hisC body , maintaining hisC positive demeanor . “ F - for what ... ? ” TheyA stammer , shaking theirA head . For all theirA remorse , theyA thought this would have been obvious . “ For everything , it ’s ... the first time weF dueled , IA was the enemy ! IA - IA almost killed youC , IA - IA ... IA really , really hurt youC , ” theyA answer , still so sick with guild that even theirA confession of responsibility is tainted with frustration . SoulC seems stunned for a moment before harnessing hisC quick wit . “ Hey , now , youA ca n’t take all the credit like that , RagnarokL did most of the damage , ” heC . . .
2020
418
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4596–4608 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4596 Human Attention Maps for Text Classification: Do Humans and Neural Networks Focus on the Same Words? Cansu Sen1, Thomas Hartvigsen2, Biao Yin2, Xiangnan Kong1,2, and Elke Rundensteiner1,2 1Computer Science Department , Worcester Polytechnic Institute 2Data Science Program, Worcester Polytechnic Institute {csen,twhartvigsen,byin,xkong,rundenst}@wpi.edu Abstract Motivated by human attention, computational attention mechanisms have been designed to help neural networks adjust their focus on specific parts of the input data. While attention mechanisms are claimed to achieve interpretability, little is known about the actual relationships between machine and human attention. In this work, we conduct the first quantitative assessment of human versus computational attention mechanisms for the text classification task. To achieve this, we design and conduct a large-scale crowd-sourcing study to collect human attention maps that encode the parts of a text that humans focus on when conducting text classification. Based on this new resource of human attention dataset for text classification, YELP-HAT, collected on the publicly available YELP dataset, we perform a quantitative comparative analysis of machine attention maps created by deep learning models and human attention maps. Our analysis offers insights into the relationships between human versus machine attention maps along three dimensions: overlap in word selections, distribution over lexical categories, and context-dependency of sentiment polarity. Our findings open promising future research opportunities ranging from supervised attention to the design of human-centric attentionbased explanations. 1 Introduction Attention-based models have become the architectures of choice for a vast number of NLP tasks including, but not limited to, language modeling (Daniluk et al., 2017), machine translation (Bahdanau et al., 2015), document classification (Yang et al., 2016), and question answering (Kundu and Ng, 2018; Sukhbaatar et al., 2015). While attention mechanisms have been said to add interpretability since their introduction (Bahdanau et al., 2015), the investigation of whether this claim is correct has Figure 1: Examples of binary human attention (blue in top two texts) and continuous machine attention (red in bottom text). only just recently become a topic of high-interest (Mullenbach et al., 2018; Thorne et al., 2019; Serrano and Smith, 2019). If attention mechanisms indeed offer a more in-depth understanding of a model’s inner-workings, application areas from model debugging to architecture selection would benefit greatly from profound insights into the internals of attention-based neural models. Recently, Jain and Wallace (2019), Wiegreffe and Pinter (2019), and Serrano and Smith (2019) proposed three distinct approaches for evaluating the explainability of attention. Jain and Wallace (2019) base their work on the premise that explainable attention scores should be unique for a given prediction as well as consistent with other featureimportance measures. This prompts their conclusion that attention is not explanation. Based on similar experiments on alternative attention scores, Serrano and Smith (2019) conclude that attention does not necessarily correspond to the importance of inputs. In contrast, Wiegreffe and Pinter (2019) find that attention learns a meaningful relationship between input tokens and model predictions, which cannot be easily hacked adversarially. While these works ask valuable questions, they embrace model-driven approaches for manipulating the attention weights and thereafter evaluate the 4597 post-hoc explainability of the generated machine attention. In other words, they overlook the human factor in the evaluation process – which should be integral in assessing the plausibility of the generated explanations (Riedl, 2019). In this work, we adopt a novel approach to attention explainability from a human-centered perspective and, in particular, investigate to what degree machine attention mimics human behavior. More precisely, we are interested in the following research question: Do neural networks with attention mechanisms attend to the same parts of the text as humans? To this end, we first collect a large dataset of human-attention maps and then compare the validated human attention with a variety of machine attention mechanisms for text classification. Figure 1 displays examples of human and machine-generated attention for classifying a restaurant review’s overall rating. Our goal is to quantify the similarity between human attention and machine-generated attention scores. Measuring this similarity is non-trivial and is not appropriately captured by an existing similarity metric (e.g., Euclidean) between two vectors for the following reasons. A binary human attention vector does not solely denote which tokens are given higher importance but also implies information about the underlying grammatical structure and linguistic construction. For example, whether or not adjectives tend to be high-importance is encoded in the attention weights as well. Further, it is well known that human attention is itself subjective: given the same text and task, human annotators may not always agree on which words are important. That is, one single human’s attention should rarely be regarded as the ground-truth for attention. Given this objective, we use crowd-sourcing to collect a large set of human attention maps. We provide a detailed account of the iterative design process for our data collection study in §3. We design new metrics that quantify the similarity between machine and human attention from three perspectives (§4): Behavioral similarity measures the number of common words selected by human and machine discerning if neural networks with attention mechanisms attend to the same parts of the text as humans. Humans associate certain lexical categories (e.g., adjectives) with a sentiment more heavily. Lexical (grammatical) similarity identifies if machine attention favors similar lexical categories with humans. A high lexical similarity shows that the attention mechanism learns similar language patterns with humans. Context-dependency quanitifies sentiment polarity of word selections. We then employ these metrics to compare attention maps from a variety of attention-based Recurrent Neural Networks (RNN). We find that biDirectional RNNs with additive attention demonstrate strong similarities to human attention for all three metrics. In contrast, uni-directional RNNs with attention differ from human attention significantly. Finally, as the text length increases, and with it, the prediction task becomes more difficult, both the accuracy of the models and similarity between human and machine decrease. Our contributions are as follows: • We conduct a large-scale collection of 15,000 human attention maps as a companion to the publicly-available Yelp Review dataset. Our collected Yelp-HAT (Human ATtention) dataset is publicly available as a valuable resource to the NLP community. • We develop rich metrics for comparing human and machine attention maps for text. Our new metrics cover three complementary perspectives: behavioral similarity, lexical similarity, and context-dependency. • We conduct the first in-depth assessment comparing human versus machine attention maps, with the latter generated by a variety of stateof-the-art soft and hard attention. • We show that when used with bidirectional architectures, attention can be interpreted as human-like explanations for model predictions. However, as text length increases, machine attention resembles human attention less. 2 Preliminaries on Attention Maps In this section, we define the concepts of Human Attention Map and Machine Attention Map. Definition 2.1. Attention Map. An Attention Map (AM) is a vector where each entry in sequence is associated with a word in the corresponding position of the associated text. The value of the entry indicates the level of attention the corresponding word receives with respect to a classification task. Definition 2.2. Human Attention Map. A Human Attention Map (HAM) is a binary attention 4598 map produced by a human, where each entry with a set-bit indicates that the corresponding word receives high attention. Definition 2.3. Machine Attention Map. A Machine Attention Map (MAM) is an attention map generated by a neural network model. If computed through soft-attention, a MAM corresponds to an AM of continuous values, that capture a probability distribution over the words. If computed through hard-attention, a MAM is a binary AM. We now introduce the application of aggregation operators to coalesce HAMs by multiple annotators into aggregated HAMs. Definition 2.4. Consensus Attention Map. If multiple HAMs exist for the same text, a Consensus Attention Map (CAM) is computed through a bitwise AND operation of the HAMs. Definition 2.5. Super Attention Map. If multiple HAMs exist for the same text, a Super Attention Map (SAM) is computed by a bitwise OR operation of the HAMs. 3 Collection and Analysis of Human Attention Maps 3.1 HAM Collection by Crowd-sourcing We collect human attention maps for the Yelp dataset1 on the classification task of rating a review as positive or negative on Amazon Mechanical Turk. Participants are asked to complete two tasks: 1) Identify the sentiment of the review as positive, negative, or neither, and 2) Highlight the words that are indicative of the chosen sentiment. Our interface used for data collection is in Figure 2. Preliminary investigation of the quality of human annotations. First, we conduct a series of data collection studies on two subsets of the Yelp dataset. Both subsets consist of 50 randomlyselected reviews from the Restaurant category. The first subset contains reviews with exactly 50 words, while the second contains reviews with exactly 100 words. For each review, human annotation is collected from two unique users. We explore the quality of data we can collect on Mechanical Turk, as it encourages users to complete their tasks as quickly as possible since the number of completed tasks determines their income. This may lower the quality of collected 1https://www.yelp.com/dataset/ challenge Figure 2: User interface we used for data collection on Amazon Mechanical Turk. data since users may not select all relevant words, instead opting for the few most obvious ones, or they may choose words randomly. Based on our preliminary investigations, we observe that both the average time users spend on the task (44 vs. 70 seconds) and the average number of words selected per review (9 vs. 13 words) increase as the number of words in the review increases from 50 to 100. This suggests that users do not choose words randomly; instead, they make an informed decision. We also visually examine the collected human attention maps and confirm that subjects make meaningful selections. Pilot study assessing two design choices for data collection. Next, we design another pilot study to understand how humans perform the cognitive task of classifying a text and selecting the particular words that led to this decision. In this study, we ask eight participants to perform the same task while adhering to one of two strategies. The first strategy, the read-first design, involves reading the review first, deciding on the sentiment, then rereading the review, this time to highlight the relevant words. The second strategy, the free-style design, gives participants the freedom to choose the relevant words as they read the review to determine the sentiment. Each participant is asked to complete two tasks to experience both strategies. Half of the participants first work with the read-first design followed by the free-style design while the other half work in the reverse order. After completing the tasks, we ask the participants which strategy they find more natural in a post-task questionnaire. Findings from the pilot study. Out of eight participants, half of them find it more useful reading the review first then deciding on the words whereas the other half indicated the opposite. We then evaluate 4599 the collected data from three perspectives to decide which design is most suitable for our purposes. We first examine the agreement between participants adhering to a particular strategy. This involves calculating the percentage of participants that mutually select the same phrase. We find that participant agreement is higher (73%) when the participants are forced to read the review before making any selections compared to using the freestyle design (69%). Next, we investigate how similar the results are to the ground truth we defined for each review. The read-first design achieves better performance (3.30) compared to the freestyle design (3.10). Our final criterion involves examining the amount of noise in the data (i.e., selections which deviate from the chosen sentiment). Only one review exhibits this situation where the review is clearly positive; however, it also contains a negative-opinion sentence. We observe that the read-first design reduces this cross-sentiment noise (1 vs. 0.5 scores). Data collection protocol for the main study. Based on conclusions from the pilot studies, the read-first design is adopted to conduct the main data collection for 5, 000 reviews on Amazon Mechanical Turk. For this study, three different subjects annotated each review, resulting in a total of 15, 000 human attention maps. The resulting Yelp Human Attention Dataset (YELP-HAT) is publicly available 2 . 3.2 Analysis and Insights About HAMs Factors that affect human accuracy. Some reviews contain a mixture of opinions, even though the reviewer felt strongly positive or negative about the restaurant. For example, consider the following review: “Nothing to write home about, the chicken seems microwaved and the appetizers are meh. ... If your [sic] looking for a quick oriental fix I’d say go for it.. otherwise look elsewhere.” This review is labeled as negative, positive, and neither. The annotator who assigned it to the positive class selected the words “go for it” while the annotator who assigned it to the negative class selected the words “otherwise look elsewhere”. This type of “mixed review” is the principal reason for discrepancies in classifications by the human annotators. The nature of crowd-sourcing also causes such inconsistencies as not all annotators provide reviews 2http://davis.wpi.edu/dsrg/PROJECTS/ YELPHAT/index.html of equal quality. Ambiguity in human attention. Intuitively, human attention is highly subjective. Some common patterns across annotators lead to differences in human annotations. A common behavior is to select keywords that indicate a sentiment. Another typical action is to select entire sentences if the sentence expresses an opinion. Some reviews include subjective phrases that people interpret differently with regard to sentiment-polarity. For instance, “I come here often” can be construed as a favorable opinion. However, some people find it neutral. In some cases, an overwhelmingly-positive review incorporates a negative remark (or vice versa). In these cases, some people select all pieces of evidence of any sentiment, whereas others only choose words that indicate the prevailing sentiment. 4 Attention Map Similarity Framework We quantify the similarity between HAMs and MAMs through our similarity framework that contains three new metrics as described in this section. 4.1 Overlap in Word Selections For two attention mechanisms to be similar, they must put attention on the same parts of the text. Thus, we first define a metric for quantifying the overlap in the words selected by human annotators and by deep learning models. Definition 4.1. Behavioral Similarity. Given a collection of attention maps HAMD and MAMD for a text dataset D, behavioral similarity between human (H) and machine (M) corresponds to the average pair-wise similarity between each (HAMi, MAMi) vector pair ∀i ∈D as defined below: PairwiseSimi = AUC(HAMi, MAMi) BehavioralSim(M, H) = 1 |D| X i (PairwiseSimi) where |D| is the number of reviews in the dataset D. Intuitively, this corresponds to adopting the human attention vector as binary ground truth. That is, it measures how similar the machine-generated continuous vector is to this ground truth. AUC is between 0 and 1 with .5 representing no similarity, and 1 the perfect similarity. 4600 4.2 Distribution over Lexical Categories Previous work has found that lexical indicators of sentiment are commonly associated with syntactic categories such as adjective, adverb, noun, and verb (Marimuthu and Devi, 2012). We define the following lexical similarity metric to test if human and machine adopt similar behaviors in terms favoring certain lexical categories. Definition 4.2. Lexical Similarity. Given a collection of attention maps HAMD and MAMD for a text dataset D, Lexical Similarity (LS) between human (H) and machine (M) over D is computed: LS(M, H) = corr(dist(wordsH), dist(wordsM)) where wordsH is a list of all selected words in all reviews of D by human, wordsM is a list of all selected words in all reviews of D by machine, dist() is a function that computes the distribution of a word list over a tagset (e.g., nouns, verbs, etc.). After computing two distributions, the corr() function computes the correlation between them. In our experiments, we adopt Pearson Correlation. If MAM is continuous, selected words by M corresponds to k words with the highest attention scores, where k is the number of words selected by human for that text. Using a random attention R as a baseline where the most important k words are selected randomly, we then compute an Adjusted Lexical Similarity which is between 0 and 1 as follows. AdjustedLS = LS(M, H) −LS(R, H) 1 −LS(R, H) 4.3 Context-dependency of Sentimental Polarity When deciding the sentiment of a review, human subjects may consider positive-sentiment words in a negative review and vice versa. To assess how context-dependant human and machine attentions are, we compute cross-sentiment selections rates. Definition 4.3. Cross-sentiment selection rate (CSSR). Assume we have a collection of attention maps AMD for a dataset D, ground truth for overall sentiment Y for each review in D ( yi ∈{0, 1} ), and a list of positive words P and negative words N in the English language. CSSR denotes the ratio of selected words from the opposite sentiment. p words = get words(HAMD, Y = 1) n words = get words(HAMD, Y = 0) CSSRp = |p words ∩N| |p words ∩P| CSSRn = |n words ∩P| |n words ∩N| get words() function returns a list of attentionreceiving words where HAMij = 1, ∀i, j for the entire set of HAMD, for positive-sentiment reviews (Y = 1) and negative-sentiment reviews (Y = 0) separately. A list of words with positive and negative connotations, P and N, are obtained from Hu and Liu (2004). CSSRp (positive) and CSSRn (negative) is then computed as the ratio of the number of cross-sentiment words over the number of same-sentiment words. A high CSSR means many words from the opposite sentiment are selected. This metric provides insights about how similar human and machine attentions are with regard to their context-dependant behaviour. 5 Is Machine Attention Similar to Human Attention? 5.1 Generating Machine Attention Maps The Yelp dataset contains reviews and their rating scores between 0 and 5 (stars). This rating score corresponds to the ground truth for the review’s overall sentiment. We create a binary classification task by assigning 1 and 2-star reviews to the negative class and 4 and 5-star reviews to the positive class. We omit 3-star reviews as they may not exhibit a clear sentiment. For training neural network models, we extract balanced subsets and split them into 80% training set, 10% validation set and 10% test sets. We then generate MAMs using the following machine learning models. RNN with soft attention. Recurrent Neural Networks (RNN) enhanced with attention mechanisms have emerged as the state-of-the-art for NLP tasks (Bahdanau et al., 2015; Yang et al., 2016; Daniluk et al., 2017; Kundu and Ng, 2018). We implement the additive attention for many-to-one classification task as it is commonly used in the literature (Yang et al., 2016; Bahdanau et al., 2015) and paired it with both uni- and bi-directional RNN. In our implementation, we use LSTM memory cells. 4601 Accuracy Yelp-50 Yelp-100 Yelp-200 Human 0.96 0.94 0.94 RNN 0.91 ± 0.006 0.90 ± 0.013 0.88 ± 0.01 biRNN 0.93 ± 0.008 0.91 ± 0.005 0.88 ± 0.02 Rationales 0.90 ± 0.004 0.85 ± 0.035 0.77 ± 0.015 Table 1: Test accuracy from three subsets of Yelp data. Assuming that Γ is the recurrence function of LSTM and xi is the embedded i-th word of T words in a review, we model our method as: hi = Γ(xi, hi−1), i ∈[1, T] (1) ui = tanh(Whi + b) (2) αi = exp(u⊤ i u) P t exp(u⊤ i u) (3) Here hi, i ∈[1, T] are hidden representations, W, b, and u are trainable parameters, and αi, i ∈ [1, T] are the attention scores for each word xi. A context vector ci corresponds to the weighted average of the hidden representations of words with attention weights, denoted by: ci = X j αjhj (4) Through a softmax layer, context vector ci is then used for further classifying the input sequence. Rationale mechanism. An alternative approach, referred to as “rationale mechanism”, can be seen as a type of hard attention (Lei et al., 2016; Bao et al., 2018). This model consists of two main parts that are jointly learned: a generator and an encoder. The generator specifies a distribution over the input text to select candidate rationales. The encoder is used to make predictions based on the rationales. The two components are integrated and regularized in the cost function with two hyper-parameters, selection lambda, and continuity lambda, for optimizing the representative selections. The selection lambda penalizes the number of words selected, while the continuity lambda encourages the continuity via minimizing the distances of the words chosen. 5.2 Behavioral Similarity Analysis We conduct a set of controlled experiments with the length of the review changing across experiments. First, we generate MAMs for three subsets of the Yelp dataset: reviews containing 50 words (Yelp-50), 100 words (Yelp-100) and 200 words (Yelp-200). Neural network models with attention mechanisms are trained on each of these subsets. The corresponding test set accuracies for sentiment classification of human versus machine are shown in Table 1. Next, we acquire the HAMs collected for each test set. Since each review is annotated by three people, we have three sets of HAMs: HAM1, HAM2, and HAM3. Consensus among the three, CAM and SAM, are computed as per Defs. 2.4 and 2.5. Then we measure the Behavioral Similarity between human and machine. The amount of overlap in the selected words are presented in Table 2. We observe that accuracy and similarity both decrease as the review-length increases and the classification task becomes more difficult for both humans and machine learning models. We identify two reasons for this: First, when a review is long, the prevailing opinion is usually not obvious at first glance and may require more intensive reading and contemplating. Second, the reviewers are more likely to state conflicting facts and opinion in long reviews. This, in turn, creates distracting and hardto-read text. Compared to unidirectional model, bidirectional RNN with attention consistently rates closer to human attention. This is most striking for the Yelp-50 subset. This can be explained with the fact that bidirectional RNNs possess information from both directions of the text similar to humans. For all three subsets, Yelp-50, Yelp-100, and Yelp-200, behavioral similarity for Consensus Attention Map is higher than all three HAMs. This is an important result because it indicates that the words all annotators agreed to be important are selected by machine attention too, whereas more subjective selections do not always get high attention from machine, indicated by lower SAM similarity. Finally, we compare similarity of these three sets of HAMs. Even though human-to-human similarity is usually higher than human-to-machine similarity (as expected), the numbers still far from being close to 1. This confirms the subjectivity of human attention. Also, note that human-to-human similarity decreases as the review length increases. We observe that the performance of the rationalebased models degrades more sharply as the reviewlength increases. As our goal is to compare human attention with machine-generated attention for model interpretability, we optimize the model not only for accuracy but also for the number of selected rationales. We aim to generate roughly an 4602 Yelp-50 HAM1, k = 10 HAM2, k = 12 HAM3, k = 12 CAM, k = 5 SAM, k = 22 HAM2 0.73 HAM3 0.74 0.75 RNN Attention 0.59 ± 0.021 0.59 ± 0.002 0.57 ± 0.012 0.59 ± 0.024 0.58 ± 0.021 Bi-RNN Attention 0.69 ± 0.004 0.70 ± 0.008 0.69 ± 0.007 0.79 ± 0.003 0.64 ± 0.008 Rationales 0.62 ± 0.014 0.62 ± 0.012 0.63 ± 0.015 0.68 ± 0.020 0.58 ± 0.010 Yelp-100 HAM1, k = 15 HAM2, k = 16 HAM3, k = 16 CAM, k = 6 SAM, k = 30 HAM2 0.71 HAM3 0.73 0.74 RNN Attention 0.57 ± 0.009 0.58 ± 0.011 0.59 ± 0.012 0.57 ± 0.010 0.58 ± 0.008 Bi-RNN Attention 0.65 ± 0.011 0.65 ± 0.021 0.66 ± 0.021 0.73 ± 0.031 0.62 ± 0.012 Rationales 0.55 ± 0.015 0.55 ± 0.005 0.55 ± 0.010 0.59 ± 0.015 0.54 ± 0.005 Yelp-200 HAM1, k = 26 HAM2, k = 27 HAM3, k = 25 CAM, k = 11 SAM, k = 45 HAM2 0.70 HAM3 0.69 0.71 RNN Attention 0.60 ± 0.011 0.60 ± 0.013 0.60 ± 0.014 0.60 ± 0.017 0.60 ± 0.011 Bi-RNN Attention 0.61 ± 0.015 0.61 ± 0.008 0.61 ± 0.018 0.63 ± 0.009 0.60 ± 0.008 Rationales 0.51 ± 0.013 0.52 ± 0.021 0.51 ± 0.018 0.52 ± 0.025 0.49 ± 0.019 Table 2: Behavioral similarity of human attention to machine on varying review length. k indicates the average number of words selected. (0.5:no similarity, 1.0:perfect similarity) equal number of words selected by both human annotators and machine-generated rationales. Hence, we force the rationale-models to pick fewer words by tuning the selection lambda accordingly. This gives a comparative advantage to attention-based models against rationale-based models, as the rationale model is a hard-attention mechanism. In addition, rationales are better suited for sentencelevel tasks as they encourage consecutive selection as opposed to the behavior of attention. 5.3 Lexical Similarity Analysis Next, we analyze if humans and neural networks pay more attention to words from particular lexical categories using Adjusted Lexical Similarity score. Lexical Similarity results, presented in Table 3, are consistent with Behavioral Similarity in that bidirectional model with attention is most similar to human (0.91 for Yelp-50 and 0.84 for Yelp-100). Rationales model follows bidirectional RNN, and unidirectional RNN is the least similar model to human. Overall, lexical similarity to human decreases for all models, as the reviews become longer. Next, we inspect which lexical categories are selected more heavily by human and machine. For this, we provide relative frequency of lexical categories for human-selected words, machine-selected words (bi-RNN), and overall relative frequency of this tag within the dataset. Adjectives (Human:0.24 bi-RNN:0.23 Overall:0.02), comparative adjectives (Human:0.002 bi-RNN:0.001 Overall:0.0001), and nouns (Human:0.38 bi-RNN:0.37 Overall:0.09) are among the lexical categories that humans and biRNN models favor heavily. Similarly, personal pronouns are rarely selected by neither humans nor bi-RNN models (Human:0.005 bi-RNN:0.005 Overall:0.01). 5.4 Cross-sentiment Selection Rate Analysis Finally, we compute CSSR scores, presented in Table 4, to evaluate the context-dependency of sentimental polarity for human and machine attentions. Our observations for Yelp-50 dataset are as follows. By human annotators, almost exclusively positive words are selected if the overall review sentiment is positive. For negative reviews, higher number of positive words are selected than negative words (CSSRp = 0.06, CSSRn = 0.20). Among the neural network models, the bidirectional RNN once more behaves most similar to human annotators with CSSRp = 0.04 and CSSRn = 0.19. RNN model’s approach differs from that of human’s and bi-RNN’s. Even though the behaviour is similar for positive polarity (CSSRp = 0.06), the opposite is true for negative polarity. In fact, positive words selected 2.28 times more than negative words in negative reviews, which is counter-intuitive. For the Rationales model, CSSRp is 0.08 and CSSRn 4603 is 0.44. This indicates that Rationales model is more similar to human attention than RNN model with attention. We observe similar trends for the Yelp-100 and Yelp-200 datasets. 6 Related Work A large body of work has been using attention mechanisms to attempt to bring ’interpretability’ to model predictions (Choi et al., 2016; Sha and Wang, 2017; Yang et al., 2016). However, they only assess the produced attention maps qualitatively by visualizing a few hand-selected instances. Recently, researchers began to question the interpretability of attention. Jain and Wallace (2019) and Serrano and Smith (2019) argue that if alternative attention distributions exist that produce similar results to those obtained by the original model, then the original model’s attention scores cannot be reliably used to explain the model’s prediction. They empirically show that achieving such alternative distributions is possible. In contrast, Wiegreffe and Pinter (2019) find that attention learns a meaningful relationship between input tokens and model predictions which cannot be easily hacked adversarially. Das et al. (2016) conducted the first quantitative assessment of computational attention mechanisms for the visual question answering (VQA) task. Similar to our work, they collect a human attention dataset, then measure the similarity of human and machine attention within the context of VQA. This VQA-HAT dataset now provides a fertile research vehicle for researchers in computer vision for studying the supervision of the attention mechanism (Liu et al., 2017a). The development of a similar dataset and an in-depth quantitative evaluation for text to advance NLP research is sorely lacking. In a concurrent and independent work, DeYoung et al. (2019) collects the ERASER dataset for human annotations of rationales. While ERASER includes multiple datasets for a number of NLP tasks with relatively small amounts of data for each, we focus on text classification and collect a large amount of data on a different corpus. 7 Discussion Recent papers, including our work, take strides at answering the question if attention is interpretable. This is complicated by the fact that “interpretability” remains a not well-defined concept. Attention adds transparency. Lipton (2018) defines transparency as overall humanunderstanding of a model, i.e., why a model makes its decisions. Under this definition, attention scores can be seen as partial transparency. That is, they provide a look into the inner workings of a model, in that they produce an easily-understandable weighting of hidden states (Wiegreffe and Pinter, 2019). Attention is not faithful. Whether adversarial attention scores exist that result in the same predictions as the original attention scores helps us understand if attention is faithful. With their empirical analyses, Serrano and Smith (2019) and Jain and Wallace (2019) show that attention is not faithful. Rationale models for human-like explanations. Riedl (2019) argues that explanations are post-hoc descriptions of how a system came to a given conclusion. This raises the question of what makes a good explanation of the behavior of a machine learning system. One line of research offers these explanations in the form of binary rationales, namely, explanations that plausibly justify a model’s actions (Bao et al., 2018; Lei et al., 2016). Our approach at attention as human-like explanations. In claiming attention is explanation, it is seen to mimic humans in rationalizing past actions. In our work, we approach interpretability from this human-centric perspective. We develop a systematic approach to either support or refute the hypothesis that attention corresponds to humanlike explanations for model behavior. Based on our comparative analyses, we provide initial answers to this important question by finding insights into the similarities and dissimilarities of attention-based architectures to human attention. Towards additional tasks beyond text classification. Confidently concluding whether attention mimics human requires tremendous efforts from many researchers with human data to be collected via a well-designed data collection methodology, both labor-intensive and costly task. In this work, we thus focus on one task, namely, sentiment classification, and collect HAM for this task and on a single dataset. We invite other researchers to continue this line of research by exploring other tasks (e.g., question answering). Next steps in attention research. Our work opens promising future research opportunities. One is to supervise attention models explicitly. Attention mechanisms themselves are typically learned in an unsupervised manner. However, initial re4604 Yelp-50 Yelp-100 Yelp-200 Lexical Sim. Adjusted LS Lexical Sim. Adjusted LS Lexical Sim. Adjusted LS Random Attention 0.85 ± 0.006 0.84 ± 0.013 0.90 ± 0.010 RNN Attention 0.93 ± 0.015 0.54 0.91 ± 0.007 0.44 0.93 ± 0.005 0.37 Bi-RNN Attention 0.99 ± 0.005 0.91 0.98 ± 0.013 0.84 0.93 ± 0.003 0.36 Rationales 0.95 ± 0.012 0.66 0.93 ± 0.027 0.53 0.90 ± 0.002 0.05 Table 3: Lexical Similarity and Adjusted Lexical Similarity of human attention to machine on varying review length. (Adjusted LS 0:no similarity, 1:perfect similarity) CSSRp CSSRn Human 0.06 0.20 RNN Attention 0.06 2.28 Bi-RNN Attention 0.04 0.19 Rationales 0.08 0.44 Table 4: Cross-sentiment Selection Rates for positive and negative reviews for Yelp-50 dataset. search offers compelling evidence for the success of supervised attention models (Chen et al., 2017; Liu et al., 2017b) in the computer vision area. Also, attention has the potential to be leveraged for both making predictions and concurrently producing human-centric explanations similar to rationalebased architectures. 8 Conclusion To gain a deeper understanding of the relationships between human and attention-based neural network models, we conduct a large crowd-sourcing study to collect human attention maps for text classification. This human attention dataset represents a valuable community resource that we then leverage for quantifying similarities between human and attention-based neural network models using novel attention-map similarity metrics. Our research not only results in insights into significant similarities between bidirectional RNNs and human attention, but also opens the avenue for promising future research directions. Acknowledgments This research was supported by the U.S. Dept. of Education grant P200A150306, Worcester Polytechnic Institute through the Arvid Anderson Fellowship, and the National Science Foundation through grants IIS-1815866, IIS-1910880, IIS1718310, and CNS -1852498. We thank Prof. Lane Harrison, WPI, for his advice and guidance on the design study for the data collection, and Prof. Jeanine Skorinko, WPI, for helpful discussion about the cognitive aspects of human attention. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. Proceedings of International Conference on Learning Representations. Yujia Bao, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human rationales. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1903–1913. Lei Chen, Mengyao Zhai, and Greg Mori. 2017. Attending to distinctive moments: Weakly-supervised attention models for action localization in video. In Proceedings of the IEEE International Conference on Computer Vision, pages 328–336. Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, and Walter Stewart. 2016. Retain: An interpretable predictive model for healthcare using reverse time attention mechanism. In Advances in Neural Information Processing Systems, pages 3504–3512. Michał Daniluk, Tim Rockt¨aschel, Johannes Welbl, and Sebastian Riedel. 2017. Frustratingly short attention spans in neural language modeling. ICLR. Abhishek Das, Harsh Agrawal, Larry Zitnick, Devi Parikh, and Dhruv Batra. 2016. Human attention in visual question answering: Do humans and deep networks look at the same regions? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 932–937. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models. arXiv preprint arXiv:1911.03429. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. ACM. 4605 Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543–3556. Souvik Kundu and Hwee Tou Ng. 2018. A questionfocused multi-factor attention network for question answering. In Thirty-Second AAAI Conference on Artificial Intelligence. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117. Zachary C Lipton. 2018. The mythos of model interpretability. Communications of the ACM, 61(10):36–43. Chenxi Liu, Junhua Mao, Fei Sha, and Alan Yuille. 2017a. Attention correctness in neural image captioning. In Thirty-First AAAI Conference on Artificial Intelligence. Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017b. Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1789–1798. K Marimuthu and Sobha Lalitha Devi. 2012. How human analyse lexical indicators of sentiments-a cognitive analysis using reaction-time. In Proceedings of the 2nd Workshop on Sentiment Analysis where AI meets Psychology, pages 81–90. James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101–1111. Mark O Riedl. 2019. Human-centered artificial intelligence and machine learning. arXiv preprint arXiv:1901.11184. Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Ying Sha and May D Wang. 2017. Interpretable predictions of clinical outcomes with an attention-based recurrent neural network. In Proceedings of the 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, pages 233–240. ACM. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2019. Generating token-level explanations for natural language inference. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 963–969. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 11–20. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. 4606 A Appendix A.1 Training Rationale-based models For the Rationale Neural Prediction Framework, we use the Pytorch implementation3 suggested by Lei et al. (2016). In this framework, the encoder is built as Convolutional Neural Network (CNN) and the generator is built as Gumbel Softmax with independent selectors. The following hyper-parameters of CNN are used as pointed out by (Lei et al., 2016): 200 hidden dimensions, 0.1 dropout rate, 2 hidden layers, 128 batch size, 64 epochs, 0.0003 initial learning rate. We conducted an extensive parameter search to find the optimum values for the two key hyper-parameters of the rationale model, selectionlambda, and continuity-lambda, which regularize the number and the continuity of words selected during the optimization process. For the selection lambda, we experimented with values 1, 1e-1, 1e2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8, 1e-9, and 0. For the continuity lambda, we experimented with values 0 and two times of selection lambda. We observe that the performance of the rationalebased model is extremely sensitive to its hyperparameters. One conflicting interest with the rationale-based models is that the more words the model selects, the accuracy becomes higher. As our goal is to compare human attention with machine-generated attention for model interpretability, we optimize the model not only for accuracy but also for the number of selected rationales. We aim to generate roughly an equal number of words selected by both human annotators and machine-generated rationales. A.2 Training Attention-based models We used the following hyper-parameters to RNNbased models. 100 hidden dimensions, 100 attention size, 0.2 dropout rate, 128 batch size, 64 epochs, 0.0001 initial learning rate. A.3 Additional Analysis Results An example visualization of the attention maps annotated by human annotators and machine learning models is provided in Figure 4. The agreement between human annotators and all machine learning models can be considered high in this example, as there are many mutual selections. 3https://github.com/yala/text_nn Figure 3: Human attention is highly subjective. Some annotators tend to select only a few words, whereas others choose entire sentences. Another example is provided in Figure 3, demonstrating the attention maps provided by two different annotators for the same review. This is an extreme example of the subjectivity of human attention. The first annotator only highlights individual words with the strongest cues of sentiment, whereas the second annotator sometimes selects entire sentences when they indicate a sentiment. Table 5 shows the distribution of selected words over lexical categories for Human (CAM), Machine (bi-RNN), and the entire corpus for the Yelp-50 subset. Any divergence in the Human and Machine columns from the Corpus column indicates a tendency of selection for a lexical category. For example, adjectives are selected very heavily by both Human and Machine, even though they only make 0.02 of all words in the dataset. 4607 Lexical Category Human Machine(bi-RNN) Corpus Coordinating conjunction 0.0000 0.0098 0.0147 Cardinal number 0.0098 0.0077 0.0043 Determiner 0.0112 0.0168 0.0312 Existentialthere 0.0000 0.0000 0.0000 Foreign word 0.0000 0.0000 0.0000 Preposition or subordinating conjunction 0.0266 0.0084 0.0298 Adjective 0.2374 0.2269 0.0201 Adjective, comparative 0.0021 0.0014 0.0002 Adjective, superlative 0.0252 0.0287 0.0016 List item marker 0.0000 0.0000 0.0000 Modal 0.0035 0.0000 0.0030 Noun, singular or mass 0.3838 0.3711 0.0950 Noun, plural 0.0000 0.0000 0.0000 Proper noun, singular 0.0000 0.0000 0.0000 Proper noun, plural 0.0413 0.0665 0.0154 Predeterminer 0.0000 0.0000 0.0000 Possessive ending 0.0000 0.0000 0.0000 Personal pronoun 0.0056 0.0049 0.0141 Possessive pronoun 0.0035 0.0028 0.0067 Adverb 0.1296 0.0931 0.0277 Adverb, comparative 0.0070 0.0000 0.0014 Adverb, superlative 0.0000 0.0000 0.0000 Particle 0.0000 0.0000 0.0000 Symbol 0.0000 0.0000 0.0000 to 0.0035 0.0007 0.0077 Interjection 0.0000 0.0000 0.0000 Verb, base form 0.0196 0.0028 0.0098 Verb, past tense 0.0070 0.0609 0.0148 Verb, gerund or present participle 0.0357 0.0462 0.0053 Verb, past participle 0.0455 0.0455 0.0083 Verb, non-3rd person singular present 0.0000 0.0028 0.0023 Verb, 3rd person singular present 0.0007 0.0021 0.0065 Wh-determiner 0.0000 0.0000 0.0005 Wh-pronoun 0.0007 0.0000 0.0005 Possessive wh-pronoun 0.0000 0.0000 0.0000 Wh-adverb 0.0007 0.0007 0.0012 Table 5: Distribution over lexical categories for human-selected words, machine-selected words, and the entire corpus. 4608 Figure 4: Visualizations of attention maps by human annotators and machine learning models. From top to bottom: first human annotator, second human annotator, RNN, bi-RNN, Rationales.
2020
419
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 437–442 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 437 Opportunistic Decoding with Timely Correction for Simultaneous Translation Renjie Zheng 1,2,∗ Mingbo Ma 2,∗ Baigong Zheng 2 Kaibo Liu2 Liang Huang 1,2 1Oregon State University, Corvallis, OR, USA 2Baidu Research, Sunnyvale, CA, USA {renjiezheng,mingboma,baigongzheng}@baidu.com {kaiboliu,lianghuang}@baidu.com Abstract Simultaneous translation has many important application scenarios and attracts much attention from both academia and industry recently. Most existing frameworks, however, have difficulties in balancing between the translation quality and latency, i.e., the decoding policy is usually either too aggressive or too conservative. We propose an opportunistic decoding technique with timely correction ability, which always (over-)generates a certain mount of extra words at each step to keep the audience on track with the latest information. At the same time, it also corrects, in a timely fashion, the mistakes in the former overgenerated words when observing more source context to ensure high translation quality. Experiments show our technique achieves substantial reduction in latency and up to +3.1 increase in BLEU, with revision rate under 8% in Chinese-to-English and English-to-Chinese translation. 1 Introduction Simultaneous translation, which starts translation before the speaker finishes, is extremely useful in many scenarios, such as international conferences, travels, and so on. In order to achieve low latency, it is often inevitable to generate target words with insufficient source information, which makes this task extremely challenging. Recently, there are many efforts towards balancing the translation latency and quality with mainly two types of approaches. On one hand, Ma et al. (2019a) propose very simple frameworks that decode following a fixed-latency policy such as waitk. On the other hand, there are many attempts to learn an adaptive policy which enables the model to decide READ or WRITE action on the fly using various techniques such as reinforcement learning (Gu et al., 2017; Alinejad et al., 2018; Grissom II ∗These authors contributed equally. yt <latexit sha1_base64="SeGBdmF0K269BzW9NTI6ylilwtw=">AB6nicbVDLSgNBEOyNrxhfUY9eB oPgKez6QI9BLx4jmgckS5idzCZDZmeXmV4hLPkELx4U8eoXefNvnCR70MSChqKqm+6uIJHCoOt+O4WV1bX1jeJmaWt7Z3evH/QNHGqGW+wWMa6HVDpVC8gQIlbyea0yiQvBWMbqd+64lrI2L1iOE+xEdKBEKR tFKD+Me9soVt+rOQJaJl5MK5Kj3yl/dfszSiCtkhrT8dwE/YxqFEzySambGp5QNqID3rFU0YgbP5udOiEnVumTMNa2FJKZ+nsio5Ex4yiwnRHFoVn0puJ/XifF8NrPhEpS5IrNF4WpJBiT6d+kLzRnKMeWUKaFvZ WwIdWUoU2nZEPwFl9eJs2zqndevby/qNRu8jiKcATHcAoeXEN7qAODWAwgGd4hTdHOi/Ou/Mxby04+cwh/IHz+QN1Bo3r</latexit> ˆy6w t <latexit sha1_base64="KB9f+IQoMdoV0w7Aq9F4Qd4Ea5M=">ACXicbVC5TsNAEF2HK4QrQEmzIkKimwOQ RlBQxkckixsdabdbLK+mB3DLIstzT8Cg0FCNHyB3T8DZvEBSQ8aSn92Y0M8+LBVdgmt9GaWFxaXmlvFpZW9/Y3Kpu7RVlEjKWjQSkex6RDHBQ9YCDoJ1Y8lI4AnW8UaXY79z6TiUXgDacycgAxC7nNKQEtuFdtDApkdEBh6fpbmuQu 3mS3YnRIkBPyQu9WaWTcnwPEKkgNFWi61S+7H9EkYCFQZTqWYMTkYkcCpYXrETxWJCR2TAepqGJGDKySaf5PhAK3sR1KXj9Rf09kJFAqDTzdOT5ZzXpj8T+vl4B/7mQ8jBNgIZ0u8hOBIcLjWHCfS0ZBpJoQKrm+FdMhkYSCDq+iQ 7BmX54n7aO6dVw/vT6pNS6KOMpoD+2jQ2ShM9RAV6iJWoiR/SMXtGb8WS8GO/Gx7S1ZBQzu+gPjM8fJaObQg=</latexit> revision window decoding time t <latexit sha1_base64="t6XaytdIsHwdU4AeCNDSjPNP5sM=">AB6HicbVDLSgNBEOz1GeMr6tHLYBA8hV0f6DHoxWMC5gHJEmYns8mY2dlplcIS7AiwdFvPpJ 3vwbJ8keNLGgoajqprsrSKQw6Lrfzsrq2vrGZmGruL2zu7dfOjhsmjVjDdYLGPdDqjhUijeQIGStxPNaRI3gpGd1O/9cS1EbF6wHC/YgOlAgFo2ilOvZKZbfizkCWiZeTMuSo9Upf3X7M0ogrZJIa0/HcBP2MahRM8kmxmxqeUDaiA96xVNGIGz+bHTohp1bpkzDWthSmfp7IqORMeMosJ0RxaFZ9Kbif14nxfDGz4RKUuSKzReFqSQYk+ nXpC80ZyjHlCmhb2VsCHVlKHNpmhD8BZfXibN84p3UbmqX5art3kcBTiGEzgD6hCvdQgwYw4PAMr/DmPDovzrvzMW9dcfKZI/gD5/MH4m2M/w=</latexit> t + 1 <latexit sha1_base64="tGQ1GTwhm57M/cuPNJRIL/VDcNU=">AB6nicbVDLSgNBEOz1GeMr6tHLYBAEIez6QI9BLx4jmgckS5idzCZDZmeXmV4hLPkELx4U8eoXefNvnCR70MSChqKqm+6uI JHCoOt+O0vLK6tr64WN4ubW9s5uaW+/YeJUM15nsYx1K6CGS6F4HQVK3ko0p1EgeTMY3k785hPXRsTqEUcJ9yPaVyIUjKVHvDU65bKbsWdgiwSLydlyFHrlr46vZilEVfIJDWm7bkJ+hnVKJjk42InNTyhbEj7vG2pohE3fjY9dUyOrdIjYaxtKSRT9fdERiNjRlFgOyOKAzPvTcT/vHaK4bWfCZWkyBWbLQpTSTAmk79JT2jOUI4soUwLeythA6opQ5tO0Ybgzb+8SBpnFe+8cnl/Ua7e5HEU4BCO4 AQ8uIq3EN6sCgD8/wCm+OdF6cd+dj1rk5DMH8AfO5w+4yY1v</latexit> correction 8 > > < > > : <latexit sha1_base64="vOXxVXrmuiv79AW8seb2TK J6MCA=">ACN3icfVDLSgMxFM34dnxVXboJFsVmfGBLotuXImCVaEpJZPeaYOZzJDcEcvQv3Ljb7jTjQtF3PoH prWCL7wkcDjnHu69J8qUtBgE97I6Nj4xOTUtD8zOze/UFpcOrNpbgTURKpScxFxC0pqKFEBReZAZ5ECs6jy4O+ fn4FxspUn2I3g0bC21rGUnB0VLN0xBTEyAoWQVvqIuFo5HXPp+vUPcboP8hnoFufDmZku4OVZqkcVIJB0d8gHI yGdZxs3THWqnIE9AoFLe2HgYZNgpuUAoFPZ/lFjIuLnkb6g5qnoBtFIO7e3TNMS0ap8Z9jXTAfnUPLG2m0Su063 ZsT+1PvmXVs8x3msUmc5ghYfg+JcUxpP0TakgYEq4DXBjpdqWiw0X6KL2XQjhz5N/g7PNSrhV2TnZLlf3h3F MkRWySjZISHZJlRySY1IjgtyQB/JEnr1b79F78V4/Wke8oWeZfCv7R1tL6Y8</latexit> … … … 8 > > < > > : <latexit sha1_base64="vOXxVXrmuiv79AW8seb2TKJ6MCA=">AC N3icfVDLSgMxFM34dnxVXboJFsVmfGBLotuXImCVaEpJZPeaYOZzJDcEcvQv3Ljb7jTjQtF3PoHprWCL7wkcDjnHu69J8qUtBgE97I6Nj4xOTU tD8zOze/UFpcOrNpbgTURKpScxFxC0pqKFEBReZAZ5ECs6jy4O+fn4FxspUn2I3g0bC21rGUnB0VLN0xBTEyAoWQVvqIuFo5HXPp+vUPcboP8h noFufDmZku4OVZqkcVIJB0d8gHIyGdZxs3THWqnIE9AoFLe2HgYZNgpuUAoFPZ/lFjIuLnkb6g5qnoBtFIO7e3TNMS0ap8Z9jXTAfnUPLG2m0S u063ZsT+1PvmXVs8x3msUmc5ghYfg+JcUxpP0TakgYEq4DXBjpdqWiw0X6KL2XQjhz5N/g7PNSrhV2TnZLlf3h3FMkRWySjZISHZJlRySY1 IjgtyQB/JEnr1b79F78V4/Wke8oWeZfCv7R1tL6Y8</latexit> t + 2 <latexit sha1_base64="9+8co5X+9cuaDI/4vU0v8CjHtc=">AB6nicbVDLSgNBEOyNrxhfUY9eBoMgCGE3KnoMevEY0TwgWcLsZJIMmZ1dZnqFsOQTvHhQxKtf5M2/cZLsQRMLGoqbrq7glg Kg67eRWVtfWN/Kbha3tnd294v5Bw0SJZrzOIhnpVkANl0LxOgqUvBVrTsNA8mYwup36zSeujYjUI45j7od0oERfMIpWesCzSrdYcsvuDGSZeBkpQYZat/jV6UsCblCJqkxbc+N0U+pRsEknxQ6ieExZSM64G1LFQ258dPZqRNyYpUe6UfalkIyU39PpDQ0ZhwGtjOkODSL3lT8z2sn2L/2U6HiBLli80X9RBKMyPRv0hOaM5RjSyjTwt5K2JBqytCmU7AheIsvL5NGpeydly/vL0rVmyOPBzBMZyCB1d QhTuoQR0YDOAZXuHNkc6L8+58zFtzTjZzCH/gfP4Auk2NcA=</latexit> correction y<t <latexit sha1_base64="5wmZKBhkP+VlNy8pxWZS+/TmoiA=">AB+nicbVDLSsNAFL2pr1pfqS7dDBbBVUl8oAs XRTcuK9gHtCFMpN26OTBzEQJMZ/ixoUibv0Sd/6NkzYLrR4YOJxzL/fM8WLOpLKsL6OytLyulZdr21sbm3vmPXdrowSQWiHRDwSfQ9LylIO4opTvuxoDjwO150+vC791TIVkU3qk0pk6AxyHzGcFKS65Zz4YBVhPz9I8d7NLlbtmw2paM 6C/xC5JA0q0XfNzOIpIEtBQEY6lHNhWrJwMC8UIp3ltmEgaYzLFYzrQNMQBlU42i56jQ62MkB8J/UKFZurPjQwHUqaBpyeLnHLRK8T/vEGi/AsnY2GcKBqS+SE/4UhFqOgBjZigRPFUE0wE01kRmWCBidJt1XQJ9uKX/5LucdM+aZ7dnjZaV2U dVdiHAzgCG86hBTfQhg4QeIAneIFX49F4Nt6M9/loxSh39uAXjI9vLjKUmg=</latexit> irreversible Figure 1: Besides yt, opportunistic decoding continues to generate additional w words which are represented as ˆy⩽w t . The timely correction only revises this part in future steps. Different shapes denote different words. In this example, from step t to t + 1, all previously opportunistically decoded words are revised, and an extra triangle word is generated in opportunistic window. From step t + 1 to t + 2, two words from previous opportunistic window are kept and only the triangle word is revised. et al., 2014), supervised learning over pseudooracles (Zheng et al., 2019a), imitation learning (Zheng et al., 2019b), model ensemble (Zheng et al., 2020) or monotonic attention (Ma et al., 2019d; Arivazhagan et al., 2019). Though the existing efforts improve the performance in both translation latency and quality with more powerful frameworks, it is still difficult to choose an appropriate policy to explore the optimal balance between latency and quality in practice, especially when the policy is trained and applied in different domains. Furthermore, all existing approaches are incapable of correcting the mistakes from previous steps. When the former steps commit errors, they will be propagated to the later steps, inducing more mistakes to the future. Inspired by our previous work on speculative beam search (Zheng et al., 2019c), we propose an opportunistic decoding technique with timely correction mechanism to address the above problems. As shown in Fig. 1, our proposed method always decodes more words than the original policy at each step to catch up with the speaker and 438 reduce the latency. At the same time, it also employs a timely correction mechanism to review the extra outputs from previous steps with more source context, and revises these outputs with current preference when there is a disagreement. Our algorithm can be used in both speech-to-text and speech-to-speech simultaneous translation (Oda et al., 2014; Bangalore et al., 2012; Yarmohammadi et al., 2013). In the former case, the audience will not be overwhelmed by the modifications since we only review and modify the last few output words with a relatively low revision rate. In the later case, the revisable extra words can be used in look-ahead window in incremental TTS (Ma et al., 2019b). By contrast, the alternative retranslation strategy (Arivazhagan et al., 2020) will cause non-local revisions which makes it impossible to be used in incremental TTS. We also define, for the first time, two metrics for revision-enabled simultaneous translation: a more general latency metric Revision-aware Average Lagging (RAL) as well as the revision rate. We demonstrate the effectiveness of our proposed technique using fixed (Ma et al., 2019a) and adaptive (Zheng et al., 2019a) policies in both Chineseto-English and English-to-Chinese translation. 2 Preliminaries Full-sentence NMT. The conventional fullsentence NMT processes the source sentence x = (x1, ..., xn) with an encoder, where xi represents an input token. The decoder on the target side (greedily) selects the highest-scoring word yt given source representation h and previously generated target tokens, y<t = (y1, ..., yt−1), and the final hypothesis y = (y1, ..., yt) with yt = <eos> has the highest probability: p(y | x) = Q|y| t=1 p(yt | x, y<t) (1) Simultaneous Translation. Without loss of generality, regardless the actual design of policy, simultaneous translation is represented as: pg(y | x) = Q|y| t=1 p(yt | x⩽g(t), y<t) (2) where g(t) can be used to represent any arbitrary fixed or adaptive policy. For simplicity, we assume the policy is given and does not distinguish the difference between two types of policies. 3 Opportunistic Decoding with Timely Correction and Beam Search Opportunistic Decoding. For simplicity, we first apply this method to fixed policies. We define the original decoded word sequence at time step t with yt, which represents the word that is decoded in time step t with original model. We denote the additional decoded words at time step t as ˆy⩽w t = (y1 t , ..., yw t ), where w denote the number of extra decoded words. In our setting, the decoding process is as follows: pg(yt ◦ˆy⩽w t | x⩽g(t)) = pg(yt | x⩽g(t)) Qw i=1 pg(ˆyi t | x⩽g(t), yt ◦ˆy<i t ) (3) where ◦is the string concatenation operator. We treat the procedure for generating the extra decoded sequence as opportunistic decoding, which prefers to generate more tokens based on current context. When we have enough information, this opportunistic decoding eliminates unnecessary latency and keep the audience on track. With a certain chance, when the opportunistic decoding tends to aggressive and generates inappropriate tokens, we need to fix the inaccurate token immediately. Timely Correction. In order to deliver the correct information to the audience promptly and fix previous mistakes as soon as possible, we also need to review and modify the previous outputs. At step t + 1, when encoder obtains more information from x⩽g(t) to x⩽g(t+1), the decoder is capable to generate more appropriate candidates and may revise and replace the previous outputs from opportunistic decoding. More precisely, ˆy⩽w t and yt+1 ◦ˆy⩽w−1 t+1 are two different hypothesis over the same time chunk. When there is a disagreement, our model always uses the hypothesis from later step to replace the previous commits. Note our model does not change any word in yt from previous step and it only revise the words in ˆy⩽w t . Modification for Adaptive Policy. For adaptive policies, the only difference is, instead of committing a single word, the model is capable of generating multiple irreversible words. Thus our proposed methods can be easily applied to adaptive policies. Correction with Beam Search. When the model is committing more than one word at a time, we can use beam search to further improve the translation quality and reduce revision rate (Murray and Chiang, 2018; Ma et al., 2019c). The decoder maintains a beam Bk t of size b at step t, which is ordered list of pairs 439 bùshí ૲Ջ Bush zǒngtǒng ௛ᕹ President de ጱ of Jiāng ࿯ Jiang fāyán ݎ᥺ speech biăoshì ᤒᐏ express Zémín ၂࿆ Zemin dùi ੒ to Jiang Zemin expressed his welcome to his agreement to President 1 2 3 4 5 6 7 8 9 zàntóng ᩩݶ agreement … decoding time t = 4 t = 5 expressed … ҅ bìngqiě ଚӬ and 10 11 Jiang Zemin his to President t = 6 expressed Jiang Zemin Bush agreement Figure 2: The decoder generates target word y4 = “his” and two extra words “welcome to” at step t = 4 when input x9 = “z`ant´ong” (“agreement”) is not available yet. When the model receives x9 at step t = 5, the decoder immediately corrects the previously made mistake “welcome” with “agreement” and emits two additional target words (“to President”). The decoder not only is capable to fix the previous mistake, but also has enough information to perform more correct generations. Our framework benefits from opportunistic decoding with reduced latency here. Note though the word “to” is generated in step t = 4, it only becomes irreversible at step t = 6. ⟨hypothesis, probability⟩, where k denotes the kth step in beam search. At each step, there is an initial beam B0 t = [⟨yt−1, 1⟩]. We denote one-step transition from the previous beam to the next as Bk+1 t = nextb 1(Bk t ) = b top{⟨y′◦v, u·p(v|x⩽g(t), y′)⟩| ⟨y′, u⟩∈Bk t } where topb(·) returns the top-scoring b pairs. Note we do not distinguish the revisable and nonrevisable output in y′ for simplicity. We also define the multi-step advance beam search function with recursive fashion as follows: nextb i(Bk t )=nextb 1(nextb i−1(Bk t )) When the opportunistic decoding window is w at decoding step t, we define the beam search over w + 1 (include the original output) as follows: ⟨y′ t, ut⟩= top1nextb n+w(B0 t )  (4) where nextb n+w(·) performs a beam search with n + w steps, and generate y′ t as the outputs which include both original and opportunistic decoded words. n represents the length of yt 4 Revision-aware AL and Revision Rate We define, for the first time, two metrics for revision-enabled simultaneous translation. 4.1 Revision-aware AL AL is introduced in (Ma et al., 2019a) to measure the average delay for simultaneous translation. Besides the limitations that are mentioned in (Cherry and Foster, 2019), AL is also not sensitive to the modifications to the committed words. Furthermore, in the case of re-translation, AL is incapable to measure the meaningful latency anymore. A A B A D B C A C F A D F E F A D F B E A D F B E target source final
 outputs s = 0 <latexit sha1_base64="D1yS6RzrA1lVOBtvBtnRjRFu1H8=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKez6QC9C0IvHiOYByRJmJ7PJkNnZaZXCEs+wYsHRbz6Rd78 GyfJHjRa0FBUdPdFSRSGHTdL6ewtLyulZcL21sbm3vlHf3miZONeMNFstYtwNquBSKN1Cg5O1EcxoFkreC0c3Ubz1ybUSsHnCcD+iAyVCwSha6d5cub1yxa26M5C/xMtJBXLUe+XPbj9macQVMkmN6Xhugn5GNQom+aTUTQ1PKBvRAe9YqmjEjZ/NTp2QI6v0SRhrWwrJTP05kdHImHEU2M6I4tAselPxP6+TYnjpZ0IlKXLF5ovCVBKMyfRv0heaM5R jSyjTwt5K2JBqytCmU7IheIsv/yXNk6p3Wj2/O6vUrvM4inAh3AMHlxADW6hDg1gMIAneIFXRzrPzpvzPm8tOPnMPvyC8/EN0RmNfw=</latexit> s = 1 <latexit sha1_base64="tv1gl2xEnOTihOnrQ3xwG0IywLM=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKez6QC9C0IvHiOYByRJmJ7PJkNnZaZXCEs+wYsHRbz6Rd78 GyfJHjRa0FBUdPdFSRSGHTdL6ewtLyulZcL21sbm3vlHf3miZONeMNFstYtwNquBSKN1Cg5O1EcxoFkreC0c3Ubz1ybUSsHnCcD+iAyVCwSha6d5ceb1yxa26M5C/xMtJBXLUe+XPbj9macQVMkmN6Xhugn5GNQom+aTUTQ1PKBvRAe9YqmjEjZ/NTp2QI6v0SRhrWwrJTP05kdHImHEU2M6I4tAselPxP6+TYnjpZ0IlKXLF5ovCVBKMyfRv0heaM5R jSyjTwt5K2JBqytCmU7IheIsv/yXNk6p3Wj2/O6vUrvM4inAh3AMHlxADW6hDg1gMIAneIFXRzrPzpvzPm8tOPnMPvyC8/EN0p2NgA=</latexit> s = 2 <latexit sha1_base64="OyODLHvyAVgcgVeW1v0anu3NT0=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKexGRS9C0IvHiOYByRJmJ51kyOzsMjMrhCWf4MWDIl79Im/+ jZNkD5pY0FBUdPdFcSCa+O6305uZXVtfSO/Wdja3tndK+4fNHSUKIZ1FolItQKqUXCJdcONwFaskIaBwGYwup36zSdUmkfy0Yxj9EM6kLzPGTVWetDXlW6x5JbdGcgy8TJSgy1bvGr04tYEqI0TFCt254bGz+lynAmcFLoJBpjykZ0gG1LJQ1R+ns1Ak5sUqP9CNlSxoyU39PpDTUehwGtjOkZqgXvan4n9dOTP/KT7mME4OSzRf1E0FMRKZ/kx5XyIw YW0KZ4vZWwoZUWZsOgUbgrf48jJpVMreWfni/rxUvcniyMRHMpeHAJVbiDGtSBwQCe4RXeHOG8O/Ox7w152Qzh/AHzucP1CGNgQ=</latexit> s = 3 <latexit sha1_base64="2cq5hRI5oOfi0+KTbgxpw8ViPNI=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKewaRS9C0IvHiOYByRJmJ51kyOzsMjMrhCWf4MWDIl79Im/+ jZNkD5pY0FBUdPdFcSCa+O6305uZXVtfSO/Wdja3tndK+4fNHSUKIZ1FolItQKqUXCJdcONwFaskIaBwGYwup36zSdUmkfy0Yxj9EM6kLzPGTVWetDXlW6x5JbdGcgy8TJSgy1bvGr04tYEqI0TFCt254bGz+lynAmcFLoJBpjykZ0gG1LJQ1R+ns1Ak5sUqP9CNlSxoyU39PpDTUehwGtjOkZqgXvan4n9dOTP/KT7mME4OSzRf1E0FMRKZ/kx5XyIw YW0KZ4vZWwoZUWZsOgUbgrf48jJpnJW9Svni/rxUvcniyMRHMpeHAJVbiDGtSBwQCe4RXeHOG8O/Ox7w152Qzh/AHzucP1aWNg=</latexit> s = 4 <latexit sha1_base64="XPATyeSDT1r3YAs7G20W9viZ8VM=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKexqRC9C0IvHiOYByRJmJ5NkyOzsMtMrhCWf4MWDIl79Im/+ jZNkD5pY0FBUdPdFcRSGHTdbye3srq2vpHfLGxt7+zuFfcPGiZKNON1FslItwJquBSK1Gg5K1YcxoGkjeD0e3Ubz5xbUSkHnEcz+kAyX6glG0oO5rnSLJbfszkCWiZeREmSodYtfnV7EkpArZJIa0/bcGP2UahRM8kmhkxgeUzaiA962VNGQGz+dnTohJ1bpkX6kbSkM/X3REpDY8ZhYDtDikOz6E3F/7x2gv0rPxUqTpArNl/UTyTBiEz/Jj2hOUM 5toQyLeythA2pgxtOgUbgrf48jJpnJW98/LFfaVUvcniyMRHMpeHAJVbiDGtSBwQCe4RXeHOm8O/Ox7w152Qzh/AHzucP1ymNgw=</latexit> s = 5 <latexit sha1_base64="nujZy26rKbMtSsGONODK1q+R0=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKeyqQS9C0IvHiOYByRJmJ51kyOzsMjMrhCWf4MWDIl79Im/+ jZNkD5pY0FBUdPdFcSCa+O6305uZXVtfSO/Wdja3tndK+4fNHSUKIZ1FolItQKqUXCJdcONwFaskIaBwGYwup36zSdUmkfy0Yxj9EM6kLzPGTVWetDXlW6x5JbdGcgy8TJSgy1bvGr04tYEqI0TFCt254bGz+lynAmcFLoJBpjykZ0gG1LJQ1R+ns1Ak5sUqP9CNlSxoyU39PpDTUehwGtjOkZqgXvan4n9dOTP/KT7mME4OSzRf1E0FMRKZ/kx5XyIw YW0KZ4vZWwoZUWZsOgUbgrf48jJpnJW983Ll/qJUvcniyMRHMpeHAJVbiDGtSBwQCe4RXeHOG8O/Ox7w152Qzh/AHzucP2K2NhA=</latexit> s = 6 <latexit sha1_base64="cL0DGlOQBArHBgYuFOl5u4ovjs=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKez6vghBLx4jmgckS5idJIhs7PLzKwQlnyCFw+KePWLvPk3 TpI9aGJBQ1HVTXdXEAujet+O7ml5ZXVtfx6YWNza3unuLtX1GiGNZYJCLVDKhGwSXWDcCm7FCGgYCG8HwduI3nlBpHslHM4rRD2lf8h5n1FjpQV9fdIolt+xOQRaJl5ESZKh2il/tbsSEKVhgmrd8tzY+ClVhjOB40I70RhTNqR9bFkqaYjaT6enjsmRVbqkFylb0pCp+nsipaHWozCwnSE1Az3vTcT/vFZield+ymWcGJRstqiXCGIiMvmbdLlCZsT IEsoUt7cSNqCKMmPTKdgQvPmXF0n9pOydls/vz0qVmyOPBzAIRyDB5dQgTuoQg0Y9OEZXuHNEc6L8+58zFpzTjazD3/gfP4A2jGNhQ=</latexit> t = 1 <latexit sha1_base64="ER7jzwoVgjsUgXLZQcA8BLCHM0=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKez6QC9C0IvHiOYByRJmJ7PJkNnZaZXCEs+wYsHRbz6Rd7 8GyfJHjRa0FBUdPdFSRSGHTdL6ewtLyulZcL21sbm3vlHf3miZONeMNFstYtwNquBSKN1Cg5O1EcxoFkreC0c3Ubz1ybUSsHnCcD+iAyVCwSha6R6vF654lbdGchf4uWkAjnqvfJntx+zNOIKmaTGdDw3QT+jGgWTfFLqpoYnlI3ogHcsVTixs9mp07IkVX6JIy1LYVkpv6cyGhkzDgKbGdEcWgWvan4n9dJMbz0M6GSFLli80VhKgnGZPo36QvNG cqxJZRpYW8lbEg1ZWjTKdkQvMWX/5LmSdU7rZ7fnVq13kcRTiAQzgGDy6gBrdQhwYwGMATvMCrI51n5815n7cWnHxmH37B+fgG1CONgQ=</latexit> t = 2 <latexit sha1_base64="JTAGu8mLCMqvYJ9WV/XCaGanJg=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKexGRS9C0IvHiOYByRJmJ5NkyOzsMtMrhCWf4MWDIl79Im/ +jZNkD5pY0FBUdPdFcRSGHTdbye3srq2vpHfLGxt7+zuFfcPGiZKNON1FslItwJquBSK1Gg5K1YcxoGkjeD0e3Ubz5xbUSkHnEcz+kAyX6glG0gNeV7rFklt2ZyDLxMtICTLUusWvTi9iScgVMkmNaXtujH5KNQom+aTQSQyPKRvRAW9bqmjIjZ/OTp2QE6v0SD/SthSmfp7IqWhMeMwsJ0hxaFZ9Kbif147wf6VnwoVJ8gVmy/qJ5JgRKZ/k57Qn KEcW0KZFvZWwoZU4Y2nYINwVt8eZk0KmXvrHxf16q3mRx5OEIjuEUPLiEKtxBDerAYADP8ApvjnRenHfnY96ac7KZQ/gD5/MH1aeNg=</latexit> t = 3 <latexit sha1_base64="ztmF+NWK7YHJFlSK1UJItkJVKc=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKewaRS9C0IvHiOYByRJmJ5NkyOzsMtMrhCWf4MWDIl79Im/ +jZNkD5pY0FBUdPdFcRSGHTdbye3srq2vpHfLGxt7+zuFfcPGiZKNON1FslItwJquBSK1Gg5K1YcxoGkjeD0e3Ubz5xbUSkHnEcz+kAyX6glG0gNeV7rFklt2ZyDLxMtICTLUusWvTi9iScgVMkmNaXtujH5KNQom+aTQSQyPKRvRAW9bqmjIjZ/OTp2QE6v0SD/SthSmfp7IqWhMeMwsJ0hxaFZ9Kbif147wf6VnwoVJ8gVmy/qJ5JgRKZ/k57Qn KEcW0KZFvZWwoZU4Y2nYINwVt8eZk0zspepXxf16q3mRx5OEIjuEUPLiEKtxBDerAYADP8ApvjnRenHfnY96ac7KZQ/gD5/MH1yuNgw=</latexit> t = 4 <latexit sha1_base64="5A3jCSJYnP/zs91NH2ZC1xVM40=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKexqRC9C0IvHiOYByRJmJ5NkyOzsMtMrhCWf4MWDIl79Im/ +jZNkD5pY0FBUdPdFcRSGHTdbye3srq2vpHfLGxt7+zuFfcPGiZKNON1FslItwJquBSK1Gg5K1YcxoGkjeD0e3Ubz5xbUSkHnEcz+kAyX6glG0gNeV7rFklt2ZyDLxMtICTLUusWvTi9iScgVMkmNaXtujH5KNQom+aTQSQyPKRvRAW9bqmjIjZ/OTp2QE6v0SD/SthSmfp7IqWhMeMwsJ0hxaFZ9Kbif147wf6VnwoVJ8gVmy/qJ5JgRKZ/k57Qn KEcW0KZFvZWwoZU4Y2nYINwVt8eZk0zsrefnivlKq3mRx5OEIjuEUPLiEKtxBDerAYADP8ApvjnRenHfnY96ac7KZQ/gD5/MH2K+NhA=</latexit> t = 5 <latexit sha1_base64="VBnB3NxfkozrscJfPwxvIb0q3iU=">AB6nicbVDLSgNBEOyNrxhfUY9eBoPgKeyqQS9C0IvHiOYByRJmJ5NkyOzsMtMrhCWf4MWDIl79Im/ +jZNkD5pY0FBUdPdFcRSGHTdbye3srq2vpHfLGxt7+zuFfcPGiZKNON1FslItwJquBSK1Gg5K1YcxoGkjeD0e3Ubz5xbUSkHnEcz+kAyX6glG0gNeV7rFklt2ZyDLxMtICTLUusWvTi9iScgVMkmNaXtujH5KNQom+aTQSQyPKRvRAW9bqmjIjZ/OTp2QE6v0SD/SthSmfp7IqWhMeMwsJ0hxaFZ9Kbif147wf6VnwoVJ8gVmy/qJ5JgRKZ/k57Qn KEcW0KZFvZWwoZU4Y2nYINwVt8eZk0zsreblyf1Gq3mRx5OEIjuEUPLiEKtxBDerAYADP8ApvjnRenHfnY96ac7KZQ/gD5/MH2jONhQ=</latexit> Figure 3: The red arrows represent the changes between two different commits, and the last changes for each output word is highlighted with yellow. We hereby propose a new latency, Revisionaware AL (RAL), which can be applied to any kind of translation scenarios, i.e., full-sentence translation, use re-translation as simultaneous translation, fixed and adaptive policy simultaneous translation. Note that for latency and revision rate calculation, we count the target side difference respect to the growth of source side. As it is shown in Fig. 3, there might be multiple changes for each output words during the translation, and we only start to calculate the latency for this word once it agrees with the final results. Therefore, it is necessary to locate the last change for each word. For a given source side time s, we denote the tth outputs on target side as f(x⩽s)t. Then we are able to find the Last Revision (LR) for the tth word on target side as follows: LR(t) = argmax s<|x| f(x⩽(s−1))t ̸= f(x⩽s)t  From the audience point of view, once the former words are changed, the audience also needs to take the efforts to read the following as well. Then we also penalize the later words even there are no changes, which is shown with blue arrow in Fig. 3. We then re-formulate the LR(t) as follows (assume LR(0) = 0): 440 5 10 15 Revision-aware Average Lagging (zh en) 25.0 27.5 30.0 32.5 35.0 37.5 40.0 42.5 45.0 4-ref BLEU w=5, b>1 w=3, b>1 w=1, b>1 w=5, b=1 w=3, b=1 w=1, b=1 w=0, b=1 29.6 0 5 10 15 Revision-aware Average Lagging (en zh) 10 12 14 16 18 20 22 24 1-ref BLEU w=5, b>1 w=3, b>1 w=1, b>1 w=5, b=1 w=3, b=1 w=1, b=1 w=0, b=1 38.3 Figure 4: BLEU against RAL using wait-k polocies. ▲▲▲: wait-1 policies, : wait-3 policies, : wait-5 policies, : wait-7 policies, ▼▼▼: wait-9 policies,⋆(⋆): re-translation with pre-trained NMT model with greedy (beam search) decoding, ⋆(⋆): full-sentence translation with pre-trained NMT model with greedy (beam search) decoding. The baseline for wait-k policies is decoding with w = 0, b = 1. 1 2 3 4 5 Window Size (zh en) 0 2 4 6 8 Revision Rate b=1 b>1 39.5 40.0 1 2 3 4 5 Window Size (en zh) 0 2 4 6 8 Revision Rate b=1 b>1 25 30 Figure 5: Revision rate against window size with different wait-k policies. ⋆(⋆): re-translation with pre-trained NMT model with greedy (beam search) decoding. LR(t) = max{LR(t −1), LR(t)} (5) The above definition can be visualized as the thick black line in Fig. 3. Similar with original AL, our proposed RAL is defined as follows: RAL(x, y) = 1 τ(|x|) τ(|x|) X t=1 LR(t) −t −1 r (6) where τ(|x|) denotes the cut-off step, and r = |y|/|x| is the target-to-source length ratio. 4.2 Revision Rate Since each modification on the target side would cost extra effort for the audience to read, we penalize all the revisions during the translation. We define the revision rate as follows:  |x|−1 X s=1 dist  f(x⩽s), f(x⩽s+1) . |x| X s=1 |f(x⩽s)|  where dist can be arbitrary distance measurement between two sequences. For simplicity, we design a modified Hamming Distance to measure the difference: dist(a, b) = hamming a, b≤|a|◦⟨pad⟩max(|a|−|b|,0) where ⟨pad⟩is a padding symbol in case b is shorter than a. 5 Experiments Datasets and Implementation We evaluate our work on Chinese-to-English and English-toChinese simultaneous translation tasks. We use the NIST corpus (2M sentence pairs) as the training data. We first apply BPE (Sennrich et al., 2015) on all texts to reduce the vocabulary sizes. For evaluation, we use NIST 2006 and NIST 2008 as our dev and test sets with 4 English references. We re-implement wait-k model (Ma et al., 2019a) and adaptive policy (Zheng et al., 2019a). We use Transformer (Vaswani et al., 2017) based waitk model and pre-trained full-sentence model for learning adaptive policy. 441 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 Revision-aware Average Lagging (zh en) 29 30 31 32 33 4-ref BLEU w=0, b=1 w=0, b>1 w=1, b=1 w=1, b>1 w=3, b=1 w=3, b>1 w=5, b=1 w=5, b>1 5 6 7 8 9 Revision-aware Average Lagging (en zh) 16 17 18 19 20 21 1-ref BLEU w=0, b=1 w=0, b>1 w=1, b=1 w=1, b>1 w=3, b=1 w=3, b>1 w=5, b=1 w=5, b>1 Figure 6: BLEU against RAL using adaptive policies. Baseline is decoded with w = 0, b = 1 and w = 0, b > 1. Performance on Wait-k Policy We perform experiments using opportunistic decoding on waitk policies with k ∈{1, 3, 5, 7, 9}, opportunistic window w ∈{1, 3, 5} and beam size b ∈ {1, 3, 5, 7, 10, 15}. We select the best beam size for each policy and window pair on dev-set. We compare our proposed method with a baseline called re-translation which uses a fullsentence NMT model to re-decode the whole target sentence once a new source word is observed. The final output sentences of this method are identical to the full sentence translation output with the same model but the latency is reduced. Fig. 4 (left) shows the Chinese-to-English results of our proposed algorithm. Since our greedy opportunistic decoding doesn’t change the final output, there is no difference in BLEU compared with normal decoding, but the latency is reduced. However, by applying beam search, we can achieve 3.1 BLEU improvement and 2.4 latency reduction on wait-7 policy. Fig. 4 (right) shows the English-to-Chinese results. Compare to the Chinese-to-English translation results in previous section, there is comparatively less latency reduction by using beam search because the output translations are slightly longer which hurts the latency. As shown in Fig. 5(right), the revision rate is still controlled under 8%. Fig. 5 shows the revision rate with different window size on wait-k policies. In general, with opportunity window w ≤5, the revision rate of our proposed approach is under 8%, which is much lower than re-translation. Performance on Adaptive Policy Fig. 6 shows the performance of the proposed algorithm on adaptive policies. We use threshold ρ ∈ {0.55, 0.53, 0.5, 0.47, 0.45}. We vary beam size b ∈{1, 3, 5, 7, 10} and select the best one on devset. Comparing with conventional beam search on consecutive writes, our decoding algorithm achieves even much higher BLEU and less latency. 5.1 Revision Rate vs. Window Size 1 3 5 7 10 15 Beam Size (zh en) 0.5 1.0 1.5 2.0 2.5 3.0 3.5 Revision Rate wait-1 wait-3 wait-5 wait-7 wait-9 Figure 7: Revision rate against beam size with window size of 3 and different wait-k policies. We further investigate the revision rate with different beam sizes on wait-k policies. Fig. 7 shows that the revision rate is higher with lower wait-k policies. This makes sense because the low k policies are always more aggressive and easy to make mistakes. Moreover, we can find that the revision rate is not very sensitive to beam size. 6 Conclusions We have proposed an opportunistic decoding timely correction technique which improves the latency and quality for simultaneous translation. We also defined two metrics for revision-enabled simultaneous translation for the first time. Acknowledgments L. H. was supported in part by NSF IIS-1817231. 442 References Ashkan Alinejad, Maryam Siahbani, and Anoop Sarkar. 2018. Prediction improves simultaneous neural machine translation. In EMNLP. Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simultaneous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Association for Computational Linguistics. Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, and George Foster. 2020. Re-translation versus streaming for simultaneous translation. arXiv preprint arXiv:2004.03643. Srinivas Bangalore, Vivek Kumar Rangarajan Sridhar, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-tospeech translation of dialogs. In Proc. of NAACLHLT. Colin Cherry and George Foster. 2019. Thinking slow about latency evaluation for simultaneous machine translation. arXiv preprint arXiv:1906.00048. Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum´e III. 2014. Don’t until the final verb wait: Reinforcement learning for simultaneous machine translation. In EMNLP, pages 1342– 1352. Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O. K. Li. 2017. Learning to translate in realtime with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 1053–1062. Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019a. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025–3036, Florence, Italy. Association for Computational Linguistics. Mingbo Ma, Baigong Zheng, Kaibo Liu, Renjie Zheng, Hairong Liu, Kainan Peng, Kenneth Church, and Liang Huang. 2019b. Incremental text-to-speech synthesis with prefix-to-prefix framework. arXiv preprint arXiv:1911.02750. Mingbo Ma, Renjie Zheng, and Liang Huang. 2019c. Learning to stop in structured prediction for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1884–1889. Xutai Ma, Juan Pino, James Cross, Liezl Puzon, and Jiatao Gu. 2019d. Monotonic multihead attention. arXiv preprint arXiv:1909.12406. Kenton Murray and David Chiang. 2018. Correcting length bias in neural machine translation. In Proceedings of WMT 2018. Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Optimizing segmentation strategies for simultaneous speech translation. In ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30. Mahsa Yarmohammadi, Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, and Baskaran Sankaran. 2013. Incremental segmentation and decoding strategies for simultaneous translation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, and Liang Huang. 2020. Simultaneous translation policies: from fixed to adaptive. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019a. Simpler and faster learning of adaptive policies for simultaneous translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1349–1354. Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019b. Simultaneous translation with flexible policy via restricted imitation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5816– 5822. Renjie Zheng, Mingbo Ma, Baigong Zheng, and Liang Huang. 2019c. Speculative beam search for simultaneous translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 1395–1402.
2020
42
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4609–4622 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4609 Information-Theoretic Probing for Linguistic Structure Tiago PimentelD Josef ValvodaD Rowan Hall MaudslayD Ran ZmigrodD Adina Williams@ Ryan CotterellD,Q DUniversity of Cambridge @Facebook AI Research QETH Zürich [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract The success of neural networks on a diverse set of NLP tasks has led researchers to question how much these networks actually “know” about natural language. Probes are a natural way of assessing this. When probing, a researcher chooses a linguistic task and trains a supervised model to predict annotations in that linguistic task from the network’s learned representations. If the probe does well, the researcher may conclude that the representations encode knowledge related to the task. A commonly held belief is that using simpler models as probes is better; the logic is that simpler models will identify linguistic structure, but not learn the task itself. We propose an information-theoretic operationalization of probing as estimating mutual information that contradicts this received wisdom: one should always select the highest performing probe one can, even if it is more complex, since it will result in a tighter estimate, and thus reveal more of the linguistic information inherent in the representation. The experimental portion of our paper focuses on empirically estimating the mutual information between a linguistic property and BERT, comparing these estimates to several baselines. We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research—plus English— totalling eleven languages. Our implementation is available in https://github.com/ rycolab/info-theoretic-probing. 1 Introduction Neural networks are the backbone of modern stateof-the-art natural language processing (NLP) systems. One inherent by-product of training a neural network is the production of real-valued representations. Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks’ impressive performance on many NLP tasks (Belinkov et al., 2017). As a result of this speculation, one common thread of research focuses on the construction of probes, i.e., supervised models that are trained to extract the linguistic properties directly (Belinkov et al., 2017; Conneau et al., 2018; Peters et al., 2018b; Zhang and Bowman, 2018; Naik et al., 2018; Tenney et al., 2019). A syntactic probe, then, is a model for extracting syntactic properties, such as part of speech, from the representations (Hewitt and Liang, 2019). In this work, we question what the goal of probing for linguistic properties ought to be. Informally, probing is often described as an attempt to discern how much information representations encode about a specific linguistic property. We make this statement more formal: We assert that the natural operationalization of probing is estimating the mutual information (Cover and Thomas, 2012) between a representation-valued random variable and a linguistic property–valued random variable. This operationalization gives probing a clean, information-theoretic foundation, and allows us to consider what “probing” actually means. Our analysis also provides insight into how to choose a probe family: We show that choosing the highest-performing probe, independent of its complexity, is optimal for achieving the best estimate of mutual information (MI). This contradicts the received wisdom that one should always select simple probes over more complex ones (Alain and Bengio, 2017; Liu et al., 2019; Hewitt and Manning, 2019). In this context, we also discuss the recent work of Hewitt and Liang (2019) who proposes selectivity as a criterion for choosing families of probes. Hewitt and Liang (2019) defines selectivity as the performance difference between a probe on the target task and a control task, writing “[t]he selectivity of a probe puts linguistic task accuracy in context with the probe’s capacity to memorize from word types.” They further ponder: “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe 4610 just learned the task?” Information-theoretically, there is no difference between learning the task and probing for linguistic structure, as we will show; thus, it follows that one should always employ the best possible probe for the task without resorting to artificial constraints. In the experimental portion of the paper, we empirically analyze word-level part-of-speech labeling, a common syntactic probing task (Hewitt and Liang, 2019; Sahin et al., 2019), within our MI operationalization. Working on a typologically diverse set of languages (Basque, Czech, English, Finnish, Indonesian, Korean, Marathi, Tamil, Telugu, Turkish and Urdu), we show that only in five of these eleven languages do we recover higher estimates of mutual information between part-ofspeech tags and BERT (Devlin et al., 2019), a common contextualized embedder, than from a control. These modest improvements suggest that most of the information needed to tag part-of-speech well is encoded at the lexical level, and does not require sentential context. Put more simply, words are not very ambiguous with respect to part of speech, a result known to practitioners of NLP (Garrette et al., 2013). We interpret this to mean that part-ofspeech labeling is not a very informative probing task. We further investigate how BERT fares in dependency labeling, as analysed by Tenney et al. (2019). In this task, estimates based on BERT return more information than a type-level embedding in all analysed languages. However, our MI estimates still only show that BERT contains at most 12% more information than the control. We also remark that operationalizing probing information-theoretically gives us a simple, but stunning result: contextual word embeddings, e.g., BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018a), contain the same amount of information about the linguistic property of interest as the original sentence. This follows from the data-processing inequality under a very mild assumption. What this suggests is that, in a certain sense, probing for linguistic properties in representations may not be a well grounded enterprise at all. It also highlights the need to more formally define ease of extraction. 2 Word-Level Syntactic Probes for Contextual Embeddings Following Hewitt and Liang (2019), we consider probes that examine syntactic knowledge in contextualized embeddings. These probes only consider a token’s embedding in isolation, and try to perform the task using only that information. Specifically, in this work, we consider part-of-speech (POS) and dependency labeling: determining a word’s part of speech in a given sentence and the dependency relation for a pair of tokens joined by a dependency arc. Say we wish to determine whether the word love is a NOUN or a VERB. This task requires the sentential context for success. As an example, consider the utterance “love is blind” where, only with the context, is it clear that love is a NOUN. Thus, to do well on this task, the contextualized embeddings need to encode enough about the surrounding context to correctly guess the POS. Analogously, we need the whole sentence to know that love is the NOMINAL SUBJECT. Whereas in the sentence “greed can blind love”, love is the DIRECT OBJECT. 2.1 Notation Let S be a random variable ranging over all possible sequences of words. For the sake of this paper, we assume the vocabulary V is finite and, thus, the values S can take are in V∗. We write s ∈S as s = s1 · · · s|s| for a specific sentence, where each si ∈V is a specific token in the sentence at the position i ∈Z+. We also define the random variable W that ranges over the vocabulary V. We define both a sentence-level random variable S and a word type-level random variable W since each will be useful in different contexts during our exposition. Next, let T be a random variable whose possible values are the analyses t that we want to consider for token si in its sentential context, s = s1 · · · si · · · s|s|. In the discussion, we focus on predicting the part-of-speech tag of the ith word si, but the same results apply to the dependency label of an edge between two words. We denote the set of values T can take as the set T . Finally, let R be a representation-valued random variable for a token si derived from the entire sentence s. We write r ∈Rd for a value of R. While any given value r is a continuous vector, there are only a countable number of values R can take.1 To see this, note there are only a countable number of sentences in V∗. Next, we assume there exists a true distribution p(t, s, i) over analyses t (elements of T ), sentences s (elements of V∗), and positions i (elements of Z+). Note that the conditional distribution p(t | s, i) gives us the true distribution over analyses t 1In this work, we ignore the fact that the floating points have precision constraints in practice. 4611 for the ith word token in the sentence s. We will augment this distribution such that p is additionally a distribution over r, i.e., p(r, t, s, i) = δ(r | s, i) p(t, s, i) (1) where we define the augmentation as: δ(r | s, i) = 1{r = BERT(s)i} (2) Since contextual embeddings are a deterministic function of a sentence s, the augmented distribution in eq. (1) has no more randomness than the original—its entropy is the same. We assume the values of the random variables defined above are distributed according to this (unknown) p. While we do not have access to p, we assume the data in our corpus were drawn according to it. Note that W—the random variable over possible word types—is distributed according to p(w) = X s∈V∗ |s| X i=1 δ(w | s, i) p(s, i) (3) where we define the deterministic distribution δ(w | s, i) = 1{si = w} (4) 2.2 Probing as Mutual Information The task of supervised probing is an attempt to ascertain how much information a specific representation r tells us about the value of t. This is naturally operationalized as the mutual information, a quantity from information theory: I(T; R) = H(T) −H(T | R) (5) where we define the entropy, which is constant with respect to the representations, as H(T) = − X t∈T p(t) log p(t) (6) and we define the conditional entropy as H(T | R) = Z p(r) H (T | R = r) dr (7) = X s∈V∗ |s| X i=1 p(s, i) H (T | R = BERT(s)i) where the point-wise conditional entropy inside the sum is defined as H(T | R = r) = − X t∈T p(t | r) log p(t | r) (8) Again, we will not know any of the distributions required to compute these quantities; the distributions in the formulae are marginals and conditionals of the true distribution discussed in eq. (1). 2.3 Bounding Mutual Information The desired conditional entropy, H(T | R) is not readily available, but with a model qθ(t | r) in hand, we can upper-bound it by measuring their empirical cross entropy: H(T | R) := − E (t,r)∼p(·,·) [log p(t | r)] (9) = − E (t,r)∼p(·,·)  log p(t | r)qθ(t | r) qθ(t | r)  = − E (t,r)∼p(·,·)  log qθ(t | r) + log p(t | r) qθ(t | r)  = Hqθ(T | R) | {z } estimate − E r∼p(·)KL(p(· | r) || qθ(· | r)) | {z } expected estimation error where Hqθ(T | R) is the cross-entropy we obtain by using qθ to get this estimate. Since the KL divergence is always positive, we may lower-bound the desired mutual information I(T; R) := H(T) −H(T | R) ≥H(T) −Hqθ(T | R) (10) This bound gets tighter, the more similar—in the sense of the KL divergence—qθ(· | r) is to the true distribution p(· | r). Bigger Probes are Better. If we accept mutual information as a natural operationalization for how much representations encode a target linguistic task (§2.2), the best estimate of that mutual information is the one where the probe qθ(t | r) is best at the target task. In other words, we want the best probe qθ(t | r) such that we get the tightest bound to the actual distribution p(t | r). This paints the question posed in Hewitt and Liang (2019), who write “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” as a false dichotomy.2 From an informationtheoretic view, we will always prefer the probe that does better at the target task, since there is no difference between learning a task and the representations encoding the linguistic structure. 2Assuming that the authors intended ‘or’ here as strictly non-inclusive. See Levinson (2000, 91) and Chevallier et al. (2008, 1743) on conversational implicatures from ‘or’. 4612 3 Control Functions To place the performance of a probe in perspective, Hewitt and Liang (2019) develops the notion of a control task. Inspired by this, we develop an analogue we term control functions, which are functions of the representation-valued random variable R. Similar to Hewitt and Liang (2019)’s control tasks, the goal of a control function c(·) is to place the mutual information I(T; R) in the context of a baseline that the control function encodes. Control functions have their root in the data-processing inequality (Cover and Thomas, 2012), which states that, for any function c(·), we have I(T; R) ≥I(T; c(R)) (11) In other words, information can only be lost by processing data. A common adage associated with this inequality is “garbage in, garbage out.” 3.1 Type-Level Control Functions We focus on type-level control functions in this paper. These functions have the effect of decontextualizing the embeddings, being related to the common trend of analyzing probe results in comparison to input layer embeddings (Belinkov and Glass, 2017; Liu et al., 2019; Hewitt and Manning, 2019; Tenney et al., 2019). Such functions allow us to inquire how much the contextual aspect of the contextual embeddings help the probe perform the target task. To show that we may map from contextual embeddings to the identity of the word type, we need the following assumption. Assumption 1. Every contextualized embedding is unique, i.e., for any pair of sentences s, s′ ∈V∗, we have (s ̸= s′) || (i ̸= j) ⇒BERT(s)i ̸= BERT(s′)j for all i ∈ {1, . . . |s|} and j ∈ {1, . . . , |s′|}. We note that Assumption 1 is mild. Contextualized word embeddings map words (in their context) to Rd, which is an uncountably infinite space. However, there are only a countable number of sentences, which implies only a countable number of sequences of real vectors in Rd that a contextualized embedder may produce. The event that any two embeddings would be the same across two distinct sentences is infinitesimally small.3 Assumption 1 yields the following corollary. 3Indeed, even if we sampled every embedding randomly from a d-dimensional Gaussian, the probability that we would ever sample the same real vector is zero. Corollary 1. There exists a function id : Rd →V that maps a contextualized embedding to its word type. The function id is not a bijection since multiple embeddings will map to the same type. Using Corollary 1, we can show that any noncontextualized word embedding will contain no more information than a contextualized word embedding. More formally, we do this by constructing a look-up function e : V →Rd that maps a word to a word embedding. This embedding may be onehot, randomly generated ahead of time, or the output of a data-driven embedding method, e.g. fastText (Bojanowski et al., 2017). We can then construct a control function as the composition of the look-up function e and the id function id. Using the data-processing inequality, we can prove that in a word-level prediction task, any non-contextual (type level) word-embedding will contain no more information than a contextualized (token level) one, such as BERT and ELMo. Specifically, we have I(T; R) ≥ (12) I(T; id(R)) = I(T; W) ≥I(T; e(W)) This result4 is intuitive and, perhaps, trivial— context matters information-theoretically. However, it gives us a principled foundation by which to measure the effectiveness of probes as we will show in §3.2. 3.2 How Much Information Did We Gain? We will now quantify how much a contextualized word embedding knows about a task with respect to a specific control function c(·). We term how much more information the contextualized embeddings have about a task than a control variable the gain, G, which we define as G(T, R, c) = I(T; R) −I(T; c(R)) (13) = H(T | c(R)) −H(T | R) ≥0 The gain function will be our method for measuring how much more information contextualized representations have over a controlled baseline, encoded as the function c. We will empirically estimate this value in §6. Interestingly enough, the gain has a straightforward interpretation. Proposition 1. The gain function is equal to the following conditional mutual information I(T; R | c(R)) = G(T, R, c) (14) 4Note that although this result holds in theory, in practice the functions id and e(·) might be arbitrarily hard to estimate. This is discussed in length in §4.3. 4613 Proof. I(T; R | c(R)) := I(T; R) −I(T; R; c(R)) = I(T; R) −I(T; c(R)) = G(T, R, c) The jump from the first to the second equality follows since R encodes, by construction, all the information about T provided by c(R). Proposition 1 gives us a clear understanding of the quantity we wish to estimate: It is how much information about a task is encoded in the representations, given some control knowledge. If properly designed, this control transformation will remove information from the probed representations. 3.3 Approximating the Gain The gain, as defined in eq. (13), is intractable to compute. In this section we derive a pair of variational bounds on G(T, R, e)—one upper and one lower. To approximate the gain, we will simultaneously minimize an upper and maximize a lowerbound on eq. (13). We begin by approximating the gain in the following manner G(T, R, e) ≈ (15) Hqθ2(T | c(R)) −Hqθ1(T | R) | {z } estimated Gqθ (T,R,e) these cross-entropies can be empirically estimated. We will assume access to a corpus {(ti, ri)}N i=1 that is human-annotated for the target linguistic property; we further assume that these are samples (ti, ri) ∼p(·, ·) from the true distribution. This yields a second approximation that is tractable: Hqθ(T; R) ≈−1 N N X i=1 log qθ(ti | ri) (16) This approximation is exact in the limit N →∞ by the law of large numbers. We note the approximation given in eq. (15) may be either positive or negative and its estimation error follows from eq. (9): ∆= E r∼p(·)KL(p(· | r) || qθ1(· | r)) (17) − E r∼p(·)KL(p(· | c(r)) || qθ2(· | c(r))) = KLqθ1(T, R) −KLqθ2(T, c(R)) where we abuse the KL notation to simplify the equation. This is an undesired behavior since we know the gain itself is non-negative by the data-processing inequality, but we have yet to devise a remedy. We justify the approximation in eq. (15) with a pair of variational bounds. The following two corollaries are a result of Theorem 2 in App. A. Corollary 2. We have the following upper-bound on the gain G(T, R, e) (18) ≤Gqθ(T, R, e)+KLqθ1(T, R) Corollary 3. We have the following lower-bound on the gain G(T, R, e) (19) ≥Gqθ(T, R, e) −KLqθ2(T, c(R)) The conjunction of Corollary 2 and Corollary 3 suggest a simple procedure for finding a good approximation: We choose qθ1(· | r) and qθ2(· | r) so as to minimize eq. (18) and maximize eq. (19), respectively. These distributions contain no overlapping parameters, by construction, so these two optimization routines may be performed independently. We will optimize both with a gradient-based procedure, discussed in §6. 4 Understanding Probing Information-Theoretically In §3, we developed an information-theoretic framework for thinking about probing contextual word embeddings for linguistic structure. However, we now cast doubt on whether probing makes sense as a scientific endeavour. We prove in §4.1 that contextualized word embeddings, by construction, contain no more information about a wordlevel syntactic task than the original sentence itself. Nevertheless, we do find a meaningful scientific interpretation of control functions. We expound upon this in §4.2, arguing that control functions are useful, not for understanding representations, but rather for understanding the influence of sentential context on word-level syntactic tasks, e.g., labeling words with their part of speech. 4.1 You Know Nothing, BERT To start, we note the following corollary Corollary 4. It directly follows from Assumption 1 that BERT is a bijection between sentences s and sequences of embeddings ⟨r1, . . . , r|s|⟩. As BERT is a bijection, it has an inverse, which we will denote as BERT−1. 4614 Theorem 1. BERT(S) cannot provide more information about T than the sentence S itself. Proof. I(T; S) ≥I(T; BERT(S)) (20) ≥I(T; BERT−1(BERT(S))) = I(T; S) This implies I(T; S) = I(T; BERT(S)).5 This is not a BERT-specific result—it rests on the fact that the data-processing inequality is tight for bijections. While Theorem 1 is a straightforward application of the data-processing inequality, it has deeper ramifications for probing. It means that if we search for syntax in the contextualized word embeddings of a sentence, we should not expect to find any more syntax than is present in the original sentence. In a sense, Theorem 1 is a cynical statement: under our operationalization, the endeavour of finding syntax in contextualized embeddings sentences is nonsensical. This is because, under Assumption 1, we know the answer a priori—the contextualized word embeddings of a sentence contain exactly the same amount of information about syntax as does the sentence itself. 4.2 What Do Control Functions Mean? Information-theoretically, the interpretation of control functions is also interesting. As previously noted, our interpretation of control functions in this work does not provide information about the representations themselves. Indeed, the same reasoning used in Corollary 1 can be used to devise a function ids(r) which maps a contextual representation of a token back to its sentence. For a typelevel control function c, by the data-processing inequality, we have that I(T; W) ≥I(T; c(R)). Consequently, we can get an upper-bound on how much information we can get out of a decontextualized representation. If we assume we have perfect probes, then we get that the true gain function is I(T; S) −I(T; W) = I(T; S | W). This quantity is interpreted as the amount of knowledge we gain about the word-level task T by knowing S (i.e., the sentence) in addition to W (i.e., the word type). Therefore, a perfect probe provides insights about language and not about the actual representations. 5Actually, Hewitt and Liang likely had an intuition about this in mind when they wrote “[a] sufficiently expressive probe with enough training data could learn any task on top of it” (Hewitt and Liang, 2019). 4.3 Discussion: Ease of Extraction We do acknowledge another interpretation of the work of Hewitt and Liang (2019) inter alia; BERT makes the syntactic information present in an ordered sequence of words more easily extractable. However, ease of extraction is not a trivial notion to operationalize, and indeed, we know of no attempt to do so;6 it is certainly more complex to determine than the number of layers in a multi-layer perceptron (MLP). Indeed, a MLP with a single hidden layer can represent any function over the unit cube, with the caveat that we may need a very large number of hidden units (Cybenko, 1989). Although for perfect probes the above results should hold, in practice id(·) and c(·) may be hard to approximate. Furthermore, if these functions were to be learned, they might require an unreasonably large dataset. Learning a random embedding control function, for example, would require a dataset containing all words in the vocabulary V —in an open vocabulary setting an infinite dataset would be required! “Better” representations should make their respective probes easily learnable—and consequently their encoded information is more accessible (Voita and Titov, 2020). We suggest that future work on probing should focus on operationalizing ease of extraction more rigorously—even though we do not attempt this ourselves. As previously argued by Saphra and Lopez (2019, §5), the advantage of simple probes is that they may reveal something about the structure of the encoded information—i.e., is it structured in such a way that it can be easily taken advantage of by downstream consumers of the contextualized embeddings? Many researchers who are interested in less complex probes have, either implicitly or explicitly, had this in mind. 5 A Critique of Control Tasks We agree with Hewitt and Liang (2019)—and with both Zhang and Bowman (2018) and Tenney et al. (2019)—that we should have controlled baselines when probing for linguistic properties. However, we disagree with parts of their methodology for constructing control tasks. We present these disagreements here. 5.1 Structure and Randomness Hewitt and Liang (2019) introduces control tasks to evaluate the effectiveness of probes. We draw 6Xu et al. (2020) is a possible exception. 4615 inspiration from this technique as evidenced by our introduction of control functions. However, we take issue with the suggestion that controls should have structure and randomness, to use the terminology from Hewitt and Liang (2019). They define structure as “the output for a word token is a deterministic function of the word type.” This means that they are stripping the language of ambiguity with respect to the target task. In the case of partof-speech labeling, love would either be a NOUN or a VERB in a control task, never both: this is a problem. The second feature of control tasks is randomness, i.e., “the output for each word type is sampled independently at random.” In conjunction, structure and randomness may yield a relatively trivial task that does not look like natural language. What is more, there is a closed-form solution for an optimal, retrieval-based “probe” that has zero learned parameters: If a word type appears in the training set, return the label with which it was annotated there, otherwise return the most frequently occurring label across all words in the training set. This probe will achieve an accuracy that is 1 minus the out-of-vocabulary rate (the number of tokens in the test set that correspond to novel types divided by the number of tokens) times the percentage of tags in the test set that do not correspond to the most frequent tag (the error rate of the guess-the-mostfrequent-tag classifier). In short, the best model for a control task is a pure memorizer that guesses the most frequent tag for out-of-vocabulary words. 5.2 What’s Wrong with Memorization? Hewitt and Liang (2019) proposes that probes should be optimized to maximize accuracy and selectivity. Recall selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture. Given their characterization of control tasks, maximising selectivity leads to a selection of a model that is bad at memorization. But why should we punish memorization? Much of linguistic competence is about generalization, however memorization also plays a key role (Fodor et al., 1974; Nooteboom et al., 2002; Fromkin et al., 2018), with word learning (Carey, 1978) being an obvious example. Indeed, maximizing selectivity as a criterion for creating probes seems to artificially disfavor this property. 5.3 What Low-Selectivity Means Hewitt and Liang (2019) acknowledges that for the more complex task of dependency edge prediction, a MLP probe is more accurate and, therefore, preferable despite its low selectivity. However, they offer two counter-examples where the less selective neural probe exhibits drawbacks when compared to its more selective, linear counterpart. We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection. First, Hewitt and Liang (2019, §3.6) point out that, in their experiments, the MLP-1 model frequently mislabels the word with suffix -s as NNPS on the POS labeling task. They present this finding as a possible example of a less selective probe being less faithful in representing what linguistic information has the model learned. Our analysis leads us to believe that, on contrary, this shows that one should be using the best possible probe to minimize the chance of misinterpreting its encoded information. Since more complex probes achieve higher accuracy on the task, as evidence by the findings of Hewitt and Liang (2019), we believe that the overall trend of misinterpretation is higher for the probes with higher selectivity. The same applies for the second example in Hewitt and Liang 2019, §4.2 where a less selective probe appears to be less faithful. The paper shows that the representations on ELMo’s second layer fail to outperform its word type ones (layer zero) on the POS labeling task when using the MLP-1 probe. While the paper argues this is evidence for selectivity being a useful metric in choosing appropriate probes, we argue that this demonstrates, yet again, that one needs to use a more complex probe to minimize the chances of misinterpreting what the model has learned. The fact that the linear probe shows a difference only demonstrates that the information is perhaps more accessible with ELMo, not that it is not present. 6 Experiments Despite our discussion in §4, we still wish to empirically vet our estimation technique for the gain and we use this section to highlight the need to formally define ease of extraction (as argued in §4.3). We consider the tasks of POS and dependency labeling, using the universal POS tag (Petrov et al., 2012) and dependency label information from the Universal Dependencies 2.5 (Zeman et al., 2019). We probe the multilingual release of BERT7 on eleven typologically diverse languages: Basque, Czech, 7We used Wolf et al. (2019)’s implementation. 4616 # Tokens BERT fastText one-hot Language Train Test # POS H(T) H(T | R) H(T | c(R)) G(T, R, c) H(T | c(R)) G(T, R, c) Basque 72,869 24,335 15 3.17 0.36 0.29 -0.06 (-2.0%) 0.80 0.44 (14.0%) Czech 1,173,281 173,906 16 3.33 0.10 0.11 0.02 ( 0.5%) 0.35 0.25 ( 7.6%) English 203,762 24,958 16 3.61 0.23 0.39 0.16 ( 4.4%) 0.64 0.41 (11.4%) Finnish 162,584 21,078 14 3.17 0.25 0.19 -0.06 (-2.0%) 0.80 0.54 (17.1%) Indonesian 97,495 11,779 15 3.24 0.38 0.35 -0.03 (-0.8%) 0.64 0.26 ( 8.0%) Korean 295,899 28,234 16 3.04 0.33 0.60 0.27 ( 8.8%) 1.15 0.82 (27.0%) Marathi 2,997 412 15 3.17 0.76 0.90 0.14 ( 4.4%) 1.49 0.74 (23.2%) Tamil 6,329 1,988 13 3.15 0.58 0.47 -0.11 (-3.5%) 1.57 0.99 (31.4%) Telugu 5,082 721 14 2.73 0.42 0.42 -0.00 (-0.1%) 0.93 0.51 (18.6%) Turkish 37,769 10,023 13 3.03 0.36 0.23 -0.13 (-4.2%) 0.88 0.52 (17.1%) Urdu 108,674 14,806 15 3.23 0.32 0.41 0.09 ( 2.8%) 0.54 0.22 ( 6.9%) Table 1: Amount of information BERT, fastText or one-hot embeddings share with a POS probing task. H(T) is estimated with a plug-in estimator from same treebanks we use to train the POS labelers. English, Finnish, Indonesian, Korean, Marathi, Tamil, Telugu, Turkish and Urdu; and we compute the contextual representations of each sentence by feeding it into BERT and averaging the output word piece representations for each word, as tokenized in the treebank. 6.1 Control Functions We will consider two different control functions. Each is defined as the composition c = e◦id with a different look-up function: • efastText returns a language specific fastText embedding (Bojanowski et al., 2017); • eonehot returns a one-hot embedding.8 These functions can be considered type level, as they remove the influence of context on the word. 6.2 Probe Architecture As expounded upon above, our purpose is to achieve the best bound on mutual information we can. To this end, we employ a deep MLP as our probe. We define the probe as qθ(t | r) = (21) softmax  W (m)σ  W (m−1) · · · σ(W (1) r)  an m-layer neural network with the non-linearity σ(·) = ReLU(·). The initial projection matrix is W (1) ∈Rr1×d and the final projection matrix is W (m) ∈R|T |×rm−1, where ri = r 2i−1 . The remaining matrices are W (i) ∈Rri×ri−1, so we halve the number of hidden states in each layer. We optimize 8We initialize random embeddings at the type level, and let them train during the model’s optimization. We also experiment with fixed random embeddings—results for this control are in the Appendix. over the hyperparameters—number of layers, hidden size, one-hot embedding size, and dropout—by using random search. For each estimate, we train 50 models and choose the one with the best validation cross-entropy. The cross-entropy in the test set is then used as our entropy estimate. For dependency labeling, we follow Tenney et al. (2019) and concatenate the embeddings for both a token and its head—i.e. r = [ri; rhead(i)]—as such, the initial projection matrix is actually W (1) ∈Rr1×2d. 6.3 Results We know BERT can generate text in many languages. Here we assess how much it actually “knows” about syntax in those languages—or at least how much we can extract from it given as powerful probes as we can train. We further evaluate how much it knows above and beyond simple type-level baselines. POS tags Table 1 presents these results, showing how much information BERT, fastText, and one-hot embeddings encode about POS tagging. We see that—in all analysed languages—type level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small gains in all languages—BERT even presents negative gains in some of them. Although this may seem to contradict the information processing inequality, it is actually caused by the difficulty of approximating id and c(·) with a finite training set—causing KLqθ1(T | R) to be larger than KLqθ2(T | c(R)). This highlights the need to formalize ease of extraction, as discussed in §4.3. Dependency labels As shown in Table 2, BERT improves over type-level embeddings in all lan4617 # Tokens BERT fastText one-hot Language Train Test # Classes H(T) H(T | R) H(T | c(R)) G(T, R, c) H(T | c(R)) G(T, R, c) Basque 67,578 22,575 29 4.03 0.62 0.75 0.13 ( 3.1%) 1.39 0.77 (19.0%) Czech 1,104,787 163,770 42 4.24 0.42 0.59 0.17 ( 4.1%) 0.97 0.55 (13.1%) English 192,042 23,019 48 4.48 0.45 1.00 0.55 (12.2%) 1.35 0.89 (19.9%) Finnish 150,362 19,515 44 4.42 0.62 0.72 0.10 ( 2.2%) 1.77 1.15 (26.0%) Indonesian 93,054 11,223 30 4.16 0.77 1.13 0.36 ( 8.6%) 1.52 0.75 (18.0%) Korean 273,436 26,079 30 4.17 0.40 0.76 0.36 ( 8.7%) 1.50 1.10 (26.4%) Marathi 2,624 365 39 4.01 1.39 1.65 0.26 ( 6.5%) 2.26 0.87 (21.6%) Tamil 5,929 1,869 28 3.78 1.17 1.23 0.06 ( 1.6%) 2.44 1.27 (33.7%) Telugu 4,031 575 41 3.64 1.09 1.31 0.23 ( 6.2%) 1.85 0.76 (20.9%) Turkish 34,120 9,046 31 3.95 1.12 1.17 0.05 ( 1.2%) 2.01 0.89 (22.4%) Urdu 104,647 14,271 24 3.83 0.63 0.93 0.30 ( 8.0%) 1.08 0.46 (11.9%) Table 2: Amount of information BERT, fastText or one-hot embeddings share with a dependency arc labeling task. H(T) is again estimated with a plug-in estimator from same treebanks we use to train our models. guages on this task. Nonetheless, although this is a much more context-dependent task, we see BERT-based estimates reveal at most 12% more information than fastText in English, the highest resource language in our set. If we look at the lower-resource languages, in five of them the gains are of less than 5%. Discussion When put into perspective, multilingual BERT’s representations do not seem to encode much more information about syntax than a simple baseline. On POS labeling, BERT only improves upon fastText in five of the eleven analysed languages—and by small amounts (less than 9%) when it does. Even at dependency labelling, a task considered to require more contextual knowledge, we could only decode from BERT at most (in English) 12% additional information— which again highlights the need to formalize ease of extraction. 7 Conclusion We propose an information-theoretic operationalization of probing that defines it as the task of estimating conditional mutual information. We introduce control functions, which put in context our mutual information estimates—how much more informative are contextual representations than some knowledge judged to be trivial? We further explored our operationalization and showed that, given perfect probes, probing can only yield insights into the language itself and cannot tell us anything about the representations under investigation. Keeping this in mind, we suggest a change of focus—instead of concentrating on probe size or information, we should pursue ease of extraction going forward. On a final note, we apply our formalization to evaluate multilingual BERT’s syntactic knowledge on a set of eleven typologically diverse languages. Although it does encode a large amount of information about syntax—more than 76% and 65%, respectively, about POS and dependency labels in all languages9—BERT only encodes at most 12% more information than a simple baseline (a type-level representation). On POS labeling, more specifically, our MI estimates based on BERT are higher than the control in less than half of the analyzed languages. This indicates that word-level POS labeling may not be ideal for contemplating the syntax contained in contextual word embeddings. Acknowledgements The authors would like to thank Adam Poliak and John Hewitt for several helpful suggestions. References Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier probes. In 5th International Conference on Learning Representations. Evan Archer, Il Memming Park, and Jonathan W. Pillow. 2014. Bayesian entropy estimation for countable discrete distributions. The Journal of Machine Learning Research, 15(1):2833–2868. Yonatan Belinkov and James Glass. 2017. Analyzing hidden representations in end-to-end automatic speech recognition systems. In Advances in Neural Information Processing Systems, pages 2441–2451. Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017. Evaluating layers of representation in neural machine translation on part-of-speech and semantic 9This is measured as the relative difference between H(T) and H(T | R). On average, this value is 88% and 80% on POS and dependency labels, respectively. 4618 tagging tasks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10, Taipei, Taiwan. Asian Federation of Natural Language Processing. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146. Susan Carey. 1978. The child as word learner. In Linguistic Theory and Psychological Reality. MIT Press. Coralie Chevallier, Ira A. Noveck, Tatjana Nazir, Lewis Bott, Valentina Lanzetti, and Dan Sperber. 2008. Making disjunctions exclusive. The Quarterly Journal of Experimental Psychology, 61(11):1741–1760. Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics. Thomas M. Cover and Joy A. Thomas. 2012. Elements of Information Theory. John Wiley & Sons. George Cybenko. 1989. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jerry A. Fodor, Thomas G. Bever, and Merrill F. Garrett. 1974. The Psychology of Language: An Introduction to Psycholinguistics and Generative Grammar. McGraw-Hill. Victoria Fromkin, Robert Rodman, and Nina Hyams. 2018. An Introduction to Language. Cengage Learning. Dan Garrette, Jason Mielens, and Jason Baldridge. 2013. Real-world semi-supervised learning of POStaggers for low-resource languages. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 583–592, Sofia, Bulgaria. Association for Computational Linguistics. John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733–2743, Hong Kong, China. Association for Computational Linguistics. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics. Stephen C. Levinson. 2000. Presumptive Meanings: The Theory of Generalized Conversational Implicature. MIT Press. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340–2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Sieb G. Nooteboom, Fred Weerman, and F. N. K. Wijnen. 2002. Storage and Computation in the Language Faculty. Springer Science & Business Media. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Brussels, Belgium. Association for Computational Linguistics. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), pages 2089– 2096, Istanbul, Turkey. European Language Resources Association (ELRA). 4619 Gözde Gül Sahin, Clara Vania, Ilia Kuznetsov, and Iryna Gurevych. 2019. LINSPECTOR: Multilingual probing tasks for word representations. CoRR, abs/1903.09442. Naomi Saphra and Adam Lopez. 2019. Understanding learning dynamics of language models with SVCCA. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3257–3267, Minneapolis, Minnesota. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Elena Voita and Ivan Titov. 2020. Informationtheoretic probing with minimum description length. arXiv preprint arXiv:2003.12298. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771. Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, and Stefano Ermon. 2020. A theory of usable information under computational constraints. In International Conference on Learning Representations. Daniel Zeman, Joakim Nivre, Mitchell Abrams, Noëmi Aepli, Željko Agi´c, Lars Ahrenberg, Gabriel˙e Aleksandraviˇci¯ut˙e, Lene Antonsen, Katya Aplonova, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, Victoria Basmov, Colin Batchelor, John Bauer, Sandra Bellato, Kepa Bengoetxea, Yevgeni Berzak, Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Agn˙e Bielinskien˙e, Rogier Blokland, Victoria Bobicev, Loïc Boizou, Emanuel Borges Völker, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Kristina Brokait˙e, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, Tatiana Cavalcanti, Gül¸sen Cebiro˘glu Eryi˘git, Flavio Massimiliano Cecchini, Giuseppe G. A. Celano, Slavomír ˇCéplö, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Jayeol Chun, Alessandra T. Cignarella, Silvie Cinková, Aurélie Collomb, Ça˘grı Çöltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, MarieCatherine de Marneffe, Valeria de Paiva, Elvis de Souza, Arantza Diaz de Ilarraza, Carly Dickerson, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Tomaž Erjavec, Aline Etienne, Wograine Evelyn, Richárd Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Kazunori Fujita, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Sebastian Garza, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta González Saavedra, Bernadeta Grici¯ut˙e, Matias Grioni, Normunds Gr¯uz¯ıtis, Bruno Guillaume, Céline Guillot-Barbance, Nizar Habash, Jan Hajiˇc, Jan Hajiˇc jr., Mika Hämäläinen, Linh Hà M˜y, Na-Rae Han, Kim Harris, Dag Haug, Johannes Heinecke, Felix Hennig, Barbora Hladká, Jaroslava Hlaváˇcová, Florinel Hociung, Petter Hohle, Jena Hwang, Takumi Ikeda, Radu Ion, Elena Irimia, O. lájídé Ishola, Tomáš Jelínek, Anders Johannsen, Fredrik Jørgensen, Markus Juutinen, Hüner Ka¸sıkara, Andre Kaasen, Nadezhda Kabaeva, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Boris Katz, Tolga Kayadelen, Jessica Kenney, Václava Kettnerová, Jesse Kirchner, Elena Klementieva, Arne Köhn, Kamil Kopacewicz, Natalia Kotsyba, Jolanta Kovalevskait˙e, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Lucia Lam, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, PhÆˇrÆ ˛ang Lê H`ông, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, KyungTae Lim, Maria Liovina, Yuan Li, Nikola Ljubeši´c, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, C˘at˘alina M˘ar˘anduc, David Mareˇcek, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Sarah McGuinness, Gustavo Mendonça, Niko Miekka, Margarita Misirpashayeva, Anna Missilä, C˘at˘alin Mititelu, Maria Mitrofan, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Keiko Sophie Mori, Tomohiko Morioka, Shinsuke Mori, Shigeki Moro, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Robert Munro, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-B¯erzkalne, LÆˇrÆ ˛ang Nguy˜ên Thi., Huy`ên Nguy˜ên Thi. Minh, Yoshihiro Nikaido, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Atul Kr. Ojha, Adédayo.Ì ˘A Olúòkun, Mai Omura, Petya Osenova, Robert Östling, Lilja Øvrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Angelika Peljak-Łapi´nska, Siyao Peng, Cenel-Augusto Perez, Guy Perrier, Daria Petrova, Slav Petrov, Jason Phelan, Jussi Piitulainen, Tommi A Pirinen, Emily Pitler, Barbara Plank, Thierry Poibeau, Larisa Ponomareva, Martin Popel, Lauma Pretkalnin, a, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Peng Qi, Andriela Rääbis, Alexandre Rademaker, Loganathan Ra4620 masamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Ivan Riabov, Michael Rießler, Erika Rimkut˙e, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Valentin Ros,ca, Olga Rudina, Jack Rueter, Shoval Sadde, Benoît Sagot, Shadi Saleh, Alessio Salomoni, Tanja Samardži´c, Stephanie Samson, Manuela Sanguinetti, Dage Särg, Baiba Saul¯ıte, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Muh Shohibussirri, Dmitry Sichinava, Aline Silveira, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Carolyn Spadine, Antonio Stella, Milan Straka, Jana Strnadová, Alane Suhr, Umut Sulubacak, Shingo Suzuki, Zsolt Szántó, Dima Taji, Yuta Takahashi, Fabio Tamburini, Takaaki Tanaka, Isabelle Tellier, Guillaume Thomas, Liisi Torga, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdeˇnka Urešová, Larraitz Uria, Hans Uszkoreit, Andrius Utka, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washington, Maximilan Wendt, Seyi Williams, Mats Wirén, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wróblewska, Mary Yako, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Zhuoran Yu, Zdenˇek Žabokrtský, Amir Zeldes, Manying Zhang, and Hanzhi Zhu. 2019. Universal dependencies 2.5. LINDAT/CLARIAHCZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University. Kelly Zhang and Samuel Bowman. 2018. Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 359–361, Brussels, Belgium. Association for Computational Linguistics. 4621 A Variational Bounds Theorem 2. The estimation error between Gqθ(T, R, e) and the true gain can be upper- and lower-bounded by two distinct Kullback–Leibler divergences. Proof. We first find the error given by our estimate, which is a difference between two KL divergences— as shown in eq. (22) in Figure 1. Making use of this error, we trivially find an upper-bound on the estimation error as ∆= KLqθ1(T | R) −KLqθ2(T, c(R)) (23) ≤KLqθ1(T | R) which follows since KL divergences are never negative. Analogously, we find a lower-bound as ∆= KLqθ1(T | R) −KLqθ2(T, c(R)) (24) ≥−KLqθ2(T, c(R)) B Further Results In this section, we present accuracies for the models trained using BERT, fastText and one-hot embeddings, and the full results on random embeddings. These random embeddings are generated once before the task, at the type level, and kept fixed without training. Table 3 shows that both BERT and fastText present high accuracies at POS labeling in all languages, except Tamil and Marathi. One-hot and random results are considerably worse, as expected, since they could not do more than take random guesses (e.g. guessing the most frequent label in the training test) in any word which was not seen during training. Table 4 presents similar results for dependency labeling, although accuracies for this task are considerably lower. These tables also show how ambiguous the linguistic task is given the word types (H(T | id(R))). These values were calculated using a plug-in estimator on the treebanks—which are known to underestimate entropies when used in undersampled regimes (Archer et al., 2014)—so they should not be considered as good approximations. Even so, we can see that most of the analysed languages are not very ambiguous with respect to POS labeling, and that there is a large variability of uncertainty across languages with respect to both tasks. 4622 G(T, R, e) := H(T; c(R)) −H(T | R) (22) = Hqθ2(T | c(R)) − E r∼p(·)KL(p(· | c(r)) || qθ2(· | c(r))) −Hqθ1(T | R) + E r∼p(·)KL(p(· | r) || qθ1(· | r)) = Hqθ2(T | c(R)) −KLqθ2(T, c(R)) −Hqθ1(T | R) + KLqθ1(T | R) = Hqθ2(T | c(R)) −Hqθ1(T | R) + KLqθ1(T | R) −KLqθ2(T, c(R)) = Gqθ(T, R, e) | {z } estimated gain + KLqθ1(T | R) −KLqθ2(T, c(R)) | {z } estimation error Figure 1: Derivation of the estimation error. accuracies base entropies random Language BERT fastText one-hot random H(T) H(T | id(R)) H(T | c(R)) G(T, R, c) Basque 0.92 0.93 0.82 0.82 3.17 0.13 0.83 0.48 (15.0%) Czech 0.98 0.98 0.91 0.87 3.33 0.06 0.57 0.47 (14.0%) English 0.95 0.90 0.85 0.83 3.61 0.26 0.72 0.48 (13.4%) Finnish 0.95 0.96 0.82 0.81 3.17 0.06 0.87 0.62 (19.6%) Indonesian 0.92 0.92 0.86 0.84 3.24 0.16 0.68 0.30 ( 9.2%) Korean 0.92 0.85 0.73 0.70 3.04 0.14 1.33 1.01 (33.1%) Marathi 0.83 0.79 0.68 0.69 3.17 0.48 1.43 0.67 (21.1%) Tamil 0.88 0.89 0.64 0.68 3.15 0.09 1.41 0.82 (26.2%) Telugu 0.91 0.92 0.78 0.82 2.73 0.07 0.86 0.44 (16.2%) Turkish 0.92 0.95 0.79 0.80 3.03 0.08 0.81 0.45 (14.7%) Urdu 0.92 0.91 0.88 0.87 3.23 0.29 0.59 0.27 ( 8.3%) Table 3: Accuracies of the models trained on BERT, fastText, one-hot and random embeddings for the POS tagging task. accuracies base entropies random Language BERT fastText one-hot random H(T) H(T | id(R)) H(T | c(R)) G(T, R, c) Basque 0.87 0.83 0.71 0.65 4.03 0.55 1.71 1.08 (26.9%) Czech 0.91 0.88 0.80 0.68 4.24 0.78 1.58 1.16 (27.3%) English 0.91 0.78 0.72 0.68 4.48 1.01 1.61 1.16 (25.8%) Finnish 0.87 0.85 0.65 0.56 4.42 0.52 2.21 1.59 (36.1%) Indonesian 0.85 0.76 0.69 0.64 4.16 0.83 1.76 0.99 (23.9%) Korean 0.92 0.84 0.68 0.56 4.17 0.35 2.08 1.68 (40.4%) Marathi 0.75 0.70 0.61 0.62 4.01 0.81 2.12 0.73 (18.2%) Tamil 0.76 0.74 0.51 0.54 3.78 0.31 2.32 1.15 (30.5%) Telugu 0.80 0.78 0.67 0.69 3.64 0.31 1.96 0.88 (24.1%) Turkish 0.77 0.75 0.59 0.54 3.95 0.54 2.14 1.02 (25.8%) Urdu 0.87 0.80 0.76 0.73 3.83 1.02 1.26 0.63 (16.4%) Table 4: Accuracies of the models trained on BERT, fastText, one-hot and random embeddings for the dependency labeling task.
2020
420
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4623 On the Cross-lingual Transferability of Monolingual Representations Mikel Artetxe†∗, Sebastian Ruder‡ , Dani Yogatama‡ †HiTZ Center, University of the Basque Country (UPV/EHU) ‡DeepMind [email protected] {ruder,dyogatama}@google.com Abstract State-of-the-art unsupervised multilingual models (e.g., multilingual BERT) have been shown to generalize in a zero-shot crosslingual setting. This generalization ability has been attributed to the use of a shared subword vocabulary and joint training across multiple languages giving rise to deep multilingual abstractions. We evaluate this hypothesis by designing an alternative approach that transfers a monolingual model to new languages at the lexical level. More concretely, we first train a transformer-based masked language model on one language, and transfer it to a new language by learning a new embedding matrix with the same masked language modeling objective—freezing parameters of all other layers. This approach does not rely on a shared vocabulary or joint training. However, we show that it is competitive with multilingual BERT on standard cross-lingual classification benchmarks and on a new Cross-lingual Question Answering Dataset (XQuAD). Our results contradict common beliefs of the basis of the generalization ability of multilingual models and suggest that deep monolingual models learn some abstractions that generalize across languages. We also release XQuAD as a more comprehensive cross-lingual benchmark, which comprises 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 translated into ten languages by professional translators. 1 Introduction Multilingual pre-training methods such as multilingual BERT (mBERT, Devlin et al., 2019) have been successfully used for zero-shot cross-lingual transfer (Pires et al., 2019; Conneau and Lample, 2019). These methods work by jointly training a ∗Work done as an intern at DeepMind. transformer model (Vaswani et al., 2017) to perform masked language modeling (MLM) in multiple languages, which is then fine-tuned on a downstream task using labeled data in a single language— typically English. As a result of the multilingual pre-training, the model is able to generalize to other languages, even if it has never seen labeled data in those languages. Such a cross-lingual generalization ability is surprising, as there is no explicit cross-lingual term in the underlying training objective. In relation to this, Pires et al. (2019) hypothesized that: . . .having word pieces used in all languages (numbers, URLs, etc), which have to be mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close to a shared space. ...mBERT’s ability to generalize cannot be attributed solely to vocabulary memorization, and that it must be learning a deeper multilingual representation. Cao et al. (2020) echoed this sentiment, and Wu and Dredze (2019) further observed that mBERT performs better in languages that share many subwords. As such, the current consensus of the crosslingual generalization ability of mBERT is based on a combination of three factors: (i) shared vocabulary items that act as anchor points; (ii) joint training across multiple languages that spreads this effect; which ultimately yields (iii) deep cross-lingual representations that generalize across languages and tasks. In this paper, we empirically test this hypothesis by designing an alternative approach that violates all of these assumptions. As illustrated in Figure 1, our method starts with a monolingual transformer trained with MLM, which we transfer to a new language by learning a new embedding matrix through MLM in the new language while freezing parameters of all other layers. This approach only learns new lexical parameters and does not rely on shared 4624 Python [MASK] an interpreted [MASK] language Python is an interpreted programming language pos0 pos1 tok1 pos2 MASK tok2 posN tokN segA segA segA segB ... ... ... CLS EN EN EN (a) English pre-training Seattle es la [MASK] más [MASK] de Washington Seattle es la ciudad más grande de Washington pos0 pos1 tok1 pos2 MASK tok2 posN tokN segA segA segA segB ... ... ... CLS XX XX XX (b) L2 embedding learning males playing soccer [SEP] some men play a sport entailment pos0 pos1 tok1 pos2 label posN tokN segA segA segA segB ... ... ... CLS tok2 EN EN EN (c) English fine-tuning la gente se partía de risa [SEP] a nadie le hizo gracia contradiction pos0 pos1 tok1 pos2 label posN tokN segA segA segA segB ... ... ... CLS tok2 XX XX XX (d) Zero-shot transfer to L2 Figure 1: Four steps for zero-shot cross-lingual transfer: (i) pre-train a monolingual transformer model in English akin to BERT; (ii) freeze the transformer body and learn new token embeddings from scratch for a second language using the same training objective over its monolingual corpus; (iii) fine-tune the model on English while keeping the embeddings frozen; and (iv) zero-shot transfer it to the new language by swapping the token embeddings. vocabulary items nor joint learning. However, we show that it is competitive with joint multilingual pre-training across standard zero-shot cross-lingual transfer benchmarks (XNLI, MLDoc, and PAWSX). We also experiment with a new Cross-lingual Question Answering Dataset (XQuAD), which consists of 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 (Rajpurkar et al., 2016) translated into ten languages by professional translators. Question answering as a task is a classic probe for language understanding. It has also been found to be less susceptible to annotation artifacts commonly found in other benchmarks (Kaushik and Lipton, 2018; Gururangan et al., 2018). We believe that XQuAD can serve as a more comprehensive cross-lingual benchmark and make it publicly available at https://github. com/deepmind/xquad. Our results on XQuAD show that the monolingual transfer approach can be made competitive with mBERT by learning second language-specific transformations via adapter modules (Rebuffiet al., 2017). Our contributions in this paper are as follows: (i) we propose a method to transfer monolingual representations to new languages in an unsupervised fashion (§2)1; (ii) we show that neither a shared subword vocabulary nor joint multilingual training is necessary for zero-shot transfer and find that the effective vocabulary size per language is an important factor for learning multilingual models (§3 and §4); (iii) we show that monolingual models learn abstractions that generalize across languages (§5); and (iv) we present a new cross-lingual question answering dataset (§4). 1This is particularly useful for low-resource languages, since many pre-trained models are currently in English. 2 Cross-lingual Transfer of Monolingual Representations In this section, we propose an approach to transfer a pre-trained monolingual model in one language L1 (for which both task supervision and a monolingual corpus are available) to a second language L2 (for which only a monolingual corpus is available). The method serves as a counterpoint to existing joint multilingual models, as it works by aligning new lexical parameters to a monolingually trained deep model. As illustrated in Figure 1, our proposed method consists of four steps: 1. Pre-train a monolingual BERT (i.e. a transformer) in L1 with masked language modeling (MLM) and next sentence prediction (NSP) objectives on an unlabeled L1 corpus. 2. Transfer the model to a new language by learning new token embeddings while freezing the transformer body with the same training objectives (MLM and NSP) on an unlabeled L2 corpus. 3. Fine-tune the transformer for a downstream task using labeled data in L1, while keeping the L1 token embeddings frozen. 4. Zero-shot transfer the resulting model to L2 by swapping the L1 token embeddings with the L2 embeddings learned in Step 2. We note that, unlike mBERT, we use a separate subword vocabulary for each language, which is trained on its respective monolingual corpus, so the model has no notion of shared subwords. However, the special [CLS], [SEP], [MASK], 4625 [PAD], and [UNK] symbols are shared across languages, and fine-tuned in Step 3.2 We observe further improvements on several downstream tasks using the following extensions to the above method. Language-specific position embeddings. The basic approach does not take into account different word orders commonly found in different languages, as it reuses the position embeddings in L1 for L2. We relax this restriction by learning a separate set of position embeddings for L2 in Step 2 (along with L2 token embeddings).3 We treat the [CLS] symbol as a special case. In the original implementation, BERT treats [CLS] as a regular word with its own position and segment embeddings, even if it always appears in the first position. However, this does not provide any extra capacity to the model, as the same position and segment embeddings are always added up to the [CLS] embedding. Following this observation, we do not use any position and segment embeddings for the [CLS] symbol. Noised fine-tuning. The transformer body in our proposed method is only trained with L1 embeddings as its input layer, but is used with L2 embeddings at test time. To make the model more robust to this mismatch, we add Gaussian noises sampled from the standard normal distribution to the word, position, and segment embeddings during the finetuning step (Step 3). Adapters. We also investigate the possibility of allowing the model to learn better deep representations of L2, while retaining the alignment with L1 using residual adapters (Rebuffiet al., 2017). Adapters are small task-specific bottleneck layers that are added between layers of a pre-trained model. During fine-tuning, the original model parameters are frozen, and only parameters of the adapter modules are learned. In Step 2, when we transfer the L1 transformer to L2, we add a feedforward adapter module after the projection following multi-headed attention and after the two feedforward layers in each transformer layer, similar to Houlsby et al. (2019). Note that the original transformer body is still frozen, and only parameters of 2The rationale behind this is that special symbols are generally task dependent, and given that the fine-tuning in downstream tasks is done exclusively in English, we need to share these symbols to zero-shot transfer to other languages. 3We also freeze the L1 position embeddings in Step 3 accordingly, and the L2 position embeddings are plugged in together with the token embeddings in Step 4. the adapter modules are trainable (in addition to the embedding matrix in L2). 3 Experiments Our goal is to evaluate the performance of different multilingual models in the zero-shot cross-lingual setting to better understand the source of their generalization ability. We describe the models that we compare (§3.1), the experimental setting (§3.2), and the results on three classification datasets: XNLI (§3.3), MLDoc (§3.4) and PAWS-X (§3.5). We discuss experiments on our new XQuAD dataset in §4. In all experiments, we fine-tune a pre-trained model using labeled training examples in English, and evaluate on test examples in other languages via zero-shot transfer. 3.1 Models We compare four main models in our experiments: Joint multilingual models (JOINTMULTI). A multilingual BERT model trained jointly on 15 languages4. This model is analogous to mBERT and closely related to other variants like XLM. Joint pairwise bilingual models (JOINTPAIR). A multilingual BERT model trained jointly on two languages (English and another language). This serves to control the effect of having multiple languages in joint training. At the same time, it provides a joint system that is directly comparable to the monolingual transfer approach in §2, which also operates on two languages. Cross-lingual word embedding mappings (CLWE). The method we described in §2 operates at the lexical level, and can be seen as a form of learning cross-lingual word embeddings that are aligned to a monolingual transformer body. In contrast to this approach, standard cross-lingual word embedding mappings first align monolingual lexical spaces and then learn a multilingual deep model on top of this space. We also include a method based on this alternative approach where we train skip-gram embeddings for each language, and map them to a shared space using VecMap (Artetxe et al., 2018).5 We then train an English BERT model using MLM and NSP on top of the frozen mapped embeddings. The model is 4We use all languages that are included in XNLI (Conneau et al., 2018b). 5We use the orthogonal mode in VecMap and map all languages into English. 4626 then fine-tuned using English labeled data while keeping the embeddings frozen. We zero-shot transfer to a new language by plugging in its respective mapped embeddings. Cross-lingual transfer of monolingual models (MONOTRANS). Our method described in §2. We use English as L1 and try multiple variants with different extensions. 3.2 Setting Vocabulary. We perform subword tokenization using the unigram model in SentencePiece (Kudo and Richardson, 2018). In order to understand the effect of sharing subwords across languages and the size of the vocabulary, we train each model with various settings. We train 4 different JOINTMULTI models with a vocabulary of 32k, 64k, 100k, and 200k subwords. For JOINTPAIR, we train one model with a joint vocabulary of 32k subwords, learned separately for each language pair, and another one with a disjoint vocabulary of 32k subwords per language, learned on its respective monolingual corpus. The latter is directly comparable to MONOTRANS in terms of vocabulary, in that it is restricted to two languages and uses the exact same disjoint vocabulary with 32k subwords per language. For CLWE, we use the same subword vocabulary and investigate two choices: (i) the number of embedding dimensions—300d (the standard in the crosslingual embedding literature) and 768d (equivalent to the rest of the models); and (ii) the self-learning initialization—weakly supervised (based on identically spelled words, Søgaard et al., 2018) and unsupervised (based on the intralingual similarity distribution, Artetxe et al., 2018). Pre-training data. We use Wikipedia as our training corpus, similar to mBERT and XLM (Conneau and Lample, 2019), which we extract using the WikiExtractor tool.6 We do not perform any lowercasing or normalization. When working with languages of different corpus sizes, we use the same upsampling strategy as Conneau and Lample (2019) for both the subword vocabulary learning and the pre-training. Training details. Our implementation is based on the BERT code from Devlin et al. (2019). For adapters, we build on the code by Houlsby et al. (2019). We use the model architecture of 6https://github.com/attardi/ wikiextractor BERTBASE, similar to mBERT. We use the LAMB optimizer (You et al., 2020) and train on 64 TPUv3 chips for 250,000 steps using the same hyperparameters as You et al. (2020). We describe other training details in Appendix A. Our hyperparameter configuration is based on preliminary experiments on the development set of the XNLI dataset. We do not perform any exhaustive hyperparameter search, and use the exact same settings for all model variants, languages, and tasks. Evaluation setting. We perform a single training and evaluation run for each model, and report results in the corresponding test set for each downstream task. For MONOTRANS, we observe stability issues when learning language-specific position embeddings for Greek, Thai and Swahili. The second step would occasionally fail to converge to a good solution. For these three languages, we run Step 2 of our proposed method (§2) three times and pick the best model on the XNLI development set. 3.3 XNLI: Natural Language Inference In natural language inference (NLI), given two sentences (a premise and a hypothesis), the goal is to decide whether there is an entailment, contradiction, or neutral relationship between them (Bowman et al., 2015). We train all models on the MultiNLI dataset (Williams et al., 2018) in English and evaluate on XNLI (Conneau et al., 2018b)—a cross-lingual NLI dataset consisting of 2,500 development and 5,000 test instances translated from English into 14 languages. We report our results on XNLI in Table 1 together with the previous results from mBERT and XLM.7 We summarize our main findings below. JOINTMULTI is comparable with the literature. Our best JOINTMULTI model is substantially better than mBERT, and only one point worse (on average) than the unsupervised XLM model, which is larger in size. A larger vocabulary is beneficial. JOINTMULTI variants with a larger vocabulary perform better. More languages do not improve performance. JOINTPAIR models with a joint vocabulary perform comparably with JOINTMULTI. 7mBERT covers 102 languages and has a shared vocabulary of 110k subwords. XLM covers 15 languages and uses a larger model size with a shared vocabulary of 95k subwords, which contributes to its better performance. 4627 en fr es de el bg ru tr ar vi th zh hi sw ur avg Prev work mBERT 81.4 74.3 70.5 62.1 63.8 58.3 XLM (MLM) 83.2 76.5 76.3 74.2 73.1 74.0 73.1 67.8 68.5 71.2 69.2 71.9 65.7 64.6 63.4 71.5 CLWE 300d ident 82.1 67.6 69.0 65.0 60.9 59.1 59.5 51.2 55.3 46.6 54.0 58.5 48.4 35.3 43.0 57.0 300d unsup 82.1 67.4 69.3 64.5 60.2 58.4 59.2 51.5 56.2 36.4 54.7 57.7 48.2 36.2 33.8 55.7 768d ident 82.4 70.7 71.1 67.6 64.2 61.4 63.3 55.0 58.6 50.7 58.0 60.2 54.8 34.8 48.1 60.1 768d unsup 82.4 70.4 71.2 67.4 63.9 62.8 63.3 54.8 58.3 49.1 57.2 55.7 54.9 35.0 33.9 58.7 JOINT MULTI 32k voc 79.0 71.5 72.2 68.5 66.7 66.9 66.5 58.4 64.4 66.0 62.3 66.4 59.1 50.4 56.9 65.0 64k voc 80.7 72.8 73.0 69.8 69.6 69.5 68.8 63.6 66.1 67.2 64.7 66.7 63.2 52.0 59.0 67.1 100k voc 81.2 74.5 74.4 72.0 72.3 71.2 70.0 65.1 69.7 68.9 66.4 68.0 64.2 55.6 62.2 69.0 200k voc 82.2 75.8 75.7 73.4 74.0 73.1 71.8 67.3 69.8 69.8 67.7 67.8 65.8 60.9 62.3 70.5 JOINT PAIR Joint voc 82.2 74.8 76.4 73.1 72.0 71.8 70.2 67.9 68.5 71.4 67.7 70.8 64.5 64.2 60.6 70.4 Disjoint voc 83.0 76.2 77.1 74.4 74.4 73.7 72.1 68.8 71.3 70.9 66.2 72.5 66.0 62.3 58.0 71.1 MONO TRANS Token emb 83.1 73.3 73.9 71.0 70.3 71.5 66.7 64.5 66.6 68.2 63.9 66.9 61.3 58.1 57.3 67.8 + pos emb 83.8 74.3 75.1 71.7 72.6 72.8 68.8 66.0 68.6 69.8 65.7 69.7 61.1 58.8 58.3 69.1 + noising 81.7 74.1 75.2 72.6 72.9 73.1 70.2 68.1 70.2 69.1 67.7 70.6 62.5 62.5 60.2 70.0 + adapters 81.7 74.7 75.4 73.0 72.0 73.7 70.4 69.9 70.6 69.5 65.1 70.3 65.2 59.6 51.7 69.5 Table 1: XNLI results (accuracy). mBERT results are taken from the official BERT repository, while XLM results are taken from Conneau and Lample (2019). We bold the best result in each section and underline the overall best. A shared subword vocabulary is not necessary for joint multilingual pre-training. The equivalent JOINTPAIR models with a disjoint vocabulary for each language perform better. CLWE performs poorly. Even if it is competitive in English, it does not transfer as well to other languages. Larger dimensionalities and weak supervision improve CLWE, but its performance is still below other models. MONOTRANS is competitive with joint learning. The basic version of MONOTRANS is 3.3 points worse on average than its equivalent JOINTPAIR model. Language-specific position embeddings and noised fine-tuning reduce the gap to only 1.1 points. Adapters mostly improve performance, except for low-resource languages such as Urdu, Swahili, Thai, and Greek. In subsequent experiments, we include results for all variants of MONOTRANS and JOINTPAIR, the best CLWE variant (768d ident), and JOINTMULTI with 32k and 200k voc. 3.4 MLDoc: Document Classification In MLDoc (Schwenk and Li, 2018), the task is to classify documents into one of four different genres: corporate/industrial, economics, government/social, and markets. The dataset is an improved version of the Reuters benchmark (Klementiev et al., 2012), and consists of 1,000 training and 4,000 test documents in 7 languages. We show the results of our MLDoc experiments in Table 2. In this task, we observe that simpler models tend to perform better, and the best overall results are from CLWE. We believe that this can be attributed to: (i) the superficial nature of the task itself, as a model can rely on a few keywords to identify the genre of an input document without requiring any high-level understanding and (ii) the small size of the training set. Nonetheless, all of the four model families obtain generally similar results, corroborating our previous findings that joint multilingual pre-training and a shared vocabulary are not needed to achieve good performance. 3.5 PAWS-X: Paraphrase Identification PAWS is a dataset that contains pairs of sentences with a high lexical overlap (Zhang et al., 2019). The task is to predict whether each pair is a paraphrase or not. While the original dataset is only in English, PAWS-X (Yang et al., 2019) provides human translations into six languages. We evaluate our models on this dataset and show our results in Table 2. Similar to experiments on other datasets, MONOTRANS is competitive with the best joint variant, with a difference of only 0.6 points when we learn language-specific position embeddings. 4 XQuAD: Cross-lingual Question Answering Dataset Our classification experiments demonstrate that MONOTRANS is competitive with JOINTMULTI and JOINTPAIR, despite being multilingual at the embedding layer only (i.e. the transformer body is trained 4628 MLDoc PAWS-X en fr es de ru zh avg en fr es de zh avg Prev work mBERT 83.0 75.0 82.4 71.6 66.2 93.5 85.2 86.0 82.2 75.8 84.5 CLWE 768d ident 94.7 87.3 77.0 88.7 67.6 78.3 82.3 92.8 85.2 85.5 81.6 72.5 83.5 JOINT MULTI 32k voc 92.6 81.7 75.8 85.4 71.5 66.6 78.9 91.9 83.8 83.3 82.6 75.8 83.5 200k voc 91.9 82.1 80.9 89.3 71.8 66.2 80.4 93.8 87.7 87.5 87.3 78.8 87.0 JOINT PAIR Joint voc 93.1 81.3 74.7 87.7 71.5 80.7 81.5 93.3 86.1 87.2 86.0 79.9 86.5 Disjoint voc 93.5 83.1 78.0 86.6 65.5 78.1 80.8 94.0 88.4 88.6 87.5 79.3 87.5 MONO TRANS Token emb 93.5 84.0 76.9 88.7 60.6 83.6 81.2 93.6 87.0 87.1 84.2 78.2 86.0 + pos emb 93.6 79.7 75.7 86.6 61.6 83.0 80.0 94.3 87.3 87.6 86.3 79.0 86.9 + noising 88.2 81.3 72.2 89.4 63.9 65.1 76.7 88.0 83.3 83.2 81.8 77.5 82.7 + adapters 88.2 81.4 76.4 89.6 63.1 77.3 79.3 88.0 84.1 83.0 81.5 73.5 82.0 Table 2: MLDoc and PAWS-X results (accuracy). mBERT results are from Eisenschlos et al. (2019) for MLDoc and from Yang et al. (2019) for PAWS-X, respectively. We bold the best result in each section with more than two models and underline the overall best result. exclusively on English). One possible explanation for this behaviour is that existing cross-lingual benchmarks are flawed and solvable at the lexical level. For example, previous work has shown that models trained on MultiNLI—from which XNLI was derived—learn to exploit superficial cues in the data (Gururangan et al., 2018). To better understand the cross-lingual generalization ability of these models, we create a new Crosslingual Question Answering Dataset (XQuAD). Question answering is a classic probe for natural language understanding (Hermann et al., 2015) and has been shown to be less susceptible to annotation artifacts than other popular tasks (Kaushik and Lipton, 2018). In contrast to existing classification benchmarks, extractive question answering requires identifying relevant answer spans in longer context paragraphs, thus requiring some degree of structural transfer across languages. XQuAD consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.18 together with their translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Both the context paragraphs and the questions are translated by professional human translators from Gengo9. In order to facilitate easy annotations of answer spans, we choose the most frequent answer for each question and mark its beginning and end in the context paragraph using special symbols, instructing translators to keep these symbols in the relevant positions in 8We choose SQuAD 1.1 to avoid translating unanswerable questions. 9https://gengo.com their translations. Appendix B discusses the dataset in more details. We show F1 scores on XQuAD in Table 3 (we include exact match scores in Appendix C). Similar to our findings in the XNLI experiment, the vocabulary size has a large impact on JOINTMULTI, and JOINTPAIR models with disjoint vocabularies perform the best. The gap between MONOTRANS and joint models is larger, but MONOTRANS still performs surprisingly well given the nature of the task. We observe that learning language-specific position embeddings is helpful in most cases, but completely fails for Turkish and Hindi. Interestingly, the exact same pre-trained models (after Steps 1 and 2) do obtain competitive results in XNLI (§3.3). In contrast to results on previous tasks, adding adapters to allow a transferred monolingual model to learn higher level abstractions in the new language significantly improves performance, resulting in a MONOTRANS model that is comparable to the best joint system. 5 Discussion Joint multilingual training. We demonstrate that sharing subwords across languages is not necessary for mBERT to work, contrary to a previous hypothesis by Pires et al. (2019). We also do not observe clear improvements by scaling the joint training to a large number of languages. Rather than having a joint vs. disjoint vocabulary or two vs. multiple languages, we find that an important factor is the effective vocabulary size per language. When using a joint vocabulary, only a subset of the tokens is effectively shared, while the 4629 en es de el ru tr ar vi th zh hi avg mBERT 88.9 75.5 70.6 62.6 71.3 55.4 61.5 69.5 42.7 58.0 59.2 65.0 CLWE 768d ident 84.2 58.0 51.2 41.1 48.3 24.2 32.8 29.7 23.8 19.9 21.7 39.5 JOINT MULTI 32k voc 79.3 59.5 60.3 49.6 59.7 42.9 52.3 53.6 49.3 50.2 42.3 54.5 200k voc 82.7 74.3 71.3 67.1 70.2 56.6 64.8 67.6 58.6 51.5 58.3 65.7 JOINT PAIR Joint voc 82.8 68.3 73.6 58.8 69.8 53.8 65.3 69.5 56.3 58.8 57.4 64.9 Disjoint voc 83.3 72.5 72.8 67.3 71.7 60.5 66.5 68.9 56.1 60.4 56.7 67.0 MONO TRANS Token emb 83.9 67.9 62.1 63.0 64.2 51.2 61.0 64.1 52.6 51.4 50.9 61.1 + pos emb 84.7 73.1 65.9 66.5 66.2 16.2 59.5 65.8 51.5 56.4 19.3 56.8 + noising 82.1 68.4 68.2 67.3 67.5 17.5 61.2 65.9 57.5 58.5 21.5 57.8 + adapters 82.1 70.8 70.6 67.9 69.1 61.3 66.0 67.0 57.5 60.5 61.9 66.8 Table 3: XQuAD results (F1). We bold the best result in each section and underline the overall best result. mono xx→en aligned en en fr es de el bg ru tr ar vi zh avg Semantic WiC 59.1 58.2 62.5 59.6 58.0 59.9 56.9 57.7 58.5 59.7 57.8 56.7 58.7 SCWS 45.9 44.3 39.7 34.1 39.1 38.2 28.9 32.6 42.1 45.5 35.3 31.8 37.4 Syntactic Subject-verb agreement 86.5 58.2 64.0 65.7 57.6 67.6 58.4 73.6 59.6 61.2 62.1 61.1 62.7 Reflexive anaphora 79.2 60.2 60.7 66.6 53.3 63.6 56.0 75.4 69.4 81.6 58.4 55.2 63.7 Table 4: Semantic and syntactic probing results of a monolingual model and monolingual models transferred to English. Results are on the Word-in-Context (WiC) dev set, the Stanford Contextual Word Similarity (SCWS) test set, and the syntactic evaluation (syn) test set (Marvin and Linzen, 2018). Metrics are accuracy (WiC), Spearman’s r (SCWS), and macro-averaged accuracy (syn). rest tends to occur in only one language. As a result, multiple languages compete for allocations in the shared vocabulary. We observe that multilingual models with larger vocabulary sizes obtain consistently better results. It is also interesting that our best results are generally obtained by the JOINTPAIR systems with a disjoint vocabulary, which guarantees that each language is allocated 32k subwords. As such, we believe that future work should treat the effective vocabulary size as an important factor. Transfer of monolingual representations. MONOTRANS is competitive even in the most challenging scenarios. This indicates that joint multilingual pre-training is not essential for cross-lingual generalization, suggesting that monolingual models learn linguistic abstractions that generalize across languages. To get a better understanding of this phenomenon, we probe the representations of MONOTRANS. As existing probing datasets are only available in English, we train monolingual representations in non-English languages and transfer them to English. We probe representations from the resulting English models with the Word in Context (WiC; Pilehvar and Camacho-Collados, 2019), Stanford Contextual Word Similarity (SCWS; Huang et al., 2012), and the syntactic evaluation (Marvin and Linzen, 2018) datasets. We provide details of our experimental setup in Appendix D and show a summary of our results in Table 4. The results indicate that monolingual semantic representations learned from non-English languages transfer to English to a degree. On WiC, models transferred from non-English languages are comparable with models trained on English. On SCWS, while there are more variations, models trained on other languages still perform surprisingly well. In contrast, we observe larger gaps in the syntactic evaluation dataset. This suggests that transferring syntactic abstractions is more challenging than semantic abstractions. We leave a more thorough investigation of whether joint multilingual pre-training reduces to learning a lexical-level alignment for future work. CLWE. CLWE models—although similar in spirit to MONOTRANS—are only competitive on the easiest and smallest task (MLDoc), and perform poorly on the more challenging ones (XNLI and XQuAD). While previous work has questioned evaluation methods in this research area (Glavaˇs et al., 2019; 4630 Artetxe et al., 2019), our results provide evidence that existing methods are not competitive in challenging downstream tasks and that mapping between two fixed embedding spaces may be overly restrictive. For that reason, we think that designing better integration techniques of CLWE to downstream models is an important future direction. Lifelong learning. Humans learn continuously and accumulate knowledge throughout their lifetime. In contrast, existing multilingual models focus on the scenario where all training data for all languages is available in advance. The setting to transfer a monolingual model to other languages is suitable for the scenario where one needs to incorporate new languages into an existing model, while no longer having access to the original data. Such a scenario is of significant practical interest, since models are often released without the data they are trained on. In that regard, our work provides a baseline for multilingual lifelong learning. 6 Related Work Unsupervised lexical multilingual representations. A common approach to learn multilingual representations is based on cross-lingual word embedding mappings. These methods learn a set of monolingual word embeddings for each language and map them to a shared space through a linear transformation. Recent approaches perform this mapping with an unsupervised initialization based on heuristics (Artetxe et al., 2018) or adversarial training (Zhang et al., 2017; Conneau et al., 2018a), which is further improved through self-learning (Artetxe et al., 2017). The same approach has also been adapted for contextual representations (Schuster et al., 2019). Unsupervised deep multilingual representations. In contrast to the previous approach, which learns a shared multilingual space at the lexical level, state-of-the-art methods learn deep representations with a transformer. Most of these methods are based on mBERT. Extensions to mBERT include scaling it up and incorporating parallel data (Conneau and Lample, 2019), adding auxiliary pretraining tasks (Huang et al., 2019), and encouraging representations of translations to be similar (Cao et al., 2020). Concurrent to this work, Tran (2020) propose a more complex approach to transfer a monolingual BERT to other languages that achieves results similar to ours. However, they find that post-hoc embedding learning from a random initialization does not work well. In contrast, we show that monolingual representations generalize well to other languages and that we can transfer to a new language by learning new subword embeddings. Contemporaneous work also shows that a shared vocabulary is not important for learning multilingual representations (K et al., 2020; Wu et al., 2019), while Lewis et al. (2019) propose a question answering dataset that is similar in spirit to ours but covers fewer languages and is not parallel across all of them. 7 Conclusions We compared state-of-the-art multilingual representation learning models and a monolingual model that is transferred to new languages at the lexical level. We demonstrated that these models perform comparably on standard zero-shot crosslingual transfer benchmarks, indicating that neither a shared vocabulary nor joint pre-training are necessary in multilingual models. We also showed that a monolingual model trained on a particular language learns some semantic abstractions that are generalizable to other languages in a series of probing experiments. Our results and analysis contradict previous theories and provide new insights into the basis of the generalization abilities of multilingual models. To provide a more comprehensive benchmark to evaluate cross-lingual models, we also released the Cross-lingual Question Answering Dataset (XQuAD). Acknowledgements We thank Chris Dyer and Phil Blunsom for helpful comments on an earlier draft of this paper and Tyler Liechty for assistance with datasets. References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long 4631 Papers), pages 789–798, Melbourne, Australia. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. Bilingual lexicon induction through unsupervised machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5002–5007, Florence, Italy. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual Alignment of Contextual Word Representations. In Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems 32, pages 7059–7069. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018a. Word Translation Without Parallel Data. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kadras, Sylvain Gugger, and Jeremy Howard. 2019. MultiFiT: Efficient Multi-lingual Language Model Fine-tuning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5702–5707, Hong Kong, China. Association for Computational Linguistics. Goran Glavaˇs, Robert Litschko, Sebastian Ruder, and Ivan Vuli´c. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 710–721, Florence, Italy. Association for Computational Linguistics. Yoav Goldberg. 2019. Assessing BERT’s Syntactic Abilities. CoRR, abs/1901.05287. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693–1701. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790–2799, Long Beach, California, USA. PMLR. Eric Huang, Richard Socher, Christopher Manning, and Andrew Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 873–882, Jeju Island, Korea. Association for Computational Linguistics. Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pretraining with multiple cross-lingual tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2485–2494, Hong Kong, China. Association for Computational Linguistics. Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-Lingual Ability of Multilingual BERT: An Empirical Study. In Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? A critical investigation of popular benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4632 5010–5015, Brussels, Belgium. Association for Computational Linguistics. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of the 24th International Conference on Computational Linguistics, pages 1459–1474, Mumbai, India. The COLING 2012 Organizing Committee. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Patrick Lewis, Barlas O˘guz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. MLQA: Evaluating Cross-lingual Extractive Question Answering. arXiv preprint arXiv:1910.07475. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Association for Computational Linguistics. Mohammad Taher Pilehvar and Jose CamachoCollados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267–1273, Minneapolis, Minnesota. Association for Computational Linguistics. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996– 5001, Florence, Italy. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems 30, pages 506–516. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1599–1613, Minneapolis, Minnesota. Association for Computational Linguistics. Holger Schwenk and Xian Li. 2018. A corpus for multilingual document classification in eight languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. European Languages Resources Association (ELRA). Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788, Melbourne, Australia. Association for Computational Linguistics. Ke Tran. 2020. From English to Foreign Languages: Transferring Pre-trained Language Models. arXiv preprint arXiv:2002.07306. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Emerging cross-lingual structure in pretrained language models. arXiv preprint arXiv:1911.01464. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics. Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-x: A cross-lingual adversarial dataset for paraphrase identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687–3692, Hong Kong, China. Association for Computational Linguistics. Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 4633 2020. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes. In Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959–1970, Vancouver, Canada. Association for Computational Linguistics. Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298–1308, Minneapolis, Minnesota. Association for Computational Linguistics. 4634 A Training details In contrast to You et al. (2020), we train with a sequence length of 512 from the beginning, instead of dividing training into two stages. For our proposed approach, we pre-train a single English model for 250k steps, and perform another 250k steps to transfer it to every other language. For the fine-tuning, we use Adam with a learning rate of 2e-5, a batch size of 32, and train for 2 epochs. The rest of the hyperparameters follow Devlin et al. (2019). For adapters, we follow the hyperparameters employed by Houlsby et al. (2019). For our proposed model using noised fine-tuning, we set the standard deviation of the Gaussian noise to 0.075 and the mean to 0. B XQuAD dataset details XQuAD consists of a subset of 240 context paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with their translations into 10 other languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Table 5 comprises some statistics of the dataset, while Table 6 shows one example from it. So as to guarantee the diversity of the dataset, we selected 5 context paragraphs at random from each of the 48 documents in the SQuAD 1.1 development set, and translate both the context paragraphs themselves as well as all their corresponding questions. The translations were done by professional human translators through the Gengo10 service. The translation workload was divided into 10 batches for each language, which were submitted separately to Gengo. As a consequence, different parts of the dataset might have been translated by different translators. However, we did guarantee that all paragraphs and questions from the same document were submitted in the same batch to make sure that their translations were consistent. Translators were specifically instructed to transliterate all named entities to the target language following the same conventions used in Wikipedia, from which the English context paragraphs in SQuAD originally come. In order to facilitate easy annotations of answer spans, we chose the most frequent answer for each question and marked its beginning and end in the context paragraph through placeholder symbols 10https://gengo.com (e.g. “this is *0* an example span #0# delimited by placeholders”). Translators were instructed to keep the placeholders in the relevant position in their translations, and had access to an online validator to automatically verify that the format of their output was correct. C Additional results We show the complete results for cross-lingual word embedding mappings and joint multilingual training on MLDoc and PAWS-X in Table 7. Table 8 reports exact match results on XQuAD, while Table 9 reports results for all cross-lingual word embedding mappings and joint multilingual training variants. D Probing experiments As probing tasks are only available in English, we train monolingual models in each L2 of XNLI and then align them to English. To control for the amount of data, we use 3M sentences both for pretraining and alignment in every language.11 Semantic probing We evaluate the representations on two semantic probing tasks, the Word in Context (WiC; Pilehvar and Camacho-Collados, 2019) and Stanford Contextual Word Similarity (SCWS; Huang et al., 2012) datasets. WiC is a binary classification task, which requires the model to determine if the occurrences of a word in two contexts refer to the same or different meanings. SCWS requires estimating the semantic similarity of word pairs that occur in context. For WiC, we train a linear classifier on top of the fixed sentence pair representation. For SCWS, we obtain the contextual representations of the target word in each sentence by averaging its constituent word pieces, and calculate their cosine similarity. Syntactic probing We evaluate the same models in the syntactic probing dataset of Marvin and Linzen (2018) following the same setup as Goldberg (2019). Given minimally different pairs of English sentences, the task is to identify which of them is grammatical. Following Goldberg (2019), we feed each sentence into the model masking the word in which it differs from its pair, and pick the one to which the masked language model assigns the highest probability mass. Similar to Goldberg 11We leave out Thai, Hindi, Swahili, and Urdu as their corpus size is smaller than 3M. 4635 en es de el ru tr ar vi th zh hi Paragraph 142.4 160.7 139.5 149.6 133.9 126.5 128.2 191.2 158.7 147.6 232.4 Question 11.5 13.4 11.0 11.7 10.0 9.8 10.7 14.8 11.5 10.5 18.7 Answer 3.1 3.6 3.0 3.3 3.1 3.1 3.1 4.5 4.1 3.5 5.6 Table 5: Average number of tokens for each language in XQuAD. The statistics were obtained using Jieba for Chinese and the Moses tokenizer for the rest of the languages. Lang Context paragraph w/ answer spans Questions en The heat required for boiling the water and supplying the steam can be derived from various sources, most commonly from [burning combustible materials]1 with an appropriate supply of air in a closed space (called variously [combustion chamber]2, firebox). In some cases the heat source is a nuclear reactor, geothermal energy, [solar]3 energy or waste heat from an internal combustion engine or industrial process. In the case of model or toy steam engines, the heat source can be an [electric]4 heating element. 1. What is the usual source of heat for boiling water in the steam engine? 2. Aside from firebox, what is another name for the space in which combustible material is burned in the engine? 3. Along with nuclear, geothermal and internal combustion engine waste heat, what sort of energy might supply the heat for a steam engine? 4. What type of heating element is often used in toy steam engines? es El calor necesario para hervir el agua y suministrar el vapor puede derivarse de varias fuentes, generalmente de [la quema de materiales combustibles]1 con un suministro adecuado de aire en un espacio cerrado (llamado de varias maneras: [c´amara de combusti´on]2, chimenea...). En algunos casos la fuente de calor es un reactor nuclear, energ´ıa geot´ermica, [energ´ıa solar]3 o calor residual de un motor de combusti´on interna o proceso industrial. En el caso de modelos o motores de vapor de juguete, la fuente de calor puede ser un calentador [el´ectrico]4. 1. ¿Cu´al es la fuente de calor habitual para hacer hervir el agua en la m´aquina de vapor? 2. Aparte de c´amara de combusti´on, ¿qu´e otro nombre que se le da al espacio en el que se quema el material combustible en el motor? 3. Junto con el calor residual de la energ´ıa nuclear, geot´ermica y de los motores de combusti´on interna, ¿qu´e tipo de energ´ıa podr´ıa suministrar el calor para una m´aquina de vapor? 4. ¿Qu´e tipo de elemento calefactor se utiliza a menudo en las m´aquinas de vapor de juguete? zh 让水沸腾以提供蒸汽所需热量有多种来源,最常见 的是在封闭空间(别称有[燃 燃 燃烧 烧 烧室 室 室]2 、火箱)中供 应适量空气来[燃 燃 燃烧 烧 烧可 可 可燃 燃 燃材 材 材料 料 料]1 。在某些情况下, 热源是核反应堆、地热能、[太 太 太阳 阳 阳能 能 能]3 或来自内燃 机或工业过程的废气。如果是模型或玩具蒸汽发动 机,还可以将[电 电 电]4 加热元件作为热源。 1. 蒸汽机中让水沸腾的常用热源是什么? 2. 除了火箱之外,发动机内燃烧可燃材料的空 间的别名是什么? 3. 除了核能、地热能和内燃机废气以外,还有 什么热源可以为蒸汽机供能? 4. 玩具蒸汽机通常使用什么类型的加热元件? Table 6: An example from XQuAD. The full dataset consists of 240 such parallel instances in 11 languages. (2019), we discard all sentence pairs from the Marvin and Linzen (2018) dataset that differ in more than one subword token. Table 10 reports the resulting coverage split into different categories, and we show the full results in Table 11. 4636 MLDoc PAWS-X en fr es de ru zh avg en fr es de zh avg CLWE 300d ident 93.1 85.2 74.8 86.5 67.4 72.7 79.9 92.8 83.9 84.7 81.1 72.9 83.1 300d unsup 93.1 85.0 75.0 86.1 68.8 76.0 80.7 92.8 83.9 84.2 81.3 73.5 83.1 768d ident 94.7 87.3 77.0 88.7 67.6 78.3 82.3 92.8 85.2 85.5 81.6 72.5 83.5 768d unsup 94.7 87.5 76.9 88.1 67.6 72.7 81.2 92.8 84.3 85.5 81.8 72.1 83.3 JOINT MULTI 32k voc 92.6 81.7 75.8 85.4 71.5 66.6 78.9 91.9 83.8 83.3 82.6 75.8 83.5 64k voc 92.8 80.8 75.9 84.4 67.4 64.8 77.7 93.7 86.9 87.8 85.8 80.1 86.8 100k voc 92.2 74.0 77.2 86.1 66.8 63.8 76.7 93.1 85.9 86.5 84.1 76.3 85.2 200k voc 91.9 82.1 80.9 89.3 71.8 66.2 80.4 93.8 87.7 87.5 87.3 78.8 87.0 Table 7: MLDoc and PAWS-X results (accuracy) for all CLWE and JOINTMULTI variants. en es de el ru tr ar vi th zh hi avg CLWE 300d ident 72.5 39.7 33.6 23.5 29.9 11.8 18.5 16.1 16.5 17.9 10.0 26.4 300d unsup 72.5 39.2 34.5 24.8 30.4 12.2 14.7 6.5 16.0 16.1 10.4 25.2 768d ident 73.1 40.6 32.9 20.1 30.7 10.8 14.2 11.8 12.3 14.0 9.1 24.5 768d unsup 73.1 41.5 31.8 21.0 31.0 12.1 14.1 10.5 10.0 13.2 10.2 24.4 JOINT MULTI 32k voc 68.3 41.3 44.3 31.8 45.0 28.5 36.2 36.9 39.2 40.1 27.5 39.9 64k voc 71.3 48.2 49.9 40.2 50.9 33.7 41.5 45.0 43.7 36.9 36.8 45.3 100k voc 71.5 49.8 51.2 41.1 51.8 33.0 43.7 45.3 44.5 40.8 36.6 46.3 200k voc 72.1 55.3 55.2 48.0 52.7 40.1 46.6 47.6 45.8 38.5 42.3 49.5 JOINT PAIR Joint voc 71.7 47.8 57.6 38.2 53.4 35.0 47.4 49.7 44.3 47.1 38.8 48.3 Disjoint voc 72.2 52.5 56.5 47.8 55.0 43.7 49.0 49.2 43.9 50.0 39.1 50.8 MONO TRANS Subword emb 72.3 47.4 42.4 43.3 46.4 30.1 42.6 45.1 39.0 39.0 32.4 43.6 + pos emb 72.9 54.3 48.4 47.3 47.6 6.1 41.1 47.6 38.6 45.0 9.0 41.6 + noising 69.6 51.2 52.4 50.2 51.0 6.9 43.0 46.3 46.4 48.1 10.7 43.2 + adapters 69.6 51.4 51.4 50.2 51.4 44.5 48.8 47.7 45.6 49.2 45.1 50.5 Table 8: XQuAD results (exact match). en es de el ru tr ar vi th zh hi avg CLWE 300d ident 84.1 56.8 51.3 43.4 47.4 25.5 35.5 34.5 28.7 25.3 22.1 41.3 300d unsup 84.1 56.8 51.8 42.7 48.5 24.4 31.5 20.5 29.8 26.6 23.1 40.0 768d ident 84.2 58.0 51.2 41.1 48.3 24.2 32.8 29.7 23.8 19.9 21.7 39.5 768d unsup 84.2 58.9 50.3 41.0 48.5 25.8 31.3 27.3 24.4 20.9 21.6 39.5 JOINT MULTI 32k voc 79.3 59.5 60.3 49.6 59.7 42.9 52.3 53.6 49.3 50.2 42.3 54.5 64k voc 82.3 66.5 67.1 60.9 67.0 50.3 59.4 62.9 55.1 49.2 52.2 61.2 100k voc 82.6 68.9 68.9 61.0 67.8 48.1 62.1 65.6 57.0 52.3 53.5 62.5 200k voc 82.7 74.3 71.3 67.1 70.2 56.6 64.8 67.6 58.6 51.5 58.3 65.7 Table 9: XQuAD results (F1) for all CLWE and JOINTMULTI variants. 4637 coverage Subject-verb agreement Simple 80 / 140 (57.1%) In a sentential complement 960 / 1680 (57.1%) Short VP coordination 480 / 840 (57.1%) Long VP coordination 320 / 400 (80.0%) Across a prepositional phrase 15200 / 22400 (67.9%) Across a subject relative clause 6400 / 11200 (57.1%) Across an object relative clause 17600 / 22400 (78.6%) Across an object relative (no that) 17600 / 22400 (78.6%) In an object relative clause 5600 / 22400 (25.0%) In an object relative (no that) 5600 / 22400 (25.0%) Reflexive anaphora Simple 280 / 280 (100.0%) In a sentential complement 3360 / 3360 (100.0%) Across a relative clause 22400 / 22400 (100.0%) Table 10: Coverage of our systems for the syntactic probing dataset. We report the number of pairs in the original dataset by Marvin and Linzen (2018), those covered by the vocabulary of our systems and thus used in our experiments, and the corresponding percentage. mono xx→en aligned en en fr es de el bg ru tr ar vi zh avg Subject-verb agreement Simple 91.2 76.2 90.0 93.8 56.2 97.5 56.2 78.8 72.5 67.5 81.2 71.2 76.5 In a sentential complement 99.0 65.7 94.0 92.1 62.7 98.3 80.7 74.1 89.7 71.5 78.9 79.6 80.7 Short VP coordination 100.0 64.8 66.9 69.8 64.4 77.9 60.2 88.8 76.7 73.3 62.7 64.4 70.0 Long VP coordination 96.2 58.8 53.4 60.0 67.5 62.5 59.4 92.8 62.8 75.3 62.5 64.4 65.4 Across a prepositional phrase 89.7 56.9 54.6 52.8 53.4 53.4 54.6 79.6 54.3 59.9 57.9 56.5 57.6 Across a subject relative clause 91.6 49.9 51.9 48.3 52.0 53.2 56.2 78.1 48.6 58.9 55.4 52.3 55.0 Across an object relative clause 79.2 52.9 56.2 53.3 52.4 56.6 57.0 63.1 52.3 59.0 54.9 54.5 55.7 Across an object relative (no that) 77.1 54.1 55.9 55.9 53.1 56.2 59.7 63.3 53.1 54.9 55.9 56.8 56.3 In an object relative clause 74.6 50.6 59.9 66.4 59.4 61.1 49.8 60.4 42.6 45.3 56.9 56.3 55.3 In an object relative (no that) 66.6 51.7 57.1 64.9 54.9 59.4 49.9 57.0 43.7 46.6 54.9 55.4 54.1 Macro-average 86.5 58.2 64.0 65.7 57.6 67.6 58.4 73.6 59.6 61.2 62.1 61.1 62.7 Reflexive anaphora Simple 90.0 69.3 63.6 67.9 55.0 69.3 56.4 89.3 75.0 87.1 58.6 60.7 68.4 In a sentential complement 82.0 56.3 63.9 73.2 52.7 65.7 59.1 70.8 71.7 84.5 59.8 53.9 64.7 Across a relative clause 65.6 55.0 54.5 58.6 52.3 55.8 52.5 66.1 61.4 73.3 56.9 50.9 57.9 Macro-average 79.2 60.2 60.7 66.6 53.3 63.6 56.0 75.4 69.4 81.6 58.4 55.2 63.7 Table 11: Complete syntactic probing results (accuracy) of a monolingual model and monolingual models transferred to English on the syntactic evaluation test set (Marvin and Linzen, 2018).
2020
421
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4638–4655 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4638 Similarity Analysis of Contextual Word Representation Models John M. Wu∗1 Yonatan Belinkov*12 Hassan Sajjad3 Nadir Durrani3 Fahim Dalvi3 James Glass1 1MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA, USA 2Harvard John A. Paulson School of Engineering and Applied Sciences, Cambridge, MA, USA 3Qatar Computing Research Institute, HBKU Research Complex, Doha 5825, Qatar {johnmwu,belinkov,glass}@csail.mit.edu {hsajjad,ndurrani,faimaduddin}@qf.org.qa Abstract This paper investigates contextual word representation models from the lens of similarity analysis. Given a collection of trained models, we measure the similarity of their internal representations and attention. Critically, these models come from vastly different architectures. We use existing and novel similarity measures that aim to gauge the level of localization of information in the deep models, and facilitate the investigation of which design factors affect model similarity, without requiring any external linguistic annotation. The analysis reveals that models within the same family are more similar to one another, as may be expected. Surprisingly, different architectures have rather similar representations, but different individual neurons. We also observed differences in information localization in lower and higher layers and found that higher layers are more affected by fine-tuning on downstream tasks.1 1 Introduction Contextual word representations such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) have led to impressive improvements in a variety of tasks. With this progress in breaking the state of the art, interest in the community has expanded to analyzing such models in an effort to illuminate their inner workings. A number of studies have analyzed the internal representations in such models and attempted to assess what linguistic properties they capture. A prominent methodology for this is to train supervised classifiers based on the models’ learned representations, and predict various linguistic properties. For instance, Liu et al. (2019a) train such classifiers on 16 linguistic tasks, including part-of-speech tagging, chunking, named ∗Equal contribution 1The code is available at https://github.com/ johnmwu/contextual-corr-analysis. entity recognition, and others. Such an approach may reveal how well representations from different models, and model layers, capture different properties. This approach, known as analysis by probing classifiers, has been used in numerous other studies (Belinkov and Glass, 2019). While the above approach yields compelling insights, its applicability is constrained by the availability of linguistic annotations. In addition, comparisons of different models are indirect, via the probing accuracy, making it difficult to comment on the similarities and differences of different models. In this paper, we develop complementary methods for analyzing contextual word representations based on their inter- and intra-similarity. While this similarity analysis does not tell us absolute facts about a model, it allows comparing representations without subscribing to one type of information. We consider several kinds of similarity measures based on different levels of localization/distributivity of information: from neuron-level pairwise comparisons of individual neurons to representation-level comparisons of full word representations. We also explore similarity measures based on models’ attention weights, in the case of Transformer models (Vaswani et al., 2017). This approach enables us to ask questions such as: Do different models behave similarly on the same inputs? Which design choices determine whether models behave similarly or differently? Are certain model components more similar than others across architectures? Is the information in a given model more or less localized (encoded in individual components) compared to other models?2 2Hinton (1984) defines a localist representation as one using one computing element for each represented entity. In a language model, this definition would depend on what linguistic concepts we deem important, and is thus somewhat arbitrary. We develop a measure that aims to capture this notion of localization without recourse to a specific set of linguistic properties. 4639 We choose a collection of pre-trained models that aim to capture diverse aspects of modeling choices, including the building blocks (Recurrent Networks, Transformers), language modeling objective (unidirectional, bidirectional, masked, permutation-based), and model depth (from 3 to 24 layers). More specifically, we experiment with variants of ELMo, BERT, GPT (Radford et al., 2018), GPT2 (Radford et al., 2019), and XLNet (Yang et al., 2019). Notably, we use the same methods to investigate the effect that fine-tuning on downstream tasks has on the model similarities. Our analysis yields the following insights: • Different architectures may have similar representations, but different individual neurons. Models within the same family are more similar to one another in terms of both their neurons and full representations. • Lower layers are more similar than higher layers across architectures. • Higher layers have more localized representations than lower layers. • Higher layers are more affected by fine-tuning than lower layers, in terms of their representations and attentions, and thus are less similar to the higher layers of pre-trained models. • Fine-tuning affects the localization of information, causing high layers to be less localized. Finally, we show how the similarity analysis can motivate a simple technique for efficient finetuning, where freezing the bottom layers of models still maintains comparable performance to finetuning the full network, while reducing the finetuning time. 2 Related Work The most common approach for analyzing neural network models in general, and contextual word representations in particular, is by probing classifiers (Ettinger et al., 2016; Belinkov et al., 2017; Adi et al., 2017; Conneau et al., 2018; Hupkes et al., 2018), where a classifier is trained on a corpus of linguistic annotations using representations from the model under investigation. For example, Liu et al. (2019a) used this methodology for investigating the representations of contextual word representations on 16 linguistic tasks. One limitation of this approach is that it requires specifying linguistic tasks of interest and obtaining suitable annotations. This potentially limits the applicability of the approach. An orthogonal analysis method relies on similarities between model representations. Bau et al. (2019) used this approach to analyze the role of individual neurons in neural machine translation. They found that individual neurons are important and interpretable. However, their work was limited to a certain kind of architecture (specifically, a recurrent one). In contrast, we compare models of various architectures and objective functions. Other work used similarity measures to study learning dynamics in language models by comparing checkpoints of recurrent language models (Morcos et al., 2018), or a language model and a part-ofspeech tagger (Saphra and Lopez, 2019). Our work adopts a similar approach, but explores a range of similarity measures over different contextual word representation models. Questions of localization and distributivity of information have been under investigation for a long time in the connectionist cognitive science literature (Page, 2000; Bowers, 2002; Gayler and Levy, 2011). While neural language representations are thought to be densely distributed, several recent studies have pointed out the importance of individual neurons (Qian et al., 2016; Shi et al., 2016; Radford et al., 2017; Lakretz et al., 2019; Bau et al., 2019; Dalvi et al., 2019; Baan et al., 2019). Our study contributes to this line of work by designing measures of localization and distributivity of information in a collection of models. Such measures may facilitate incorporating neuron interactions in new training objectives (Li et al., 2020). 3 Similarity Measures We present five groups of similarity measures, each capturing a different similarity notion. Consider a collection of M models {f(m)}M m=1, yielding word representations h(m) l and potentially attention weights α(m) l at each layer l. Let k index neurons h(m) l [k] or attention heads α(m) l [k]. h(m) l [k], α(m) l [k] are real (resp. matrix) valued, ranging over words (resp. sentences) in a corpus. Our similarity measures are of the form sim(h(m) l , h(m′) l′ ) or sim(α(m) l , α(m′) l′ ), that is, they find similarities between layers. We present the full mathematical details in appendix A. 4640 3.1 Neuron-level similarity A neuron-level similarity measure captures similarity between pairs of individual neurons. We consider one such measure, neuronsim, following Bau et al. (2019). For every neuron k in layer l, neuronsim finds the maximum correlation between it and another neuron in another layer l′. Then, it averages over neurons in layer l.3 This measure aims to capture localization of information. It is high when two layers have pairs of neurons with similar behavior. This is far more likely when the models have local, rather than distributed representations, because for distributed representations to have similar pairs of neurons the information must be distributed similarly. 3.2 Mixed neuron–representation similarity A mixed neuron–representation similarity measure captures a similarity between a neuron in one model with a layer in another. We consider one such measure, mixedsim: for every neuron k in layer l, regress to it from all neurons in layer l′ and measure the quality of fit. Then, average over neurons in l. It is possible that some information is localized in one layer but distributed in another layer. mixedsim captures such a phenomenon. 3.3 Representation-level similarity A representation-level measure finds correlations between a full model (or layer) simultaneously. We consider three such measures: two based on canonical correlation analysis (CCA), namely singular vector CCA (svsim; Raghu et al. 2017) and projection weighted CCA (pwsim; Morcos et al. 2018), in addition to linear centered kernel alignment (ckasim; Kornblith et al. 2019).4 These measures emphasize distributivity of information— if two layers behave similarly over all of their neurons, the similarity will be higher, even if no individual neuron has a similar matching pair or is represented well by all neurons in the other layer. Other representation-level similarity measures may be useful, such as representation similarity analysis (RSA; Kriegeskorte et al. 2008), which 3In this and other measures that allowed it, we also experimented with averaging just the top k neurons (or canonical correlations, in Section 3.3 measures) in case most of the layer is noise. Heatmaps are in the online repository. We did not notice major differences. 4We also experimented with the RBF variant, which is computationally demanding. We found similar patterns in preliminary experiments, so we focus on the linear variant. has been used to analyze neural network representations (Bouchacourt and Baroni, 2018; Chrupała and Alishahi, 2019; Chrupała, 2019), or other variants of CCA, such as deep CCA (Andrew et al., 2013). We leave the explorations of such measures to future work. 3.4 Attention-level similarity Previous work analyzing network similarity has mostly focused on representation-based similarities (Morcos et al., 2018; Saphra and Lopez, 2019; Voita et al., 2019a). Here we consider similarity based on attention weights in Transformer models. Analogous to a neuron-level similarity measure, an attention-level similarity measure finds the most “correlated” other attention head. We consider three methods to correlate heads, based on the norm of two attention matrices α(m) l [k], α(m′) l′ [k′], their Pearson correlation, and their Jensen–Shannon divergence.5 We then average over heads k in layer l, as before. These measures are similar to neuronsim in that they emphasize localization of information—if two layers have pairs of heads that are very similar in their behavior, the similarity will be higher. 3.5 Distributed attention-level similarity We consider parallels of the representation-level similarity. To compare the entire attention heads in two layers, we concatenate all weights from all heads in one layer to get an attention representation. That is, we obtain attention representations α(m) l [h], a random variable ranging over pairs of words in the same sentence, such that α(m) l,(i,j)[h] is a scalar value. It is a matrix where the first axis is indexed by word pairs, and the second by heads. We flatten these matrices and use svsim, pwsim, and ckasim as above for comparing these attention representations. These measures should be high when the entire set of heads in one layer is similar to the set of heads in another layer. 4 Experimental Setup Models We choose a collection of pre-trained models that aim to capture diverse aspects of modeling choices, including the building blocks (RNNs, Transformers), language modeling objective (unidirectional, bidirectional, masked, permutationbased), and model depth (from 3 to 24 layers). 5Other recent work has used the Jensen–Shannon divergence to measure distances between attention heads (Clark et al., 2019; Jain and Wallace, 2019). 4641 (a) neuronsim (b) ckasim Figure 1: Similarity heatmaps of layers in various models under neuron- and representation-level similarities. ELMo variants We use the original ELMo (Peters et al., 2018a), a bidirectional RNN model with two hidden layers, as well as two variants – a deeper and larger 4-layer model and a Transformer-equivalent variant (Peters et al., 2018b). GPT variants We use both the original OpenAI Transformer (GPT; Radford et al. 2018) and its successor GPT2 (Radford et al., 2019), in the small and medium model sizes. These are all unidirectional Transformer LMs. BERT We use BERT-base/large (12/24 layers; Devlin et al. 2019): Transformer LMs trained with a masked LM objective function.6 XLNet We use XLNet-base/large (12/24 layers; Yang et al. 2019). Both are Transformer LM with a permutation-based objective function. Data For analyzing the models, we run them on the Penn Treebank development set (Marcus et al., 1993), following the setup taken by Liu et al. (2019a) in their probing classifier experiments.7 We collect representations and attention weights from each layer in each model for computing the similarity measures. We obtain representations for models used in Liu et al. (2019a) from their implementation and use the transformers library (Wolf et al., 2019) to extract other representations. We aggregate sub-word representations by taking the representation of the last sub-word, following Liu et al. (2019a), and sub-word attentions by summing up at6BERT is also trained with a next sentence prediction objective, although this may be redundant (Liu et al., 2019b). 7As suggested by a reviewer, we verified that the results are consistent when using another dataset (Appendix B.1). tention to sub-words and averaging attention from sub-words, following Clark et al. (2019), which guarantees that the attention from each word sums to one. 5 Similarity of Pre-trained Models 5.1 Neuron and representation levels Figure 1 shows heatmaps of similarities between layers of different models, according to neuronsim and ckasim. Heatmaps for the other measures are provided in Appendix B. The heatmaps reveal the following insights. Different architectures may have similar representations, but different individual neurons Comparing the heatmaps, the most striking distinction is that neuronsim induces a distinctly blockdiagonal heatmap, reflecting high intra-model similarities and low inter-model similarities. As neuronsim is computed by finding pairs of very similar neurons, this means that within a model, different layers have similar individual neurons, but across models, neurons are very different. In contrast, ckasim- show fairly significant similarities across models (high values off the main diagonal), indicating that different models generate similar representations. The most similar cross-model similarities are found by mixedsim (Figure 8d in Appendix B), which suggests that individual neurons in one model may be well represented by a linear combination of neurons in another layer. The other representation-level similarities (ckasim, svsim, and pwsim), also show cross-model similarities, albeit to a lesser extent. 4642 Models within the same family are more similar The heatmaps show greater similarity within a model than across models (bright diagonal). Different models sharing the same architecture and objective function, but different depths, also exhibit substantial representation-level similarities – for instance, compare BERT-base and BERTlarge or ELMo-original and ELMo-4-layers, under ckasim (Figure 1b). The Transformer-ELMo presents an instructive case, as it shares ELMo’s bidirectional objective function but with Transformers rather than RNNs. Its layers are mostly similar to themselves and the other ELMo models, but also to GPT, more so than to BERT or XLNet, which use masked and permutation language modeling objectives, respectively. Thus it seems that the objective has a considerable impact on representation similarity.8 The fact that models within the same family are more similar to each other supports the choice of Saphra and Lopez (2019) to use models of similar architecture when probing models via similarity measures across tasks.9 A possible confounder is that models within the same family are trained on the same data, but cross-family models are trained on different data. It is difficult to control for this given the computational demands of training such models and the current practice in the community of training models on ever increasing sizes of data, rather than a standard fixed dataset. However, Figure 2 shows similarity heatmaps of layers from pre-trained and randomly initialized models using ckasim, exhibiting high intra-model similarities, as before. Interestingly, models within the same family (either GPT2 or XLNet) are more similar than across families, even with random models, indicating that intrinsic aspects of models in a given family make them similar, regardless of the training data or process.10 As may be expected, in most cases, the similarity between random and pretrained models is small. One exception is the vertical bands in the lower triangle, which indicate that the bottom layers of trained models are similar to many layers of random models. This may be due to random models merely transferring information from bottom to top, without meaningful processing. 8Voita et al. (2019a) found that differences in the training objective result in more different representations (according to pwsim) than differences in random initialization. 9We thank a reviewer for pointing out this connection. 10Relatedly, Morcos et al. (2018) found similar CCA coefficients in representations from recurrent language models trained on different datasets. Figure 2: ckasim similarity heatmap of layers in base and random models. Still, it may explain why random models sometimes generate useful features (Wieting and Kiela, 2019). Meanwhile, as pointed out by a reviewer, lower layers converge faster, leaving them closer to their initial random state (Raghu et al., 2017; Shwartz-Ziv and Tishby, 2017). Lower layers are more similar across architectures The representation-level heatmaps (Figure 1) all exhibit horizontal stripes at lower layers, especially with ckasim, indicating that lower layers are more similar than higher layers when comparing across models. This pattern can be explained by lower layers being closer to the input, which is always the same words. A similar observation has been made for vision networks (Raghu et al., 2017).11 Voita et al. (2019a) found a similar pattern comparing Transformer models with different objective functions. Adjacent layers are more similar All heatmaps in Figure 1 exhibit a very bright diagonal and bright lines slightly off the main diagonal, indicating that adjacent layers are more similar. This is even true when comparing layers of different models (notice the diagonal nature of BERT-base vs. BERT-large in Figure 1b), indicating that layers at the same relative depth are more similar than layers at different relative depths. A similar pattern was found in vision networks (Kornblith et al., 2019). Some patterns are unexpected. For instance, comparing 11Raghu et al. (2017) also used svsim to study recurrent language models, showing that lower layers converge faster. Although they have not looked at cross-model comparisons, faster convergence may be consistent with fewer changes during training, which can explain why lower layers are more similar across architectures. 4643 XLNet with the BERT models, it appears that lower layers of XLNet are more similar to higher layers of BERT. We speculate that this is an artifact of the permutation-based objective in XLNet. We found corroborating evidence for this observation in ongoing parallel work, where we compare BERT and XLNet at different layers through word(Liu et al., 2019a) and sentence-level tasks (Wang et al., 2019): while BERT requires mostly features from higher layers to achieve state-of-the-art results, in XLNet lower and middle layers suffice. Higher layers are more localized than lower ones The different similarity measures capture different levels of localization vs. distributivity of information. neuronsim captures cases of localized information, where pairs of neurons in different layers behave similarly. svsim captures cases of distributed information, where the full layer representation is similar. To quantify these differences, we compute the average similarity according to each measure when comparing each layer to all other layers. In effect, we take the column-wise mean of each heatmap. We do this separately for svsim as the distributed measure and neuronsim as the localized measure, and we subtract the svsim means from the neuronsim means. This results in a measure of localization per layer. Figure 3 shows the results. In all models, the localization score mostly increases with layers, indicating that information tends to become more localized at higher layers.12 This pattern is quite consistent, but may be surprising given prior observations on lower layers capturing phenomena that operate at a local context (Tenney et al., 2019), which presumably require fewer neurons. However, this pattern is in line with observations made by Ethayarajh (2019), who reported that upper layers of pre-trained models produce more context-specific representations. There appears to be a correspondence between our localization score and Ethayarajh’s context-specificity score, which is based on the cosine similarity of representations of the same word in different contexts. Thus, more localized representations are also more context-specific. A direct comparison between context-specificity and localization may be fruitful avenue for future work. Some models seem less localized than others, 12Recurrent models are more monotonous than Transformers, echoing results by Liu et al. (2019a) on language modeling perplexity in different layers. Figure 3: Localization score of various model layers. especially the ELMo variants, although this may be confounded by their being shallower models. BERT and XLNet models first decrease in localization and then increase. Interestingly, XLNet’s localization score decreases towards the end, suggesting that its top layer representations are less context-specific. 5.2 Attention level Figure 4 shows similarity heatmaps using two of the attention-level similarity measures—Jensen– Shannon and ckasim—for layers from 6 models: BERT-base/large, GPT2-small/medium, and XLNet-base/large. Layers within the same model or model family exhibit higher similarities (bright block diagonal), in line with results from the representation-level analysis. In particular, under both measures, GPT2 layers are all very similar to each other, except for the bottom ones. Comparing the two heatmaps, the localized Jensen– Shannon similarity (Figure 4a) shows higher similarities off the main diagonal than the distributed ckasim measure (Figure 4b), indicating that different models have pairs of attention heads that behave similarly, although the collection of heads from two different models is different in the aggregate. Heatmaps for the other measures are provided in Appendix C, following primarily the same patterns. It is difficult to identify patterns within a given model family. However, under the attention-based svsim (Figure 10d in Appendix C), and to a lesser extent pwsim (Figure 10e), we see bright diagonals when comparing different GPT2 (and to a lesser extent XLNet and BERT) models, such that layers at the same relative depth are similar in their attention patterns. We have seen such a result also in the representation-based similarities. 4644 (a) Jensen–Shannon (b) ckasim Figure 4: Similarity heatmaps of layers in various models under two attention-level similarity measures. Adjacent layers seem more similar in some cases, but these patterns are often swamped by the large intra-model similarity. This result differs from our results for representational similarity. GPT2 models, at all layers, are similar to the bottom layers of BERT-large, expressed in bright vertical bands. In contrast, GPT2 models do not seem to be especially similar to XLNet. Comparing XLNet and BERT, we find that lower layers of XLNet are quite similar to higher layers of BERT-base and middle layers of BERT-large. This parallels the findings from comparing representations of XLNet and BERT, which we conjecture is the result of the permutation-based objective in XLNet. In general, we find the attention-based similarities to be mostly in line with the neuron- and representation-level similarities. Nevertheless, they appear to be harder to interpret, as fine-grained patterns are less noticeable. One might mention in this context concerns regarding the reliability of attention weights for interpreting the importance of input words in a model (Jain and Wallace, 2019; Serrano and Smith, 2019; Brunner et al., 2020). However, characterizing the effect of such concerns on our attention-based similarity measures is beyond the current scope. 6 Similarity of Fine-tuned Models How does fine-tuning on downstream tasks affect model similarity? In this section, we compare pretrained models and their fine-tuned versions. We use four of the GLUE tasks (Wang et al., 2019): MNLI A multi-genre natural language inference dataset (Williams et al., 2018), where the task is to predict whether a premise entails a hypothesis. QNLI A conversion of the Stanford question answering dataset (Rajpurkar et al., 2016), where the task is to determine whether a sentence contains the answer to a question. QQP A collection of question pairs from the Quora website, where the task is to determine whether two questions are semantically equivalent. SST-2 A binary sentiment analysis task using the Stanford sentiment treebank (Socher et al., 2013). 6.1 Results Top layers are more affected by fine-tuning Figure 5 shows representation-level ckasim similarity heatmaps of pre-trained (not fine-tuned) and fine-tuned versions of BERT and XLNet. The most striking pattern is that the top layers are more affected by fine-tuning than the bottom layers, as evidenced by the low similarity of high layers of the pre-trained models with their fine-tuned counterparts. Hao et al. (2019) also observed that lower layers of BERT are less affected by fine-tuning than top layers, by visualizing the training loss surfaces.13 In Appendix D, we demonstrate that this observation can motivate a more efficient finetuning process, where some of the layers are frozen while others are fine-tuned. There are some task-specific differences. In BERT, the top layers of the SST-2-fine-tuned model 13A reviewer commented that this pattern seems like a natural consequence of back-propagation, which we concur with, although in on-going work we found that middle layers of XLNet lead to more gains when fine-tuned. Future work can also explore the effect of optimization on the similarity measures. 4645 (a) BERT (b) XLNet Figure 5: ckasim similarity heatmaps of layers in base (pre-trained, not fine-tuned) and fine-tuned models. (a) BERT (b) XLNet Figure 6: Jensen–Shannon attention similarity heatmaps of layers in base (pre-trained, not fine-tuned) and finetuned models. are affected more than other layers. This may be because SST-2 is a sentence classification task, while the other tasks are sentence-pair classification. A potential implication of this is that non-SST-2 tasks can contribute to one another in a multi-task finetuning setup. In contrast, in XLNet, fine-tuning on any task leads to top layers being very different from all layers of models fine-tuned on other tasks. This suggests that XLNet representations become very task-specific, and thus multi-task fine-tuning may be less effective with XLNet than with BERT. Observing the attnsim similarity based on Jensen–Shannon divergence for base and fine-tuned models (Figure 6), we again see that top layers have lower similarities, implying that they undergo greater changed during fine-tuning. Other attentionbased measures behaved similarly (not shown). Kovaleva et al. (2019) made a similar observation by comparing the cosine similarity of attention matrices in BERT, although they did not perform crosstask comparisons. In fact, the diagonals within each block indicate that bottom layers remain similar to one another even when fine-tuning on different tasks, while top layers diverge after finetuning. The vertical bands at layers 0 mean that many higher layers have a head that is very similar to a head from the first layer, that is, a form of redundancy, which can explain why many heads can be pruned (Michel et al., 2019; Voita et al., 2019b; Kovaleva et al., 2019). Comparing BERT and XLNet, the vertical bands at the top layers of BERT (especially in MNLI, QQI, and SST-2) suggest that some top layers are very similar to any other layer. In XLNet, top MNLI layers are quite 4646 (a) BERT (b) XLNet Figure 7: Localization scores per layer in base and fine-tuned models. different from any other layer. Thus different objective functions impact the attention heads differently under fine-tuning. Fine-tuning affects localization Figure 7 shows localization scores for different layers in pretrained and fine-tuned models. In contrast to the pre-trained models, the fine-tuned ones decrease in localization at the top layers. This decrease may be the result of top layers learning high-level tasks, which require multiple neurons to capture properly. 7 Conclusion In this work, we analyzed various prominent contextual word representations from the perspective of similarity analysis. We compared different layers of pre-trained models using both localized and distributed measures of similarity, at neuron, representation, and attention levels. We found that different architectures often have similar internal representations, but differ at the level of individual neurons. We also observed that higher layers are more localized than lower ones. Comparing finetuned and pre-trained models, we found that higher layers are more affected by fine-tuning in their representations and attention weights, and become less localized. These findings motivated experimenting with layer-selective fine-tuning, where we were able to obtain good performance while freezing the lower layers and only fine-tuning the top ones. Our approach is complementary to the linguistic analysis of models via probing classifiers. An exciting direction for future work is to combine the two approaches in order to identify which linguistic properties are captured in model components that are similar to one another, or explicate how localization of information contributes to the learnability of particular properties. It may be insightful to compare the results of our analysis to the loss surfaces of the same models, especially before and after fine-tuning (Hao et al., 2019). One could also study whether a high similarity entail that two models converged to a similar solution. Our localization score can also be compared to other aspects of neural representations, such as gradient distributions and their relation to memorization/generalization (Arpit et al., 2017). Finally, the similarity analysis may also help improve model efficiency, for instance by pointing to components that do not change much during fine-tuning and can thus be pruned. Acknowledgements We thank Nelson Liu for providing some of the representations analyzed in this work. We also thank the anonymous reviewers for their many valuable comments. This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Y.B. is also supported by the Harvard Mind, Brain, and Behavior Initiative (MBB). References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In Proceedings of the Internaltional Confernece for Learning Representations (ICLR). Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. 2013. Deep canonical correlation analysis. In Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of 4647 Machine Learning Research, pages 1247–1255, Atlanta, Georgia, USA. PMLR. Devansh Arpit, Stanisław Jastrz˛ebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon LacosteJulien. 2017. A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 233– 242, International Convention Centre, Sydney, Australia. PMLR. Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, and Elia Bruni. 2019. On the realization of compositionality in neural networks. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 127– 137, Florence, Italy. Association for Computational Linguistics. D. Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2019. Identifying and controlling important neurons in neural machine translation. In International Conference on Learning Representations (ICLR). Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do Neural Machine Translation Models Learn about Morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver. Association for Computational Linguistics. Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72. Diane Bouchacourt and Marco Baroni. 2018. How agents see things: On visual representations in an emergent language game. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 981–985, Brussels, Belgium. Association for Computational Linguistics. Jeffrey S Bowers. 2002. Challenging the widespread assumption that connectionism and distributed representations go hand-in-hand. Cognitive Psychology, 45(3):413 – 445. Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2020. On identifiability in transformers. In International Conference on Learning Representations. Grzegorz Chrupała. 2019. Symbolic inductive bias for visually grounded learning of spoken language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6452–6462, Florence, Italy. Association for Computational Linguistics. Grzegorz Chrupała and Afra Alishahi. 2019. Correlating neural and symbolic representations of language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2952–2962, Florence, Italy. Association for Computational Linguistics. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT’s attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics. Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, D Anthony Bau, and James Glass. 2019. What is one grain of sand in the desert? analyzing individual neurons in deep NLP models. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. Association for Computational Linguistics. Kawin Ethayarajh. 2019. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics. Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134–139, Berlin, Germany. Association for Computational Linguistics. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 4648 1615–1625, Copenhagen, Denmark. Association for Computational Linguistics. Ross W. Gayler and Simon D. Levy. 2011. Compositional connectionism in cognitive science ii: the localist/distributed dimension. Connection Science, 23(2):85–89. Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019. Visualizing and understanding the effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4141– 4150, Hong Kong, China. Association for Computational Linguistics. Geoffrey E Hinton. 1984. Distributed representations. Technical Report CMU-CS-84-157, Carnegie Mellon University. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907–926. Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543–3556, Minneapolis, Minnesota. Association for Computational Linguistics. Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural network representations revisited. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3519–3529, Long Beach, California, USA. PMLR. Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4364–4373, Hong Kong, China. Association for Computational Linguistics. Nikolaus Kriegeskorte, Marieke Mur, and Peter Bandettini. 2008. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2:4. Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Baroni. 2019. The emergence of number and syntax units in LSTM language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 11–20, Minneapolis, Minnesota. Association for Computational Linguistics. Jian Li, Xing Wang, Baosong Yang, Shuming Shi, Michael R Lyu, and Zhaopeng Tu. 2020. Neuron interaction based representation composition for neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 14014– 14024. Curran Associates, Inc. Ari Morcos, Maithra Raghu, and Samy Bengio. 2018. Insights on representational similarity in neural networks with canonical correlation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. CesaBianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 5727– 5736. Curran Associates, Inc. Mike Page. 2000. Connectionist modelling in psychology: A localist manifesto. Behavioral and Brain Sciences, 23(4):443â ˘A¸S467. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. 4649 Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Brussels, Belgium. Association for Computational Linguistics. Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016. Analyzing linguistic knowledge in sequential model of sentence. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 826–835, Austin, Texas. Association for Computational Linguistics. Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. 2017. SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6078– 6087. Curran Associates, Inc. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Naomi Saphra and Adam Lopez. 2019. Understanding learning dynamics of language models with SVCCA. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3257–3267, Minneapolis, Minnesota. Association for Computational Linguistics. Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931–2951, Florence, Italy. Association for Computational Linguistics. Xing Shi, Kevin Knight, and Deniz Yuret. 2016. Why neural translations are the right length. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2278–2282, Austin, Texas. Association for Computational Linguistics. Ravid Shwartz-Ziv and Naftali Tishby. 2017. Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810. Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 2897– 2904, Reykjavik, Iceland. European Language Resources Association (ELRA). Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pages 1631–1642. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4395–4405, Hong Kong, China. Association for Computational Linguistics. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019b. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy. Association for Computational Linguistics. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations. John Wieting and Douwe Kiela. 2019. No training required: Exploring random encoders for sentence classification. In International Conference on Learning Representations. 4650 Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5753– 5763. Curran Associates, Inc. 4651 A Mathematical Details of Similarity Measures We assume a fixed corpus with W = P i Wi total words, and W (2) = P i W 2 i total pairs. Here Wi is the number of words in sentence i. A representational layer h(m) l may be seen as a W × Nm matrix, where Nm is the number of neurons (per layer) in model m. A single neuron h(m) l [k] (really h(m) l [:, k]) is a W × 1 column vector. An attention head α(m) l [k] may be seen as a random variable ranging over sentences si and taking matrix values α(m) l [k](si) ∈Rti×ti, ti = len(si). A.1 Neuron-level similarity For a given neuron h(m) l [k], we define neuronsim(h(m) l [k], h(m′) l′ ) = max k′ |ρ(h(m′) l′ [k′], h(m) l [k])| as the maximum correlation between it and another neuron in some layer (Bau et al., 2019). Here ρ is the Pearson correlation. This naturally gives rise to an aggregate measure at the layer level: neuronsim(h(m) l , h(m′) l′ ) = 1 Nm X k neuronsim(h(m) l [k], h(m′) l′ ) A.2 Mixed neuron–representation similarity We define mixedsim(h(m) l [k], h(m′) l′ ) := lstsq(h(m′) l′ , h(m) l [k]).r where .r is the r-value associated with the regression, the norm of the prediction divided by the norm of the regressand. As before, this is extended to the layer level: mixedsim(h(m) l , h(m′) l′ ) = 1 Nm X k mixedsim(h(m) l [k], h(m′) l′ ) A.3 Representation-level similarity In the following, let Z denote a column centering transformation. For a given matrix A, the sum of each column in ZA is zero. SVCCA Given two layers X, Y = Zh(mx) lx , Zh(my) ly we compute the truncated principal components X′, Y′ = Ux[:, : lx], Uy[:, : ly] where Ux are the left singular vectors of X, and lx is the index required to account for 99% of the variance. Uy and ly are defined analogously. The SVCCA correlations, ρSV CCA, are defined as: u, ρSV CCA, v = SVD(X′T Y′) The SVCCA similarity, svsim(h(mx) lx , h(my) ly ), is the mean of ρSV CCA. PWCCA Identical to SVCCA, except the computation of similarity is a weighted mean. Using the same notation as above, we define canonical vectors, HX := X′u HY := Y′v We define alignments AX := abs HT XX  AY := abs HT YY  where abs is the element-wise absolute value. The weights are αx := weights(AX1), αy := weights(AY1) where 1 is the column vector of all ones, and weights normalizes a vector to sum to 1. The PWCCA similarity is pwsim(h(mx) lx , h(my) ly ) := αT x ρSV CCA pwsim(h(my) ly , h(mx) lx ) := αT y ρSV CCA It is asymmetric. CKA We use the same notation as above. Given two layers, X, Y = Zh(mx) lx , Zh(my) ly the CKA similarity is ckasim(h(mx) lx , h(my) ly ) := XT Y 2 ∥XT X∥∥YT Y∥ where ∥·∥is the Frobenius norm. It is symmetric. A.4 Attention-level similarity We define attnsim(α(m) l [k], α(m′) l′ ) = max k′ h Sim(α(m′) l′ [k′], α(m) l [k]) i We consider three such values of Sim. • Matrix norm: for each sentence si, compute the Frobenius norm α(m′) l′ [h′](si) −α(m) l [h](si) . Then average over sentences in the corpus. • Pearson correlation: for every word xi, compare the attention distributions the two heads 4652 induce from xi to all words under Pearson correlation: ρ  α(m′) l′,i [h′], α(m) l,i [h]  . Then average over words in the corpus. • Jensen–Shannon divergence: for every word xi, compare the attention distributions under Jensen–Shannon divergence: 1 2 KL(α(m′) l′,i [h′] β) + 1 2 KL(α(m) l,i [h] β), where KL is the KLdivergence and β is the average of the two attention distributions. Then average of words in the corpus. As before, this gives rise to aggregate measures at the layer level by averaging over heads h. B Additional Representation-level Similarity Heatmaps Figure 8 shows additional representation-level similarity heatmaps. B.1 Effect of Data Used for Similarity Measures The majority of the experiments reported in the paper are using the Penn Treebank for calculating the similarity measures. Here we show that the results are consistent when using a different dataset, namely the Universal Dependencies English Web Treebank (Silveira et al., 2014). We repeat the experiment reported in Section 5.1. The resulting heatmaps, shown in Figure 9, are highly similar to those generated using the Penn Treebank, shown in Figure 8. C Additional Attention-level Similarity Heatmaps Figure 10 shows additional attention-level similarity heatmaps. D Efficient Fine-tuning The analysis results showed that lower layers of the models go through limited changes during finetuning compared to higher layers. We use this insight to improve the efficiency of the fine-tuning process. In standard fine-tuning, back-propagation is done on the full network. We hypothesize that we can reduce the number of these operations by freezing the lower layers of the model since they are the least affected during the fine-tuning process. We experiment with freezing top and bottom layers of the network during the fine-tuning process. Different from prior work (Raghu et al., 2017; Felbo Froze SST-2 MNLI QNLI QQP BERT 0 92.43 84.05 91.40 91.00 Top 4 91.86 82.86 91.09 90.97 Bot. 4 92.43 84.16 91.85 90.86 Top 6 91.97 82.53 90.13 90.61 Bot. 6 93.00 84.00 91.80 90.71 XLNet 0 93.92 85.97 90.35 90.55 Top 4 92.89 85.55 87.96 90.92 Bot. 4 93.12 86.04 90.65 89.36 Top 6 93.12 84.84 87.88 90.75 Bot. 6 93.92 85.64 90.99 89.02 Table 1: Freezing top/bottom 4/6 layers of BERT and XLNet during fine-tuning. et al., 2017; Howard and Ruder, 2018), we freeze the selected layers for the complete fine-tuning process in contrast to freezing various layers for a fraction of the training time. We use the default parameters settings provided in the Transformer library (Wolf et al., 2019): batch size = 8, learning rate = 5e−5, Adam optimizer with epsilon = 1e−8, and number of epochs = 3. Table 1 presents the results on BERT and XLNet. On all of the tasks except QQP, freezing the bottom layers resulted in better performance than freezing the top layers. One interesting observation is that as we increase the number of bottom layers for freezing to six, the performance marginally degrades while saving a lot more computation. Surprisingly, on SST-2 and QNLI, freezing the bottom six layers resulted in better or equal performance than not freezing any layers of both models. With freezing the bottom six layers, one can save backpropagation computation by more than 50%. 4653 (a) ckasim (b) svsim (c) pwsim (d) mixedsim Figure 8: Similarity heatmaps of layers in various models under different representation-level similarity measures. 4654 (a) neuronsim (b) ckasim (c) svsim (d) pwsim (e) mixedsim Figure 9: Similarity heatmaps of layers in various models under neuron-level and representation-level similarity measures, using the English Web Treebank corpus. 4655 (a) Matrix norm (b) Jensen–Shannon (c) Pearson (d) svsim (e) pwsim (f) ckasim Figure 10: Similarity heatmaps of layers in various models under different attention-level similarity measures.
2020
422
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4656–4667 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4656 SenseBERT: Driving Some Sense into BERT Yoav Levine Barak Lenz Or Dagan Ori Ram Dan Padnos Or Sharir Shai Shalev-Shwartz Amnon Shashua Yoav Shoham AI21 Labs, Tel Aviv, Israel {yoavl,barakl,ord,orir,...}@ai21.com Abstract The ability to learn from large unlabeled corpora has allowed neural language models to advance the frontier in natural language understanding. However, existing self-supervision techniques operate at the word form level, which serves as a surrogate for the underlying semantic content. This paper proposes a method to employ weak-supervision directly at the word sense level. Our model, named SenseBERT, is pre-trained to predict not only the masked words but also their WordNet supersenses. Accordingly, we attain a lexicalsemantic level language model, without the use of human annotation. SenseBERT achieves significantly improved lexical understanding, as we demonstrate by experimenting on SemEval Word Sense Disambiguation, and by attaining a state of the art result on the ‘Word in Context’ task. 1 Introduction Neural language models have recently undergone a qualitative leap forward, pushing the state of the art on various NLP tasks. Together with advances in network architecture (Vaswani et al., 2017), the use of self-supervision has proven to be central to these achievements, as it allows the network to learn from massive amounts of unannotated text. The self-supervision strategy employed in BERT (Devlin et al., 2019) involves masking some of the words in an input sentence, and then training the model to predict them given their context. Other proposed approaches for self-supervised objectives, including unidirectional (Radford et al., 2019), permutational (Yang et al., 2019), or word insertionbased (Chan et al., 2019) methods, operate similarly, over words. However, since a given word form can possess multiple meanings (e.g., the word ‘bass’ can refer to a fish, a guitar, a type of singer, etc.), the word itself is merely a surrogate of its actual meaning in a given context, referred to as its sense. Indeed, the word-form level is viewed as a surface level which often introduces challenging ambiguity (Navigli, 2009). In this paper, we bring forth a novel methodology for applying weak-supervision directly on the level of a word’s meaning. By infusing wordsense information into BERT’s pre-training signal, we explicitely expose the model to lexical semantics when learning from a large unannotated corpus. We call the resultant sense-informed model SenseBERT. Specifically, we add a maskedword sense prediction task as an auxiliary task in BERT’s pre-training. Thereby, jointly with the standard word-form level language model, we train a semantic-level language model that predicts the missing word’s meaning. Our method does not require sense-annotated data; self-supervised learning from unannotated text is facilitated by using WordNet (Miller, 1998), an expert constructed inventory of word senses, as weak supervision. We focus on a coarse-grained variant of a word’s sense, referred to as its WordNet supersense, in order to mitigate an identified brittleness of finegrained word-sense systems, caused by arbitrary sense granularity, blurriness, and general subjectiveness (Kilgarriff, 1997; Schneider, 2014). WordNet lexicographers organize all word senses into 45 supersense categories, 26 of which are for nouns, 15 for verbs, 3 for adjectives and 1 for adverbs (see full supersense table in the supplementary materials). Disambiguating a word’s supersense has been widely studied as a fundamental lexical categorization task (Ciaramita and Johnson, 2003; Basile, 2012; Schneider and Smith, 2015). We employ the masked word’s allowed supersenses list from WordNet as a set of possible labels for the sense prediction task. The labeling of words with a single supersense (e.g., ‘sword’ has only the supersense noun.artifact) is straightforward: We 4657 train the network to predict this supersense given the masked word’s context. As for words with multiple supersenses (e.g., ‘bass’ can be: noun.food, noun.animal, noun.artifact, noun.person, etc.), we train the model to predict any of these senses, leading to a simple yet effective soft-labeling scheme. We show that SenseBERTBASE outscores both BERTBASE and BERTLARGE by a large margin on a supersense variant of the SemEval Word Sense Disambiguation (WSD) data set standardized in Raganato et al. (2017). Notably, SenseBERT receives competitive results on this task without funetuning, i.e., when training a linear classifier over the pretrained embeddings, which serves as a testament for its self-acquisition of lexical semantics. Furthermore, we show that SenseBERTBASE surpasses BERTLARGE in the Word in Context (WiC) task (Pilehvar and Camacho-Collados, 2019) from the SuperGLUE benchmark (Wang et al., 2019), which directly depends on word-supersense awareness. A single SenseBERTLARGE model achieves state of the art performance on WiC with a score of 72.14, improving the score of BERTLARGE by 2.5 points. 2 Related Work Neural network based word embeddings first appeared as a static mapping (non-contextualized), where every word is represented by a constant pretrained embedding (Mikolov et al., 2013; Pennington et al., 2014). Such embeddings were shown to contain some amount of word-sense information (Iacobacci et al., 2016; Yuan et al., 2016; Arora et al., 2018; Le et al., 2018). Additionally, sense embeddings computed for each word sense in the word-sense inventory (e.g. WordNet) have been employed, relying on hypernymity relations (Rothe and Sch¨utze, 2015) or the gloss for each sense (Chen et al., 2014). These approaches rely on static word embeddings and require a large amount of annotated data per word sense. The introduction of contextualized word embeddings (Peters et al., 2018), for which a given word’s embedding is context-dependent rather than precomputed, has brought forth a promising prospect for sense-aware word embeddings. Indeed, visualizations in Reif et al. (2019) show that sense sensitive clusters form in BERT’s word embedding space. Nevertheless, we identify a clear gap in this abilty. We show that a vanilla BERT model trained with the current word-level self-supervision, burdened with the implicit task of disambiguating word meanings, often fails to grasp lexical semantics, exhibiting high supersense misclassification rates. Our suggested weakly-supervised word-sense signal allows SenseBERT to significantly bridge this gap. Moreover, SenseBERT exhibits an improvement in lexical semantics ability (reflected by the Word in Context task score) even when compared to models with WordNet infused linguistic knowledge. Specifically we compare to Peters et al. (2019) who re-contextualize word embeddings via a wordto-entity attention mechanism (where entities are WordNet lemmas and synsets), and to Loureiro and Jorge (2019) which construct sense embeddings from BERT’s word embeddings and use the WordNet graph to enhance coverage (see quantitative comparison in table 3). 3 Incorporating Word-Supersense Information in Pre-training In this section, we present our proposed method for integrating word sense-information within SenseBERT’s pre-training. We start by describing the vanilla BERT architecture in subsection 3.1. We conceptually divide it into an internal transformer encoder and an external mapping W which translates the observed vocabulary space into and out of the transformer encoder space [see illustration in figure 1(a)]. In the subsequent subsections, we frame our contribution to the vanilla BERT architecture as an addition of a parallel external mapping to the words supersenses space, denoted S [see illustration in figure 1(b)]. Specifically, in section 3.2 we describe the loss function used for learning S in parallel to W, effectively implementing word-form and wordsense multi-task learning in the pre-training stage. Then, in section 3.3 we describe our methodology for adding supersense information in S to the initial Transformer embedding, in parallel to word-level information added by W. In section 3.4 we address the issue of supersense prediction for out-ofvocabulary words, and in section 3.5 we describe our modification of BERT’s masking strategy, prioritizing single-supersensed words which carry a clearer semantic signal. 3.1 Background The input to BERT is a sequence of words {x(j) ∈ {0, 1}DW }N j=1 where 15% of the words are re4658 + Wx(1) Wx(j) ywords Wx(N) p(1) p(j) p(N) (a) BERT (b) SenseBERT WT + + Wx(1) Wx(j) ywords Wx(N) SMx(1) SMx(j) SMx(N) p(1) p(j) p(N) W W WT ysenses ST S x(1) 1 j N x(N) [MASK] x(1) x(N) [MASK] Transformer encoder 1 j N Transformer encoder Figure 1: SenseBERT includes a masked-word supersense prediction task, pre-trained jointly with BERT’s original masked-word prediction task (Devlin et al., 2019) (see section 3.2). As in the original BERT, the mapping from the Transformer dimension to the external dimension is the same both at input and at output (W for words and S for supersenses), where M denotes a fixed mapping between word-forms and their allowed WordNet supersenses (see section 3.3). The vectors p(j) denote positional embeddings. For clarity, we omit a reference to a sentence-level Next Sentence Prediction task trained jointly with the above. placed by a [MASK] token (see treatment of subword tokanization in section 3.4). Here N is the input sentence length, DW is the word vocabulary size, and x(j) is a 1-hot vector corresponding to the jth input word. For every masked word, the output of the pretraining task is a word-score vector ywords ∈RDW containing the per-word score. BERT’s architecture can be decomposed to (1) an internal Transformer encoder architecture (Vaswani et al., 2017) wrapped by (2) an external mapping to the word vocabulary space, denoted by W.1 The Transformer encoder operates over a sequence of word embeddings v(j) input ∈Rd, where d is the Transformer encoder’s hidden dimension. These are passed through multiple attention-based Transformer layers, producing a new sequence of contextualized embeddings at each layer. The Transformer encoder output is the final sequence of contextualized word embeddings v(j) output ∈Rd. The external mapping W ∈Rd×DW is effectively a translation between the external word vocabulary dimension and the internal Transformer dimension. Original words in the input sentence are translated into the Transformer block by applying this mapping (and adding positional encoding vectors p(j) ∈Rd): v(j) input = Wx(j) + p(j) (1) 1For clarity, we omit a description of the Next Sentence Prediction task which we employ as in Devlin et al. (2019). The word-score vector for a masked word at position j is extracted from the Transformer encoder output by applying the transpose: ywords = W ⊤v(j) output [see illustration in figure 1(a)]. The use of the same matrix W as the mapping in and out of the transformer encoder space is referred to as weight tying (Inan et al., 2017; Press and Wolf, 2017). Given a masked word in position j, BERT’s original masked-word prediction pre-training task is to have the softmax of the word-score vector ywords = W ⊤v(j) output get as close as possible to a 1-hot vector corresponding to the masked word. This is done by minimizing the cross-entropy loss between the softmax of the word-score vector and a 1-hot vector corresponding to the masked word: LLM = −log p(w|context), (2) where w is the masked word, the context is composed of the rest of the input sequence, and the probability is computed by: p(w|context) = exp ywords w  P w′ exp ywords w′ , (3) where ywords w denotes the wth entry of the wordscore vector. 4659 3.2 Weakly-Supervised Supersense Prediction Task Jointly with the above procedure for training the word-level language model of SenseBERT, we train the model to predict the supersense of every masked word, thereby training a semantic-level language model. This is done by adding a parallel external mapping to the words supersenses space, denoted S ∈Rd×DS [see illustration in figure 1(b)], where DS = 45 is the size of supersenses vocabulary. Ideally, the objective is to have the softmax of the sense-score vector ysenses ∈RDS := S⊤v(j) output get as close as possible to a 1-hot vector corresponding to the word’s supersense in the given context. For each word w in our vocabulary, we employ the WordNet word-sense inventory for constructing A(w), the set of its “allowed” supersenses. Specifically, we apply a WordNet Lemmatizer on w, extract the different synsets that are mapped to the lemmatized word in WordNet, and define A(w) as the union of supersenses coupled to each of these synsets. As exceptions, we set A(w) = ∅for the following: (i) short words (up to 3 characters), since they are often treated as abbreviations, (ii) stop words, as WordNet does not contain their main synset (e.g. ‘he’ is either the element helium or the hebrew language according to WordNet), and (iii) tokens that represent part-of-word (see section 3.4 for further discussion on these tokens). Given the above construction, we employ a combination of two loss terms for the supersense-level language model. The following allowed-senses term maximizes the probability that the predicted sense is in the set of allowed supersenses of the masked word w: Lallowed SLM = −log p (s ∈A(w)|context) = −log X s∈A(w) p(s|context), (4) where the probability for a supersense s is given by: p(s|context) = exp(ysenses s ) P s′ exp(ysenses s′ ). (5) The soft-labeling scheme given above, which treats all the allowed supersenses of the masked word equally, introduces noise to the supersense labels. We expect that encountering many contexts in a sufficiently large corpus will reinforce the correct labels whereas the signal of incorrect labels will diminish. To illustrate this, consider the following examples for the food context: 1. “This bass is delicious” (supersenses: noun.food, noun.artifact, etc.) 2. “This chocolate is delicious” (supersenses: noun.food, noun.attribute, etc.) 3. “This pickle is delicious” (supersenses: noun.food, noun.state, etc.) Masking the marked word in each of the examples results in three identical input sequences, each with a different sets of labels. The ground truth label, noun.food, appears in all cases, so that its probability in contexts indicating food is increased whereas the signals supporting other labels cancel out. While Lallowed SLM pushes the network in the right direction, minimizing this loss could result in the network becoming overconfident in predicting a strict subset of the allowed senses for a given word, i.e., a collapse of the prediction distribution. This is especially acute in the early stages of the training procedure, when the network could converge to the noisy signal of the soft-labeling scheme. To mitigate this issue, the following regularization term is added to the loss, which encourages a uniform prediction distribution over the allowed supersenses: Lreg SLM = − X s∈A(w) 1 |A(w)| log p(s|context), (6) i.e., a cross-entropy loss with a uniform distribution over the allowed supersenses. Overall, jointly with the regular word level language model trained with the loss in eq. 2, we train the semantic level language model with a combined loss of the form: LSLM = Lallowed SLM + Lreg SLM . (7) 3.3 Supersense Aware Input Embeddings Though in principle two different matrices could have been used for converting in and out of the Tranformer encoder, the BERT architecture employs the same mapping W. This approach, referred to as weight tying, was shown to yield theoretical and pracrical benefits (Inan et al., 2017; Press and Wolf, 2017). Intuitively, constructing the Transformer encoder’s input embeddings from the same mapping with which the scores are computed improves their quality as it makes the input more sensitive to the training signal. 4660 Verb Supersenses Noun Supersenses Other (adv./adj.) Abstract Concrete Concrete - Entities (a) All Supersenses noun.object noun.substance noun.body noun.plant (b) Noun Supersenses noun.person noun.feeling noun.shape noun.attribute noun.location noun.group noun.animal noun.artifact noun.food Figure 2: UMAP visualization of supersense vectors (rows of the classifier S) learned by SenseBERT at pre-training. (a) Clustering by the supersense’s part-of speech. (b) Within noun supersenses, semantically similar supersenses are clustered together (see more details in the supplementary materials). We follow this approach, and insert our newly proposed semantic-level language model matrix S in the input in addition to W [as depicted in figure 1(b)], such that the input vector to the Transformer encoder (eq. 1) is modified to obey: v(j) input = (W + SM)x(j) + p(j), (8) where p(j) are the regular positional embeddings as used in BERT, and M ∈RDS×DW is a static 0/1 matrix converting between words and their allowed WordNet supersenses A(w) (see construction details above). The above strategy for constructing v(j) input allows for the semantic level vectors in S to come into play and shape the input embeddings even for words which are rarely observed in the training corpus. For such a word, the corresponding row in W is potentially less informative, since due to the low word frequency the model did not have sufficient chance to adequately learn it. However, since the model learns a representation of its supersense, the corresponding row in S is informative of the semantic category of the word. Therefore, the input embedding in eq. 8 can potentially help the model to elicit meaningful information even when the masked word is rare, allowing for better exploitation of the training corpus. 3.4 Rare Words Supersense Prediction At the pre-processing stage, when an out-ofvocabulary (OOV) word is encountered in the corpus, it is divided into several in-vocabulary subword tokens. For the self-supervised word prediction task (eq. 2) masked sub-word tokens are straightforwardly predicted as described in section 3.1. In contrast, word-sense supervision is only meaningful at the word level. We compare two alternatives for dealing with tokenized OOV words for the supersense prediction task (eq. 7). In the first alternative, called 60K vocabulary, we augment BERT’s original 30K-token vocabulary (which roughly contained the most frequent words) with additional 30K new words, chosen according to their frequency in Wikipedia. This vocabulary increase allows us to see more of the corpus as whole words for which supersense prediction is a meaningful operation. Additionally, in accordance with the discussion in the previous subsection, our sense-aware input embedding mechanism can help the model extract more information from lowerfrequency words. For the cases where a sub-word token is chosen for masking, we only propagate the regular word level loss and do not train the supersense prediction task. The above addition to the vocabulary results in an increase of approximately 23M parameters over the 110M parameters of BERTBASE and an increase of approximately 30M parameters over the 340M parameters of BERTLARGE (due to different embedding dimensions d = 768 and d = 1024, respectively). It is worth noting that similar vocabulary sizes in leading models have not resulted in increased sense awareness, as reflected for example in the WiC task results (Liu et al., 2019). As a second alternative, referred to as average embedding, we employ BERT’s regular 30K-token 4661 (a) (b) Dan cooked a bass on the grill. The [MASK] fell to the floor. The bass player was exceptional. noun.artifact verb.creation noun.food noun.person noun.person adj.all noun.artifact noun.artifact (sword, chair, ...) noun.person (man, girl, ...) 52% 17% Gill [MASK] the bread. verb.contact (cut, buttered, ...) verb.consumption (ate, chewed, ...) verb.change (heated, baked, ...) verb.possession (took, bought, ...) 33% 20% 11% 6% Figure 3: (a) A demonstration of supersense probabilities assigned to a masked position within context, as given by SenseBERT’s word-supersense level semantic language model (capped at 5%). Example words corresponding to each supersense are presented in parentheses. (b) Examples of SenseBERT’s prediction on raw text, when the unmasked input sentence is given to the model. This beyond word-form abstraction ability facilitates a more natural elicitation of semantic content at pre-training. vocabulary and employ a whole-word-masking strategy. Accordingly, all of the tokens of a tokenized OOV word are masked together. In this case, we train the supersense prediction task to predict the WordNet supersenses of this word from the average of the output embeddings at the location of the masked sub-words tokens. 3.5 Single-Supersensed Word Masking Words that have a single supersense are good anchors for obtaining an unambiguous semantic signal. These words teach the model to accurately map contexts to supersenses, such that it is then able to make correct context-based predictions even when a masked word has several supersenses. We therefore favor such words in the masking strategy, choosing 50% of the single-supersensed words in each input sequence to be masked. We stop if 40% of the overall 15% masking budget is filled with single-supersensed words (this rarly happens), and in any case we randomize the choice of the remaining words to complete this budget. As in the original BERT, 1 out of 10 words chosen for masking is shown to the model as itself rather than replaced with [MASK]. 4 Semantic Language Model Visualization A SenseBERT pretrained as described in section 3 (with training hyperparameters as in Devlin et al. (2019)), has an immediate non-trivial bi-product. The pre-trained mapping to the supersenses space, denoted S, acts as an additional head predicting a word’s supersense given context [see figure 1(b)]. We thereby effectively attain a semantic-level lanSenseBERTBASE SemEval-SS Fine-tuned 30K no OOV 81.9 30K average OOV 82.7 60K no OOV 83 Table 1: Testing variants for predicting supersenses of rare words during SenseBERT’s pretraining, as described in section 5.1. Results are reported on the SemEval-SS task (see section 5.2). 30K/60K stand for vocabulary size, and no/average OOV stand for not predicting senses for OOV words or predicting senses from the average of the sub-word token embeddings, respectively. guage model that predicts the missing word’s meaning jointly with the standard word-form level language model. We illustrate the resultant mapping in figure 2, showing a UMAP dimensionality reduction (McInnes et al., 2018) of the rows of S, which corresponds to the different supersenses. A clear clustering according to the supersense partof-speech is apparent in figure 2(a). We further identify finer-grained semantic clusters, as shown for example in figure 2(b) and given in more detail in the supplementary materials. SenseBERT’s semantic language model allows predicting a distribution over supersenses rather than over words in a masked position. Figure 3(a) shows the supersense probabilities assigned by SenseBERT in several contexts, demonstrating the model’s ability to assign semantically meaningful categories to the masked position. Finally, we demonstrate that SenseBERT enjoys 4662 (a) SemEval-SS (b) WiC The team used a battery of the newly developed “gene probes” BERT SenseBERT noun.artifact noun.group noun.quantity noun.body Same Different Ten shirt-sleeved ringers stand in a circle, one foot ahead of the other in a prize-fighter's stance Sent. A: The kick must be synchronized with the arm movements. Sent. B: A sidecar is a smooth drink but it has a powerful kick. Different Same Sent. A: Plant bugs in the dissident’s apartment. Sent. B: Plant a spy in Moscow. Figure 4: Example entries of (a) the SemEval-SS task, where a model is to predict the supersense of the marked word, and (b) the Word in Context (WiC) task where a model must determine whether the underlined word is used in the same/different supersense within sentences A and B. In all displayed examples, taken from the corresponding development sets, SenseBERT predicted the correct label while BERT failed to do so. A quantitative comparison between models is presented in table 2. an ability to view raw text at a lexical semantic level. Figure 3(b) shows example sentences and their supersense prediction by the pretrained model. Where a vanilla BERT would see only the words of the sentence “Dan cooked a bass on the grill”, SenseBERT would also have access to the supersense abstraction: “[Person] [created] [food] on the [artifact]”. This sense-level perspective can help the model extract more knowledge from every training example, and to generalize semantically similar notions which do not share the same phrasing. 5 Lexical Semantics Experiments In this section, we present quantitative evaluations of SenseBERT, pre-trained as described in section 3. We test the model’s performance on a supersense-based variant of the SemEval WSD test sets standardized in Raganato et al. (2017), and on the Word in Context (WiC) task (Pilehvar and Camacho-Collados, 2019) (included in the recently introduced SuperGLUE benchmark (Wang et al., 2019)), both directly relying on the network’s ability to perform lexical semantic categorization. 5.1 Comparing Rare Words Supersense Prediction Methods We first report a comparison of the two methods described in section 3.4 for predicting the supersenses of rare words which do not appear in BERT’s original vocabulary. The first 60K vocabulary method enriches the vocabulary and the second average embedding method predicts a supersense from the average embeddings of the sub-word tokens comprising an OOV word. During fine-tuning, when encountering an OOV word we predict the supersenses from the rightmost sub-word token in the 60K vocabulary method and from the average of the sub-word tokens in the average embedding method. As shown in table 1, both methods perform comparably on the SemEval supersense disambiguation task (see following subsection), yielding an improvement over the baseline of learning supersense information only for whole words in BERT’s original 30K-token vocabulary. We continue with the 60K-token vocabulary for the rest of the experiments, but note the average embedding option as a viable competitor for predicting word-level semantics. 5.2 SemEval-SS: Supersense Disambiguation We test SenseBERT on a Word Supersense Disambiguation task, a coarse grained variant of the common WSD task. We use SemCor (Miller et al., 1993) as our training dataset (226, 036 annotated examples), and the SenseEval (Edmonds and Cotton, 2001; Snyder and Palmer, 2004) / SemEval (Pradhan et al., 2007; Navigli et al., 2013; Moro and Navigli, 2015) suite for evaluation (overall 7253 annotated examples), following Raganato et al. (2017). For each word in both training and test sets, we change its fine-grained sense label to its corresponding WordNet supersense, and therefore train the network to predict a given word’s supersense. We name this Supersense disambiguation task SemEval-SS. See figure 4(a) for an example 4663 SemEval-SS Frozen SemEval-SS Fine-tuned Word in Context BERTBASE 65.1 79.2 – BERTLARGE 67.3 81.1 69.6 SenseBERTBASE 75.6 83.0 70.3 SenseBERTLARGE 79.5 83.7 72.1 Table 2: Results on a supersense variant of the SemEval WSD test set standardized in Raganato et al. (2017), which we denote SemEval-SS, and on the Word in Context (WiC) dataset (Pilehvar and Camacho-Collados, 2019) included in the recently introduced SuperGLUE benchmark (Wang et al., 2019). These tasks require a high level of lexical semantic understanding, as can be seen in the examples in figure 4. For both tasks, SenseBERT demonstrates a clear improvement over BERT in the regular fine-tuning setup, where network weights are modified during training on the task. Notably, SenseBERTLARGE achieves state of the art performance on the WiC task. In the SemEval-SS Frozen setting, we train a linear classifier over pretrained embeddings, without changing the network weights. The results show that SenseBERT introduces a dramatic improvement in this setting, implying that its word-sense aware pre-training (section 3) yields embeddings that carries lexical semantic information which is easily extractable for the benefits of downstream tasks. Results for BERT on the SemEval-SS task are attained by employing the published pre-trained BERT models, and the BERTLARGE result on WiC is taken from the baseline scores published on the SuperGLUE benchmark (Wang et al., 2019) (no result has been published for BERTBASE). Word in Context ELMo† 57.7 BERT sense embeddings †† 67.7 BERTLARGE‡ 69.6 RoBERTa‡‡ 69.9 KnowBERT-W+W⋄ 70.9 SenseBERT 72.1 Table 3: Test set results for the WiC dataset. †Pilehvar and Camacho-Collados (2019) ††Loureiro and Jorge (2019) ‡Wang et al. (2019) ‡‡Liu et al. (2019) ⋄Peters et al. (2019) from this modified data set. We show results on the SemEval-SS task for two different training schemes. In the first, we trained a linear classifier over the ‘frozen’ output embeddings of the examined model – we do not change the the trained SenseBERT’s parameters in this scheme. This Frozen setting is a test for the amount of basic lexical semantics readily present in the pre-trained model, easily extricable by further downstream tasks (reminiscent of the semantic probes employed in Hewitt and Manning (2019); Reif et al. (2019). In the second training scheme we fine-tuned the examined model on the task, allowing its parameters to change during training (see full training details in the supplementary materials). Results attained by employing this training method reflect the model’s potential to acquire word-supersense information given its pre-training. Table 2 shows a comparison between vanilla BERT and SenseBERT on the supersense disambiguation task. Our semantic level pretraining signal clearly yields embeddings with enhanced word-meaning awareness, relative to embeddings trained with BERT’s vanilla wordlevel signal. SenseBERTBASE improves the score of BERTBASE in the Frozen setting by over 10 points and SenseBERTLARGE improves that of BERTLARGE by over 12 points, demonstrating competitive results even without fine-tuning. In the setting of model fine-tuning, we see a clear demonstration of the model’s ability to learn word-level semantics, as SenseBERTBASE surpasses the score of BERTLARGE by 2 points. 5.3 Word in Context (WiC) Task We test our model on the recently introduced WiC binary classification task. Each instance in WiC has a target word w for which two contexts are provided, each invoking a specific meaning of w. The task is to determine whether the occurrences of w in the two contexts share the same meaning or not, clearly requiring an ability to identify the word’s semantic category. The WiC task is defined over supersenses (Pilehvar and Camacho-Collados, 2019) – the negative examples include a word used in two different supersenses and the positive ones include a word used in the same supersense. See figure 4(b) for an example from this data set. 4664 Score CoLA SST-2 MRPC STS-B QQP MNLI QNLI RTE BERTBASE (OURS) 77.5 50.1 92.6 88.7/84.3 85.7/84.6 71.0/88.9 83.6 89.4 67.9 SenseBERTBASE 77.9 54.6 92.2 89.2/85.2 83.5/82.3 70.3/88.8 83.6 90.6 67.5 Table 4: Results on the GLUE benchmark test set. Results on the WiC task comparing SenseBERT to vanilla BERT are shown in table 2. SenseBERTBASE surpasses a larger vanilla model, BERTLARGE. As shown in table 3, a single SenseBERTLARGE model achieves the state of the art score in this task, demonstrating unprecedented lexical semantic awareness. 5.4 GLUE The General Language Understanding Evaluation (GLUE; Wang et al. (2018)) benchmark is a popular testbed for language understanding models. It consists of 9 different NLP tasks, covering different linguistic phenomena. We evaluate our model on GLUE, in order to verify that SenseBERT gains its lexical semantic knowledge without compromising performance on other downstream tasks. Due to slight differences in the data used for pretraining BERT and SenseBERT (BookCorpus is not publicly available), we trained a BERTBASE model with the same data used for our models. BERTBASE and SenseBERTBASE were both finetuned using the exact same procedures and hyperparameters. The results are presented in table 4. Indeed, SenseBERT performs on par with BERT, achieving an overall score of 77.9, compared to 77.5 achieved by BERTBASE. 6 Conclusion We introduce lexical semantic information into a neural language model’s pre-training objective. This results in a boosted word-level semantic awareness of the resultant model, named SenseBERT, which considerably outperforms a vanilla BERT on a SemEval based Supersense Disambiguation task and achieves state of the art results on the Word in Context task. This improvement was obtained without human annotation, but rather by harnessing an external linguistic knowledge source. Our work indicates that semantic signals extending beyond the lexical level can be similarly introduced at the pre-training stage, allowing the network to elicit further insight without human supervision. Acknowledgments We acknowledge useful comments and assistance from our colleagues at AI21 Labs. We would also like to thank the anonymous reviewers for their valuable feedback. References Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018. Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics, 6:483–495. Pierpaolo Basile. 2012. Super-sense tagging using support vector machines and distributional features. In International Workshop on Evaluation of Natural Language and Speech Tool for Italian, pages 176– 185. Springer. William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, and Jakob Uszkoreit. 2019. KERMIT: Generative insertion-based modeling for sequences. arXiv preprint arXiv:1906.01604. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025–1035, Doha, Qatar. Association for Computational Linguistics. Massimiliano Ciaramita and Mark Johnson. 2003. Supersense tagging of unknown nouns in WordNet. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 168– 175. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Philip Edmonds and Scott Cotton. 2001. SENSEVAL2: Overview. In Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 1–5, Toulouse, France. Association for Computational Linguistics. 4665 John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics. Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 897–907, Berlin, Germany. Association for Computational Linguistics. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2017. Tying word vectors and word classifiers: A loss framework for language modeling. In ICLR. Adam Kilgarriff. 1997. I don’t believe in word senses. Computers and the Humanities, 31(2):91–113. Minh Le, Marten Postma, Jacopo Urbani, and Piek Vossen. 2018. A deep dive into word sense disambiguation with LSTM. In Proceedings of the 27th International Conference on Computational Linguistics, pages 354–365, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Daniel Loureiro and Al´ıpio Jorge. 2019. Language modelling makes sense: Propagating representations through WordNet for full-coverage word sense disambiguation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5682–5691, Florence, Italy. Association for Computational Linguistics. Leland McInnes, John Healy, and James Melville. 2018. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. George A Miller. 1998. WordNet: An electronic lexical database. MIT press. George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993. Andrea Moro and Roberto Navigli. 2015. SemEval2015 task 13: Multilingual all-words sense disambiguation and entity linking. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 288–297, Denver, Colorado. Association for Computational Linguistics. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Comput. Surv., 41(2). Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. SemEval-2013 task 12: Multilingual word sense disambiguation. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 222–231, Atlanta, Georgia, USA. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 43–54, Hong Kong, China. Association for Computational Linguistics. Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267–1273, Minneapolis, Minnesota. Association for Computational Linguistics. Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. SemEval-2007 task-17: English lexical sample, SRL and all words. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 87–92, Prague, Czech Republic. Association for Computational Linguistics. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings 4666 of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 99–110, Valencia, Spain. Association for Computational Linguistics. Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of BERT. In Advances in Neural Information Processing Systems 32, pages 8594–8603. Curran Associates, Inc. Sascha Rothe and Hinrich Sch¨utze. 2015. AutoExtend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1793–1803, Beijing, China. Association for Computational Linguistics. Nathan Schneider. 2014. Lexical semantic analysis in natural language text. Unpublished Doctoral Dissertation, Carnegie Mellon University. Nathan Schneider and Noah A. Smith. 2015. A corpus and model integrating multiword expressions and supersenses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1537–1547, Denver, Colorado. Association for Computational Linguistics. Benjamin Snyder and Martha Palmer. 2004. The English all-words task. In Proceedings of SENSEVAL-3, the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 41–43, Barcelona, Spain. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems 32, pages 3266–3280. Curran Associates, Inc. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32, pages 5753–5763. Curran Associates, Inc. Dayu Yuan, Julian Richardson, Ryan Doherty, Colin Evans, and Eric Altendorf. 2016. Semi-supervised word sense disambiguation with neural models. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1374–1385, Osaka, Japan. The COLING 2016 Organizing Committee. A Supersenses and Their Representation in SenseBERT We present in table 5 a comprehensive list of WordNet supersenses, as they appear in the WordNet documentation. In fig. 5 we present a Dendrogram of an Agglomerative hierarchical clustering over the supersense embedding vectors learned by SenseBERT in pre-training. The clustering shows a clear separation between Noun senses and Verb senses. Furthermore, we can observe that semantically related supersenses are clustered together (i.e, noun.animal and noun.plant). B Training Details As hyperparameters for the fine-tuning, we used max seq length = 128, chose learning rates from {5e−6, 1e−5, 2e−5, 3e−5, 5e−5}, batch sizes from {16, 32}, and fine-tuned up to 10 epochs for all the datasets. 4667 Nouns Verbs verb.consumption verb.body verb.emotion verb.weather verb.change verb.stative verb.creation verb.perception verb.cognition verb.communication verb.possession verb.social verb.motion verb.competition verb.contact noun.event noun.phenomenon noun.possession noun.feeling noun.shape noun.process adj.ppl noun.motive noun.food noun.object noun.body noun.animal noun.plant noun.time noun.quantity noun.substance noun.artifact noun.act noun.communication adj.all adv.all adj.pert null noun.group noun.location noun.person noun.state noun.cognition noun.attribute noun.relation Figure 5: Dendrogram visualization of an Agglomerative hierarchical clustering over the supersense vectors (rows of the classifier S) learned by SenseBERT. Name Content Name Content adj.all All adjective clusters noun.quantity Nouns denoting quantities and units of measure adj.pert Relational adjectives (pertainyms) noun.relation Nouns denoting relations between people or things or ideas adv.all All adverbs noun.shape Nouns denoting two and three dimensional shapes noun.Tops Unique beginner for nouns noun.state Nouns denoting stable states of affairs noun.act Nouns denoting acts or actions noun.substance Nouns denoting substances noun.animal Nouns denoting animals noun.time Nouns denoting time and temporal relations noun.artifact Nouns denoting man-made objects verb.body Verbs of grooming, dressing and bodily care noun.attribute Nouns denoting attributes of people verb.change Verbs of size, temperature change, and objects intensifying, etc. noun.body Nouns denoting body parts verb.cognition Verbs of thinking, judging, analyzing, doubting noun.cognition Nouns denoting cognitive verb.communication Verbs of telling, asking, ordering, processes and contents singing noun.communication Nouns denoting communicative verb.competition Verbs of fighting, athletic activities processes and contents noun.event Nouns denoting natural events verb.consumption Verbs of eating and drinking noun.feeling Nouns denoting feelings verb.contact Verbs of touching, hitting, tying, and emotions digging noun.food Nouns denoting foods and drinks verb.creation Verbs of sewing, baking, painting, performing noun.group Nouns denoting groupings of people verb.emotion Verbs of feeling or objects noun.location Nouns denoting spatial position verb.motion Verbs of walking, flying, swimming noun.motive Nouns denoting goals verb.perception Verbs of seeing, hearing, feeling noun.object Nouns denoting natural objects verb.possession Verbs of buying, selling, owning (not man-made) noun.person Nouns denoting people verb.social Verbs of political and social activities and events noun.phenomenon Nouns denoting natural phenomena verb.stative Verbs of being, having, spatial relations noun.plant Nouns denoting plants verb.weather Verbs of raining, snowing, thawing, thundering noun.possession Nouns denoting possession adj.ppl Participial adjectives and transfer of possession noun.process Nouns denoting natural processes Table 5: A list of supersense categories from WordNet lexicographer.
2020
423
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4668–4679 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4668 ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations Fernando Alva-Manchego1∗and Louis Martin2,3∗and Antoine Bordes3 Carolina Scarton1 and Benoˆıt Sagot2 and Lucia Specia1,4 1University of Sheffield, 2Inria, 3Facebook AI Research, 4Imperial College London [email protected], [email protected], [email protected] [email protected], [email protected] [email protected] Abstract In order to simplify a sentence, human editors perform multiple rewriting transformations: they split it into several shorter sentences, paraphrase words (i.e. replacing complex words or phrases by simpler synonyms), reorder components, and/or delete information deemed unnecessary. Despite these varied range of possible text alterations, current models for automatic sentence simplification are evaluated using datasets that are focused on a single transformation, such as lexical paraphrasing or splitting. This makes it impossible to understand the ability of simplification models in more realistic settings. To alleviate this limitation, this paper introduces ASSET, a new dataset for assessing sentence simplification in English. ASSET is a crowdsourced multi-reference corpus where each simplification was produced by executing several rewriting transformations. Through quantitative and qualitative experiments, we show that simplifications in ASSET are better at capturing characteristics of simplicity when compared to other standard evaluation datasets for the task. Furthermore, we motivate the need for developing better methods for automatic evaluation using ASSET, since we show that current popular metrics may not be suitable when multiple simplification transformations are performed. 1 Introduction Sentence Simplification (SS) consists in modifying the content and structure of a sentence to make it easier to understand, while retaining its main idea and most of its original meaning (Alva-Manchego et al., 2020). Simplified texts can benefit non-native speakers (Paetzold, 2016), people suffering from aphasia (Carroll et al., 1998), dyslexia (Rello et al., 2013) or autism (Evans et al., 2014). They also help language processing tasks, such as parsing (Chandrasekar et al., 1996), summarisation (Silveira and ∗Equal Contribution Branco, 2012), and machine translation (Hasler et al., 2017). In order simplify a sentence, several rewriting transformations can be performed: replacing complex words/phrases with simpler synonyms (i.e. lexical paraphrasing), changing the syntactic structure of the sentence (e.g. splitting), or removing superfluous information that make the sentence more complicated (Petersen, 2007; Alu´ısio et al., 2008; Bott and Saggion, 2011). However, models for automatic SS are evaluated on datasets whose simplifications are not representative of this variety of transformations. For instance, TurkCorpus (Xu et al., 2016), a standard dataset for assessment in SS, contains simplifications produced mostly by lexical paraphrasing, while reference simplifications in HSplit (Sulem et al., 2018a) focus on splitting sentences. The Newsela corpus (Xu et al., 2015) contains simplifications produced by professionals applying multiple rewriting transformations, but sentence alignments are automatically computed and thus imperfect, and its data can only be accessed after signing a restrictive publicsharing licence and cannot be redistributed, hampering reproducibility. These limitations in evaluation data prevent studying models’ capabilities to perform a broad range of simplification transformations. Even though most SS models are trained on simplification instances displaying several text transformations (e.g. WikiLarge (Zhang and Lapata, 2017)), we currently do not measure their performance in more abstractive scenarios, i.e. cases with substantial modifications to the original sentences. In this paper we introduce ASSET (Abstractive Sentence Simplification Evaluation and Tuning), a new dataset for tuning and evaluation of automatic SS models. ASSET consists of 23,590 human simplifications associated with the 2,359 original sentences from TurkCorpus (10 simplifications per 4669 original sentence). Simplifications in ASSET were collected via crowdsourcing (§ 3), and encompass a variety of rewriting transformations (§ 4), which make them simpler than those in TurkCorpus and HSplit (§ 5), thus providing an additional suitable benchmark for comparing and evaluating automatic SS models. In addition, we study the applicability of standard metrics for evaluating SS using simplifications in ASSET as references (§ 6). We analyse whether BLEU (Papineni et al., 2002) or SARI (Xu et al., 2016) scores correlate with human judgements of fluency, adequacy and simplicity, and find that neither of the metrics shows a strong correlation with simplicity ratings. This motivates the need for developing better metrics for assessing SS when multiple rewriting transformations are performed. We make the following contributions: • A high quality large dataset for tuning and evaluation of SS models containing simplifications produced by applying multiple rewriting transformations.1 • An analysis of the characteristics of the dataset that turn it into a new suitable benchmark for evaluation. • A study questioning the suitability of popular metrics for evaluating automatic simplifications in a multiple-transformation scenario. 2 Related Work 2.1 Studies on Human Simplification A few corpus studies have been carried out to analyse how humans simplify sentences, and to attempt to determine the rewriting transformations that are performed. Petersen and Ostendorf (2007) analysed a corpus of 104 original and professionally simplified news articles in English. Sentences were manually aligned and each simplification instance was categorised as dropped (1-to-0 alignment), split (1-to-N), total (1-to-1) or merged (2-to-1). Some splits were further sub-categorised as edited (i.e. the sentence was split and some part was dropped) or different (i.e. same information but very different wording). This provides evidence that sentence splitting and deletion of information can be performed simultaneously. 1ASSET is released with a CC-BY-NC license at https://github.com/facebookresearch/ asset. Alu´ısio et al. (2008) studied six corpora of simple texts (different genres) and a corpus of complex news texts in Brazilian Portuguese, to produce a manual for Portuguese text simplification (Specia et al., 2008). It contains several rules to perform the task focused on syntactic alterations: to split adverbial/coordinated/subordinated sentences, to reorder clauses to a subject-verb-object structure, to transform passive to active voice, among others. Bott and Saggion (2011) worked with a dataset of 200 news articles in Spanish with their corresponding manual simplifications. After automatically aligning the sentences, the authors determined the simplification transformations performed: change (e.g. difficult words, pronouns, voice of verb), delete (words, phrases or clauses), insert (word or phrases), split (relative clauses, coordination, etc.), proximisation (add locative phrases, change from third to second person), reorder, select, and join (sentences). From all these studies, it can be argued that the scope of rewriting transformations involved in the simplification process goes beyond only replacing words with simpler synonyms. In fact, human perception of complexity is most affected by syntactic features related to sentence structure (Brunato et al., 2018). Therefore, since human editors make several changes to both the lexical content and syntactic structure of sentences when simplifying them, we should expect that models for automatic sentence simplification can also make such changes. 2.2 Evaluation Data for SS Most datasets for SS (Zhu et al., 2010; Coster and Kauchak, 2011; Hwang et al., 2015) consist of automatic sentence alignments between related articles in English Wikipedia (EW) and Simple English Wikipedia (SEW). In SEW, contributors are asked to write texts using simpler language, such as by shortening sentences or by using words from Basic English (Ogden, 1930). However, Yasseri et al. (2012) found that the syntactic complexity of sentences in SEW is almost the same as in EW. In addition, Xu et al. (2015) determined that automaticallyaligned simple sentences are sometimes just as complex as their original counterparts, with only a few words replaced or dropped and the rest of the sentences left unchanged. More diverse simplifications are available in the Newsela corpus (Xu et al., 2015), a dataset of 1,130 news articles that were each manually simplified 4670 to up to 5 levels of simplicity. The parallel articles can be automatically aligned at the sentence level to train and test simplification models (AlvaManchego et al., 2017; ˇStajner et al., 2018). However, the Newsela corpus can only be accessed after signing a restrictive license that prevents publicly sharing train/test splits of the dataset, which impedes reproducibility. Evaluating models on automatically-aligned sentences is problematic. Even more so if only one (potentially noisy) reference simplification for each original sentence is available. With this concern in mind, Xu et al. (2016) collected the TurkCorpus, a dataset with 2,359 original sentences from EW, each with 8 manual reference simplifications. The dataset is divided into two subsets: 2,000 sentences for validation and 359 for testing of sentence simplification models. TurkCorpus is suitable for automatic evaluation that involves metrics requiring multiple references, such as BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016). However, Xu et al. (2016) focused on simplifications through lexical paraphrasing, instructing annotators to rewrite sentences by reducing the number of difficult words or idioms, but without deleting content or splitting the sentences. This prevents evaluating a model’s ability to perform a more diverse set of rewriting transformations when simplifying sentences. HSplit (Sulem et al., 2018a), on the other hand, provides simplifications involving only splitting for sentences in the test set of TurkCorpus. We build on TurkCorpus and HSplit by collecting a dataset that provides several manuallyproduced simplifications involving multiple types of rewriting transformations. 2.3 Crowdsourcing Manual Simplifications A few projects have been carried out to collect manual simplifications through crowdsourcing. Pellow and Eskenazi (2014a) built a corpus of everyday documents (e.g. driving test preparation materials), and analysed the feasibly of crowdsourcing their sentence-level simplifications. Of all the quality control measures taken, the most successful was providing a training session to workers, since it allowed to block spammers and those without the skills to perform the task. Additionally, they proposed to use workers’ self-reported confidence scores to flag submissions that could be discarded or reviewed. Later on, Pellow and Eskenazi (2014b) presented a preliminary study on producing simplifications through a collaborative process. Groups of four workers were assigned one sentence to simplify, and they had to discuss and agree on the process to perform it. Unfortunately, the data collected in these studies is no longer publicly available. Simplifications in TurkCorpus were also collected through crowdsourcing. Regarding the methodology followed, Xu et al. (2016) only report removing bad workers after manual check of their first several submissions. More recently, Scarton et al. (2018) used volunteers to collect simplifications for SimPA, a dataset with sentences from the Public Administration domain. One particular characteristic of the methodology followed is that lexical and syntactic simplifications were performed independently. 3 Creating ASSET We extended TurkCorpus (Xu et al., 2016) by using the same original sentences, but crowdsourced manual simplifications that encompass a richer set of rewriting transformations. Since TurkCorpus was adopted as the standard dataset for evaluating SS models, several system outputs on this data are already publicly available (Zhang and Lapata, 2017; Zhao et al., 2018; Martin et al., 2020). Therefore, we can now assess the capabilities of these and other systems in scenarios with varying simplification expectations: lexical paraphrasing with TurkCorpus, sentence splitting with HSplit, and multiple transformations with ASSET. 3.1 Data Collection Protocol Manual simplifications were collected using Amazon Mechanical Turk (AMT). AMT allows us to publish HITs (Human Intelligence Tasks), which workers can choose to work on, submit an answer, and collect a reward if the work is approved. This was also the platform used for TurkCorpus. Worker Requirements. Participants were workers who: (1) have a HIT approval rate >= 95%; (2) have a number of HITs approved > 1000; (3) are residents of the United States of America, the United Kingdom or Canada; and (4) passed the corresponding Qualification Test designed for our task (more details below). The first two requirements are measured by the AMT platform and ensure that the workers have experience on different tasks and have had most of their work approved by previous requesters. The last two requirements are intended 4671 Original Their eyes are quite small, and their visual acuity is poor. TurkCorpus Their eyes are very little, and their sight is inferior. HSplit Their eyes are quite small. Their visual acuity is poor as well. ASSET They have small eyes and poor eyesight. Original His next work, Saturday, follows an especially eventful day in the life of a successful neurosurgeon. TurkCorpus His next work at Saturday will be a successful Neurosurgeon. HSplit His next work was Saturday. It follows an especially eventful day in the life of a successful Neurosurgeon. ASSET ”Saturday” records a very eventful day in the life of a successful neurosurgeon. Original He settled in London, devoting himself chiefly to practical teaching. TurkCorpus He rooted in London, devoting himself mainly to practical teaching. HSplit He settled in London. He devoted himself chiefly to practical teaching. ASSET He lived in London. He was a teacher. Table 1: Examples of simplifications collected for ASSET together with their corresponding version from TurkCorpus and HSplit for the same original sentences. to ensure that the workers have a proficient level of English, and are capable of performing the simplification task. Qualification Test. We provided a training session to workers in the form of a Qualification Test (QT). Following Pellow and Eskenazi (2014a), we showed them explanations and examples of multiple simplification transformations (see details below). Each HIT consisted of three sentences to simplify, and all submissions were manually checked to filter out spammers and workers who could not perform the task correctly. The sentences used in this stage were extracted from the QATS dataset (ˇStajner et al., 2016). We had 100 workers take the QT, out of which 42 passed the test (42%) and worked on the task. Annotation Round. Workers who passed the QT had access to this round. Similar to Pellow and Eskenazi (2014a), each HIT now consisted of four original sentences that needed to be simplified. In addition to the simplification of each sentence, workers were asked to submit confidence scores on their simplifications using a 5-point likert scale (1:Very Low, 5:Very High). We collected 10 simplifications (similar to Pellow and Eskenazi (2014a)) for each of the 2,359 original sentences in TurkCorpus. Simplification Instructions. For both the QT and the Annotation Round, workers received the same set of instructions about how to simplify a sentence. We provided examples of lexical paraphrasing (lexical simplification and reordering), sentence splitting, and compression (deleting unimportant information). We also included an example where all transformations were performed. However, we clarified that it was at their discretion to decide which types of rewriting to execute in any given original sentence.2 Table 1 presents a few examples of simplifications in ASSET, together with references from TurkCorpus and HSplit, randomly sampled for the same original sentences. It can be noticed that annotators in ASSET had more freedom to change the structure of the original sentences. 3.2 Dataset Statistics ASSET contains 23,590 human simplifications associated with the 2,359 original sentences from TurkCorpus (2,000 from the validation set and 359 from the test set). Table 2 presents some general statistics from simplifications in ASSET. We show the same statistics for TurkCorpus and HSplit for comparison.3 In addition to having more references per original sentence, ASSET’s simplifications offer more variability, for example containing many more instances of natural sentence splitting than TurkCorpus. In addition, reference simplifications are shorter on average in ASSET, given that we allowed annotators to delete information that they considered unnecessary. In the next section, we further compare these datasets with more detailed text features. 4 Rewriting Transformations in ASSET We study the simplifications collected for ASSET through a series of text features to measure the 2Full instructions are available in the dataset’s repository. 3HSplit is composed of two sets of simplifications: one where annotators were asked to split sentences as much as they could, and one where they were asked to split the original sentence only if it made the simplification easier to read and understand. However, we consider HSplit as a whole because differences between datasets far outweigh differences between these two sets. 4672 ASSET TurkCorpus HSplit Original Sentences 2,359 2,359 359 Num. of References 10 8 4 Type of Simp. Instances 1-to-1 17,245 18,499 408 1-to-N 6,345 373 1,028 Tokens per Reference 19.04 21.29 25.49 Table 2: General surface statistics for ASSET compared with TurkCorpus and HSplit. A simplification instance is an original-simplified sentence pair. abstractiveness of the rewriting transformations performed by the annotators. From here on, the analysis and statistics reported refer to the test set only (i.e. 359 original sentences), so that we can fairly compare ASSET, TurkCorpus and HSplit. 4.1 Text Features In order to quantify the rewriting transformations, we computed several low-level features for all simplification instances using the tseval package (Martin et al., 2018): • Number of sentence splits: Corresponds to the difference between the number of sentences in the simplification and the number of sentences in the original sentence. In tseval, the number of sentences is calculated using NLTK (Loper and Bird, 2002). • Compression level: Number of characters in the simplification divided by the number of characters in the original sentence. • Replace-only Levenshtein distance: Computed as the normalised character-level Levenshtein distance (Levenshtein, 1966) for replace operations only, between the original sentence and the simplification. Replace-only Levenshtein distance is computed as follows (with o the original sentence and s the simplification): replace ops(o, s) min(len(o), len(s)) We do not consider insertions and deletions in the Levenshtein distance computation so that this feature is independent from the compression level. It therefore serves as a proxy for measuring the lexical paraphrases of the simplification. • Proportion of words deleted, added and reordered: Number of words deleted/reordered from the original sentence divided by the number of words in the original sentence; and the number of words that were added to the original sentence divided by the number of words in the simplification. • Exact match: Boolean feature that equals to true when the original sentence and the simplification are exactly the same, to account for unchanged sentences. • Word deletion only: Boolean feature that equals to true when the simplification is obtained only by deleting words from the original sentence. This feature captures extractive compression. • Lexical complexity score ratio: We compute the score as the mean squared log-ranks of content words in a sentence (i.e. without stopwords). We use the 50k most frequent words of the FastText word embeddings vocabulary (Bojanowski et al., 2016). This vocabulary was originally sorted with frequencies of words in the Common Crawl. This score is a proxy to the lexical complexity of the sentence given that word ranks (in a frequency table) have been shown to be best indicators of word complexity (Paetzold and Specia, 2016). The ratio is then the value of this score on the simplification divided by that of the original sentence. • Dependency tree depth ratio: We compute the ratio of the depth of the dependency parse tree of the simplification relative to that of the original sentence. When a simplification is composed by more than one sentence, we choose the maximum depth of all dependency trees. Parsing is performed using spaCy.4 This feature serves as a proxy to measure improvements in structural simplicity. Each feature was computed for all simplification instances in the dataset and then aggregated as a histogram (Figure 1) and as a percentage (Table 3). 4.2 Results and Analysis Figure 1 shows the density of all features in ASSET, and compares them with those in TurkCorpus and 4github.com/explosion/spaCy 4673 0 1 2 3 sentence splits 0 2 4 6 density Sentence splits 0.5 1.0 1.5 compression level 0 2 4 6 Compression levels 0 20 40 60 Levenshtein distance 0.00 0.05 0.10 0.15 replace-only Levenshtein Distance 0.0 0.2 0.4 0.6 0.8 deleted words percentage 0 2 4 6 8 Deleted words (%) 0.0 0.2 0.4 0.6 added words percentage 0 2 4 6 density Added words (%) 0.0 0.5 1.0 1.5 2.0 reordered words percentage 0 2 4 6 Reordered words (%) 0.8 0.9 1.0 1.1 ratio 0 10 20 30 Lexical complexity score ratio 0.5 1.0 1.5 ratio 0 2 4 6 Dep. tree depth ratio HSplit TurkCorpus ASSET Figure 1: Density of text features in simplifications from HSplit, TurkCorpus, and ASSET. ASSET TurkCorpus HSplit Sentence Splitting 20.2% 4.6% 68.2% Compression (<75%) 31.2% 9.9% 0.1% Word Reordering 28.3% 19.4% 10.1% Exact Match 0.4% 16.3% 26.5% Word Deletion Only 4.5% 3.9% 0.0% Table 3: Percentage of simplifications featuring one of different rewriting transformations operated in ASSET, TurkCorpus and HSplit. A simplification is considered as compressed when its character length is less than 75% of that of the original sentence. HSplit. Table 3 highlights some of these statistics. In particular, we report the percentage of sentences that: have at least one sentence split, have a compression level of 75% or lower, have at least one reordered word, are exact copies of the original sentences, and operated word deletion only (e.g. by removing only an adverb). Sentence splits are practically non-existent in TurkCorpus (only 4.6% have one split or more), and are more present and distributed in HSplit. In ASSET, annotators tended to not split sentences, and those who did mostly divided the original sentence into just two sentences (1 split). Compression is a differentiating feature of ASSET. Both TurkCorpus and HSplit have high density of a compression ratio of 1.0, which means that no compression was performed. In fact, HSplit has several instances with compression levels greater than 1.0, which could be explained by splitting requiring adding words to preserve fluency. In contrast, ASSET offers more variability, perhaps signalling that annotators consider deleting information as an important simplification operation. By analysing replace-only Levenshtein distance, we can see that simplifications in ASSET paraphrase the input more. For TurkCorpus and HSplit, most simplifications are similar to their original counterparts (higher densities closer to 0). On the other hand, ASSET’s simplifications are distributed in all levels, indicating more diversity in the rewordings performed. This observation is complemented by the distributions of deleted, added and reordered words. Both TurkCorpus and HSplit have high densities of ratios close to 0.0 in all these features, while ASSET’s are more distributed. Moreover, these ratios are rarely equal to 0 (low density), meaning that for most simplifications, at least some effort was put into rewriting the original sentence. This is comfirmed by the low percentage of exact matches in ASSET (0.4%) with respect to TurkCorpus (16.3%) and HSplit (26.5%). Once again, it suggests that more rewriting transformations are being performed in ASSET. In terms of lexical complexity, HSplit has a high density of ratios close to 1.0 due to its simplifications being structural and not lexical. TurkCorpus offers more variability, as expected, but still their simplifications contain a high number of words that are equally complex, perhaps due to most simplifications just changing a few words. On the other hand, ASSET’s simplifications are more distributed across different levels of reductions in lexical complexity. Finally, all datasets show high densities of a 1.0 ratio in dependency tree depth. This could mean that significant structural changes were not made, which is indicated by most instances corresponding 4674 to operations other than splitting. However, ASSET still contains more simplifications that reduce syntactic complexity than TurkCorpus and HSplit. 5 Rating Simplifications in ASSET Here we measure the quality of the collected simplifications using human judges. In particular, we study if the abstractive simplifications in ASSET (test set) are preferred over lexical-paraphrase-only or splitting-only simplifications in TurkCorpus (test set) and HSplit, respectively. 5.1 Collecting Human Preferences Preference judgments were crowdsourced with a protocol similar to that of the simplifications (§ 3.1). Selecting Human Judges. Workers needed to comply with the same basic requirements as described in § 3.1. For this task, the Qualification Test (QT) consisted in rating the quality of simplifications based on three criteria: fluency (or grammaticality), adequacy (or meaning preservation), and simplicity. Each HIT consisted of six originalsimplified sentence pairs, and workers were asked to use a continuous scale (0-100) to submit their level of agreement (0: Strongly disagree, 100: Strongly agree) with the following statements: 1. The Simplified sentence adequately expresses the meaning of the Original, perhaps omitting the least important information. 2. The Simplified sentence is fluent, there are no grammatical errors. 3. The Simplified sentence is easier to understand than the Original sentence. Using continuous scales when crowdsourcing human evaluations is common practice in Machine Translation (Bojar et al., 2018; Barrault et al., 2019), since it results in higher levels of interannotator consistency (Graham et al., 2013). The six sentence pairs for the Rating QT consisted of: • Three submissions to the Annotation QT, manually selected so that one contains splitting, one has a medium level of compression, and one contains grammatical and spelling mistakes. These allowed to check that the particular characteristics of each sentence pair affect the corresponding evaluation criteria. • One sentence pair extracted from WikiLarge (Zhang and Lapata, 2017) that contains several sentence splits. This instance appeared twice in the HIT and allowed checking for intra-annotator consistency. • One sentence pair from WikiLarge where the Original and the Simplification had no relation to each other. This served to check the attention level of the worker. All submitted ratings were manually reviewed to validate the quality control established and to select the qualified workers for the task. Preference Task. For each of the 359 original sentences in the test set, we randomly sampled one reference simplification from ASSET and one from TurkCorpus, and then asked qualified workers to choose which simplification answers best each of the following questions: • Fluency: Which sentence is more fluent? • Meaning: Which sentence expresses the original meaning the best? • Simplicity: Which sentence is easier to read and understand? Workers were also allowed to judge simplifications as “similar” when they could not determine which one was better. The same process was followed to compare simplifications in ASSET against those in HSplit. Each HIT consisted of 10 sentence pairs. 5.2 Results and Analysis Table 4 (top section) presents, for each evaluation dimension, the percentage of times a simplification from ASSET or TurkCorpus was preferred over the other, and the percentage of times they were judged as “similar”. In general, judges preferred ASSET’s simplifications in terms of fluency and simplicity. However, they found TurkCorpus’ simplifications more meaning preserving. This is expected since they were produced mainly by replacing words/phrases with virtually no deletion of content. A similar behaviour was observed when comparing ASSET to HSplit (bottom section of Table 4). In this case, however, the differences in preferences are greater than with TurkCorpus. This could indicate that changes in syntactic structure are not enough for a sentence to be consider simpler. 4675 Fluency Meaning Simplicity ASSET 38.4%* 23.7% 41.2%* TurkCorpus 22.8% 37.9%* 20.1% Similar 38.7% 38.4% 38.7% ASSET 53.5%* 17.0% 59.0%* HSplit 19.5% 51.5%* 14.8% Similar 27.0% 31.5% 26.2% Table 4: Percentages of human judges who preferred simplifications in ASSET or TurkCorpus, and ASSET or HSplit, out of 359 comparisons. * indicates a statistically significant difference between the two datasets (binomial test with p-value < 0.001). 6 Evaluating Evaluation Metrics In this section we study the behaviour of evaluation metrics for SS when using ASSET’s simplifications (test set) as references. In particular, we measure the correlation of standard metrics with human judgements of fluency, adequacy and simplicity, on simplifications produced by automatic systems. 6.1 Experimental Setup Evaluation Metrics. We analysed the behaviour of two standard metrics in automatic evaluation of SS outputs: BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016). BLEU is a precision-oriented metric that relies on the number of n-grams in the output that match n-grams in the references, independently of position. SARI measures improvement in the simplicity of a sentence based on the n-grams added, deleted and kept by the simplification system. It does so by comparing the output of the simplification model to multiple references and the original sentence, using both precision and recall. BLEU has shown positive correlation with human judgements of grammaticality and meaning preservation (ˇStajner et al., 2014; Wubben et al., 2012; Xu et al., 2016), while SARI has high correlation with judgements of simplicity gain (Xu et al., 2016). In our experiments, we used the implementations of these metrics available in the EASSE package for automatic sentence simplification evaluation (Alva-Manchego et al., 2019).5 We computed all the scores at sentence-level as in the experiment by Xu et al. (2016), where they compared sentencelevel correlations of FKGL, BLEU and SARI with human ratings. We used a smoothed sentence-level version of BLEU so that comparison is possible, 5https://github.com/feralvam/easse even though BLEU was designed as a corpus-level metric. System Outputs. We used publicly-available simplifications produced by automatic SS systems: PBSMT-R (Wubben et al., 2012), which is a phrase-based MT model; Hybrid (Narayan and Gardent, 2014), which uses phrase-based MT coupled with semantic analysis; SBSMT-SARI (Xu et al., 2016), which relies on syntax-based MT; NTSSARI (Nisioi et al., 2017), a neural sequence-tosequence model with a standard encoder-decoder architecture; and ACCESS (Martin et al., 2020), an encoder-decoder architecture conditioned on explicit attributes of sentence simplification. Collection of Human Ratings. We randomly chose 100 original sentences from ASSET and, for each of them, we sampled one system simplification. The automatic simplifications were selected so that the distribution of simplification transformations (e.g. sentence splitting, compression, paraphrases) would match that from human simplifications in ASSET. That was done so that we could obtain a sample that has variability in the types of rewritings performed. For each sentence pair (original and automatic simplification), we crowdsourced 15 human ratings on fluency (i.e. grammaticality), adequacy (i.e. meaning preservation) and simplicity, using the same worker selection criteria and HIT design of the Qualification Test as in § 5.1. 6.2 Inter-Annotator Agreement We followed the process suggested in (Graham et al., 2013). First, we normalised the scores of each rater by their individual mean and standard deviation, which helps eliminate individual judge preferences. Then, the normalised continuous scores were converted to five interval categories using equally spaced bins. After that, we followed Pavlick and Tetreault (2016) and computed quadratic weighted Cohen’s κ (Cohen, 1968) simulating two raters: for each sentence, we chose one worker’s rating as the category for annotator A, and selected the rounded average scores for the remaining workers as the category for annotator B. We then computed κ for this pair over the whole dataset. We repeated the process 1,000 times to compute the mean and variance of κ. The resulting values are: 0.687 ± 0.028 for Fluency, 0.686 ± 0.030 for Meaning and 0.628 ± 0.032 for Simplicity. All values point to a moderate level 4676 Metric References Fluency Meaning Simplicity BLEU ASSET 0.42* 0.61* 0.31* TurkCorpus 0.35* 0.59* 0.18 SARI ASSET 0.16 0.13 0.28* TurkCorpus 0.14 0.10 0.17 Table 5: Pearson correlation of human ratings with automatic metrics on system simplifications. * indicates a significance level of p-value < 0.05. of agreement, which is in line with the subjective nature of the simplification task. 6.3 Correlation with Evaluation Metrics We computed the Pearson correlation between the normalised ratings and the evaluation metrics of our interest (BLEU and SARI) using ASSET or TurkCorpus as the set of references. We refrained from experimenting with HSplit since neither BLEU nor SARI correlate with human judgements when calculated using that dataset as references (Sulem et al., 2018a). Results are reported in Table 5. BLEU shows a strong positive correlation with Meaning Preservation using either simplifications from ASSET or TurkCorpus as references. There is also some positive correlation with Fluency judgements, but that is not always the case for Simplicity: no correlation when using TurkCorpus and moderate when using ASSET. This is in line with previous studies that have shown that BLEU is not a good estimate for simplicity (Wubben et al., 2012; Xu et al., 2016; Sulem et al., 2018b). In the case of SARI, correlations are positive but low with all criteria and significant only for simplicity with ASSET’s references. Xu et al. (2016) showed that SARI correlated with human judgements of simplicity gain, when instructing judges to “grade the quality of the variations by identifying the words/phrases that are altered, and counting how many of them are good simplifications”.6 The judgements they requested differ from the ones we collected, since theirs were tailored to rate simplifications produced by lexical paraphrasing only. These results show that SARI might not be suitable for the evaluation of automatic simplifications with multiple rewrite operations. In Table 6, we further analyse the human ratings collected, and compute their correlations with similar text features as in § 4. The results shown re6https://github.com/cocoxu/ simplification/tree/master/HIT_MTurk_ crowdsourcing Feature Fluency Meaning Simplicity Length 0.12 0.31* 0.03 Sentence Splits -0.13 -0.06 -0.08 Compression Level 0.26* 0.46* 0.04 Levenshtein Distance -0.40* -0.67* -0.18 Replace-only Lev. Dist. -0.04 -0.17 -0.06 Prop. Deleted Words -0.43* -0.67* -0.19 Prop. Added Words -0.19 -0.38* -0.12 Prop. Reordered Words -0.37* -0.57* -0.18 Dep. Tree Depth Ratio 0.20 0.24 0.06 Word Rank Ratio 0.04 0.08 -0.05 Table 6: Pearson correlation of human ratings with text features on system simplifications. * indicates a significance level of p-value < 0.01. inforce our previous observations that judgements on Meaning correlate with making few changes to the sentence: strong negative correlation with Levenshtein distance, and strong negative correlation with proportion of words added, deleted, and reordered. No conclusions could be drawn with respect to Simplicity. 7 Conclusion We have introduced ASSET, a new dataset for tuning and evaluation of SS models. Simplifications in ASSET were crowdsourced, and annotators were instructed to apply multiple rewriting transformations. This improves current publicly-available evaluation datasets, which are focused on only one type of transformation. Through several experiments, we have shown that ASSET contains simplifications that are more abstractive, and that are consider simpler than those in other evaluation corpora. Furthermore, we have motivated the need to develop new metrics for automatic evaluation of SS models, especially when evaluating simplifications with multiple rewriting operations. Finally, we hope that ASSET’s multi-transformation features will motivate the development of SS models that benefit a variety of target audiences according to their specific needs such as people with low literacy or cognitive disabilities. Acknowledgements This work was partly supported by Benoˆıt Sagot’s chair in the PRAIRIE institute, funded by the French national agency ANR as part of the “Investissements d’avenir” programme under the reference ANR-19-P3IA-0001. 4677 References Sandra M. Alu´ısio, Lucia Specia, Thiago A. S. Pardo, Erick G. Maziero, Helena M. Caseli, and Renata P. M. Fortes. 2008. A corpus analysis of simple account texts and the proposal of simplification strategies: First steps towards text simplification systems. In Proceedings of the 26th Annual ACM International Conference on Design of Communication, SIGDOC ’08, pages 15–22, Lisbon, Portugal. ACM. Fernando Alva-Manchego, Joachim Bingel, Gustavo Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 295–305, Taipei, Taiwan. Asian Federation of Natural Language Processing. Fernando Alva-Manchego, Louis Martin, Carolina Scarton, and Lucia Specia. 2019. EASSE: Easier automatic sentence simplification evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 49–54, Hong Kong, China. Association for Computational Linguistics. Fernando Alva-Manchego, Carolina Scarton, and Lucia Specia. 2020. Data-driven sentence simplification: Survey and benchmark. Computational Linguistics, 46(1):135–187. Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303, Belgium, Brussels. Association for Computational Linguistics. Stefan Bott and Horacio Saggion. 2011. Spanish text simplification: An exploratory study. Procesamiento del Lenguaje Natural, 47:87–95. Dominique Brunato, Lorenzo De Mattei, Felice Dell’Orletta, Benedetta Iavarone, and Giulia Venturi. 2018. Is this sentence difficult? do you agree? In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2690–2699, Brussels, Belgium. Association for Computational Linguistics. John Carroll, Guido Minnen, Yvonne Canning, Siobhan Devlin, and John Tait. 1998. Practical simplification of english newspaper text to assist aphasic readers. In Proceedings of AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology, pages 7–10. R. Chandrasekar, Christine Doran, and B. Srinivas. 1996. Motivations and methods for text simplification. In Proceedings of the 16th Conference on Computational Linguistics, volume 2 of COLING ’96, pages 1041–1044, Copenhagen, Denmark. Association for Computational Linguistics. Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213–220. William Coster and David Kauchak. 2011. Simple english wikipedia: A new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT ’11, pages 665–669, Stroudsburg, PA, USA. Association for Computational Linguistics. Richard Evans, Constantin Orasan, and Iustin Dornescu. 2014. An evaluation of syntactic simplification rules for people with autism. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations, PIT 2014, pages 131–140, Gothenburg, Sweden. Association for Computational Linguistics. Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33–41, Sofia, Bulgaria. Association for Computational Linguistics. Eva Hasler, Adri de Gispert, Felix Stahlberg, Aurelien Waite, and Bill Byrne. 2017. Source sentence simplification for statistical machine translation. Computer Speech & Language, 45(C):221–235. William Hwang, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning Sentences from Standard Wikipedia to Simple Wikipedia. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 211–217, Denver, Colorado. Association for Computational Linguistics. 4678 VI Levenshtein. 1966. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Soviet Physics Doklady, 10:707. Edward Loper and Steven Bird. 2002. NLTK: the natural language toolkit. CoRR, cs.CL/0205028. Louis Martin, Samuel Humeau, Pierre-Emmanuel Mazar´e, ´Eric de La Clergerie, Antoine Bordes, and Benoˆıt Sagot. 2018. Reference-less quality estimation of text simplification systems. In Proceedings of the 1st Workshop on Automatic Text Adaptation (ATA), pages 29–38, Tilburg, the Netherlands. ACL. Louis Martin, Benoˆıt Sagot, ´Eric de la Clergerie, and Antoine Bordes. 2020. Controllable sentence simplification. In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC 2020). Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 435–445, Baltimore, Maryland. Association for Computational Linguistics. Sergiu Nisioi, Sanja ˇStajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 85–91, Vancouver, Canada. Association for Computational Linguistics. Charles Kay Ogden. 1930. Basic English: A General Introduction with Rules and Grammar. Kegan Paul, Trench, Trubner & Co. Gustavo Paetzold and Lucia Specia. 2016. SemEval 2016 task 11: Complex word identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 560– 569, San Diego, California. Association for Computational Linguistics. Gustavo Henrique Paetzold. 2016. Lexical Simplification for Non-Native English Speakers. Ph.D. thesis, University of Sheffield, Sheffield, UK. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Philadelphia, Pennsylvania. ACL. Ellie Pavlick and Joel Tetreault. 2016. An empirical analysis of formality in online communication. Transactions of the Association for Computational Linguistics, 4:61–74. David Pellow and Maxine Eskenazi. 2014a. An open corpus of everyday documents for simplification tasks. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 84–93, Gothenburg, Sweden. Association for Computational Linguistics. David Pellow and Maxine Eskenazi. 2014b. Tracking human process using crowd collaboration to enrich data. In Human Computation and Crowdsourcing: Works in Progress and Demonstration Abstracts. An Adjunct to the Proceedings of the Second AAAI Conference on Human Computation and Crowdsourcing, pages 52–53. Sarah E. Petersen. 2007. Natural Language Processing Tools for Reading Level Assessment and Text Simplification for Bilingual Education. Ph.D. thesis, University of Washington, Seattle, WA, USA. AAI3275902. Sarah E. Petersen and Mari Ostendorf. 2007. Text simplification for language learners: a corpus analysis. In Proceedings of the Speech and Language Technology for Education Workshop, SLaTE 2007, pages 69–72. Luz Rello, Clara Bayarri, Azuki G`orriz, Ricardo BaezaYates, Saurabh Gupta, Gaurang Kanvinde, Horacio Saggion, Stefan Bott, Roberto Carlini, and Vasile Topac. 2013. ”dyswebxia 2.0!: More accessible text for people with dyslexia”. In Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility, W4A ’13, pages 25:1– 25:2, Rio de Janeiro, Brazil. ACM. Carolina Scarton, Gustavo H. Paetzold, and Lucia Specia. 2018. Simpa: A sentence-level simplification corpus for the public administration domain. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Sara Botelho Silveira and Ant´onio Branco. 2012. Enhancing multi-document summaries with sentence simplificatio. In Proceedings of the 14th International Conference on Artificial Intelligence,, ICAI 2012, pages 742–748, Las Vegas, USA. L´ucia Specia, Sandra Maria Alu´ısio, and Thiago A. Salgueiro Pardo. 2008. Manual de simplificac¸˜ao sint´atica para o portuguˆes. Technical Report NILC-TR-08-06, NILC–ICMC–USP, S˜ao Carlos, SP, Brasil. Available in http://www.nilc.icmc. usp.br/nilc/download/NILC_TR_08_06.pdf. Elior Sulem, Omri Abend, and Ari Rappoport. 2018a. Bleu is not suitable for the evaluation of text simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 738–744. Association for Computational Linguistics. Elior Sulem, Omri Abend, and Ari Rappoport. 2018b. Semantic structural evaluation for text simplification. In Proceedings of the 2018 Conference of the 4679 North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 685–696, New Orleans, Louisiana. Association for Computational Linguistics. Sanja ˇStajner, Marc Franco-Salvador, Paolo Rosso, and Simone Paolo Ponzetto. 2018. Cats: A tool for customized alignment of text simplification corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Sanja ˇStajner, Ruslan Mitkov, and Horacio Saggion. 2014. One step closer to automatic evaluation of text simplification systems. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 1–10, Gothenburg, Sweden. Association for Computational Linguistics. Sanja ˇStajner, Maja Popovi´c, Horacio Saggion, Lucia Specia, and Mark Fishel. 2016. Shared task on quality assessment for text simplification. In Proceeding of the Workshop on Quality Assessment for Text Simplification - LREC 2016, QATS 2016, pages 22–31, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 1015–1024, Stroudsburg, PA, USA. Association for Computational Linguistics. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283–297. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Taha Yasseri, Andr´as Kornai, and J´anos Kert´esz. 2012. A practical approach to language complexity: A wikipedia case study. PLOS ONE, 7(11):1–8. Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 595–605, Copenhagen, Denmark. Association for Computational Linguistics. Sanqiang Zhao, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018. Integrating transformer and paraphrase rules for sentence simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3164–3173, Brussels, Belgium. Association for Computational Linguistics. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 1353–1361, Stroudsburg, PA, USA. Association for Computational Linguistics.
2020
424
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4680–4686 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4680 Fatality Killed the Cat or: BabelPic, a Multimodal Dataset for Non-Concrete Concepts Agostina Calabrese, Michele Bevilacqua and Roberto Navigli Sapienza NLP Group Department of Computer Science Sapienza University of Rome {calabrese.a,bevilacqua,navigli}@di.uniroma1.it Abstract Thanks to the wealth of high-quality annotated images available in popular repositories such as ImageNet, multimodal language-vision research is in full bloom. However, events, feelings and many other kinds of concepts which can be visually grounded are not well represented in current datasets. Nevertheless, we would expect a wide-coverage language understanding system to be able to classify images depicting RECESS and REMORSE, not just CATS, DOGS and BRIDGES. We fill this gap by presenting BabelPic, a hand-labeled dataset built by cleaning the image-synset association found within the BabelNet Lexical Knowledge Base (LKB). BabelPic explicitly targets nonconcrete concepts, thus providing refreshing new data for the community. We also show that pre-trained language-vision systems can be used to further expand the resource by exploiting natural language knowledge available in the LKB. BabelPic is available for download at http://babelpic.org. 1 Introduction There is growing research interest in developing effective systems capable of achieving some understanding of the content of an image. As in most fields of applied AI, this requires annotated data to train a supervised system on. While ImageNet1 (Deng et al., 2009), one of the most influential projects in computer vision, was undeniably an important milestone towards image understanding, there is still a lot of ground to be covered. ImageNet’s initial aim was to collect pictures for most WordNet synsets (Miller, 1995). Yet, at the time of writing, only some 21,841 nominal synsets are covered according to ImageNet’s official website. One issue with ImageNet and most other image repositories like COCO (Lin et al., 2014) and 1http://www.image-net.org Flickr30kEntities (Plummer et al., 2015) is their focus on concepts denoting concrete, tangible things, such as CAT, TRAFFIC LIGHT and so on. Concepts whose denotation is not clearly identifiable with a set of objects having distinct boundaries, such as events (e.g., FATALITY, COMPETITION), emotions (e.g., SADNESS) and psychological features (e.g., SHARPNESS), have enjoyed less attention. For lack of a better term, we will henceforth refer to them as non-concrete (NC) concepts. On one hand, the inclusion of NC concepts would be an important step towards wide-coverage image semantic understanding. On the other hand, it also goes in the same direction as recent multimodal language-vision approaches, e.g., monoand cross-lingual Visual Sense Disambiguation (Barnard and Johnson, 2005; Loeff et al., 2006; Saenko and Darrell, 2008; Gella et al., 2016, 2019). Taking into account NC concepts could also be of crucial importance for fascinating languagefocused applications, such as Multimodal Machine Translation. Last but not least, NC concepts would represent a significative benchmark for real-world multimodal applications. In fact, traditional computer vision approaches rely on the detection of objects within the image, but many NC concepts are not well described by a bag of objects. Consider, for instance, Figure 1. The two images illustrate different NC concepts (i.e., HIGH JUMP and POLE VAULT) which are different configurations of the same elementary objects (i.e., PERSON, ROD, BLEACHERS). Thus, NC concepts require complex image understanding, integrating a fair amount of common sense knowledge. As a contribution towards this goal of expanding the scope of research, we introduce BabelPic, the first dataset for multimodal language-vision tasks with a focus on NC concepts and that is also linked to WordNet. BabelPic has been built by manually validating synset-image associations available in 4681 Figure 1: Two images described by the same bag of visual words but illustrating different NC concepts (i.e., high jump and pole vault). BabelNet (Navigli and Ponzetto, 2012), a large multilingual resource linking WordNet to Wikipedia and other resources. Furthermore, we provide a methodology to extend the BabelPic coverage to all the BabelNet synsets. To this end, we adapt the recently introduced Vision-Language Pre-training (VLP) model (Zhou et al., 2020). We define the verification of synset-image associations as a Visual Question Answering (VQA) task with two possible answers. The evaluation demonstrates that our methodology achieves high performances on zero-shot classification as well, thus enabling verification across the inventory. Thanks to the automatic production of a silver dataset, BabelPic constitutes a significant extension of ImageNet. A few examples from BabelPic (both gold and silver) are shown in Figure 2. 2 Related Work To the best of our knowledge, no dataset of annotated images exists which has a focus on NC nominal and verbal concepts and is also linked to Lexical Knowledge Bases (LKB) such as WordNet and BabelNet. For example, the very popular ImageNet dataset, which includes images belonging to around 21,800 categories organized according to the WordNet nominal hierarchy, offers only sparse coverage of NC concepts. JFT (Hinton et al., 2015; Chollet, 2017; Sun et al., 2017) is an internal dataset at Google containing 300M images annotated with over 19,000 classes including objects, scenes (e.g., SUNSET), events (e.g., BIRTHDAY) and attributes (e.g., RED). JFT differs from our work in not being linked to an LKB and in not being publicly released. The Open Images dataset (Kuznetsova et al., 2018) contains 9M images annotated with 19,794 classes taken from JFT. While Open Images does contain NC labels, the classes are not linked to an LKB, thus limiting their usefulness. The Tencent ML-Images dataset (Wu et al., 2019) was created starting from a subset of ImageNet and Open Images and includes images annotated with 11,166 categories, which are then linked to WordNet synsets. The dataset differs from our work since any NC label has been explicitly discarded. Our work is in some sense similar to MultiSense (Gella et al., 2019) and VerSe (Gella et al., 2016), two datasets including images annotated with verbal senses. However, MultiSense is not directly linked to an LKB and neither of these two datasets deals with nominal synsets. Finally, we note that datasets including images annotated with objectlevel categories (Lin et al., 2014; Plummer et al., 2015) or videos (Loui et al., 2007; Doll´ar et al., 2009; Moneglia et al., 2014; Heilbron et al., 2015; Abu-El-Haija et al., 2016) are outside the scope of this work, since we are only interested in the main NC concepts depicted within images. 3 Gold Dataset BabelPic is built by exploiting the link between WordNet (Miller, 1995) and Wikipedia within BabelNet2 (Navigli and Ponzetto, 2012). Our approach is organised in a three-step process. First, we select a set of NC synsets from WordNet, on the basis of both their paradigmatic nature and relations in the knowledge base. Second, we gather all the corresponding images in BabelNet, which are themselves mostly taken from Wikipedia pages. Third, we manually validate the synset-images mapping. Note that, having defined the task as a validation of concept-image associations, we do allow images to be mapped to more than one concept and vice versa. For instance, both images in Figure 1 could be mapped to the concept COMPETITION as well. The result is a gold dataset containing 2,733 synsets and 14,931 images. 3.1 Synset selection We decided to build our gold dataset starting from concepts related to events and emotions because these have been shown to be the most appealing NC concepts for the multimodal and vision communities (see Section 2). As a first step towards this goal, we select the nominal synsets belonging to the transitive closure of the hyponymy relation, rooted in the following set of WordNet synsets: {feeling.n.01,event.n.01}. To ensure that only NC concepts are selected, we filter out any synset connected by the hypernymy relation to at least one of the following synsets: physical entity.n.01, 2https://babelnet.org 4682 Figure 2: A few examples from BabelPic, both gold (G) and silver (S). shape.n.02, color.n.01. This is done in order to discard concepts denoting tangible things that inherit from abstraction.n.06 in WordNet (e.g., THUNDERBOLT). Furthermore, we select all the synsets belonging to the following WordNet lexicographer files: verb.competition, verb.motion and verb.social. This is done to create a dataset with an explicit focus on events, properties and verbs. As a second step, we discard all the concepts belonging to either the mathematics or the physics domains since images are often not relevant (e.g., ROUNDING). Finally, we associate each selected synset with the first 15 corresponding images in BabelNet 4.0. Note that, in order to improve the quality of the dataset, we filter out images on the basis of simple heuristics. For example, we filter out all images where transparency is used and at least half of the pixels are white-coloured, as these are not likely be relevant. Most of the noise images from Wikipedia are removed as a result of this step. 3.2 Manual validation The synset-image associations found are manually validated during phase 3. We have decided to use the services of two expert annotators who are familiar with the BabelNet resource, and the whole annotation process is performed through an ad hoc graphical interface. Annotators are shown tuples in the form ⟨s, l, g, i⟩, where s is the target synset, i is a candidate image for s, and l and g are, respectively, the main lemma and gloss (i.e., definition) for s. Annotators are asked to answer the question “is i pertinent to g?”. Possible answers are yes (i.e., i is an illustration of g), no (i.e., i is either not pertinent or in contradiction with g) and discard (i.e., i is a bad image). To maximize coverage, each annotator is assigned roughly half of the concept-image association candidates. However, in order to establish and agree on possible useful guidelines for the evaluation, annotators are asked to collaboratively perform the validation of a first sample of 500 instances. We also provide them with a few extra directions. For instance, we ask them to discard images in which the association cannot be verified without reading text depicted in the image. In addition to this collaboratively annotated sample, we select an intersection of 100 annotation instances which we then use to obtain an inter-annotator agreement figure. The level of agreement achieved is 80.39%, with a κ value of 0.6078 (moderate agreement). As for these shared examples, we include in our gold dataset only those instances that have been approved by both annotators. Our gold dataset is hence composed of all the validated synset-image associations. 4 Model Since manual validation is time consuming, we are interested in developing a methodology for the automatic verification of synset-image associations. In the recent past there has been a great research effort to develop models for vision-language pretraining. Many such models (e.g., VLP (Zhou et al., 2020), VisualBERT (Li et al., 2019), ViLBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019)) are built upon BERT (Devlin et al., 2019), a popular system for contextualized embeddings. BERT-based models achieve state-of-the-art scores on many language-vision tasks, hence they represent a promising resource for our task. 4683 The system that we use to perform classification is the fine-tuned VLP model. Despite the fact that LXMERT (Tan and Bansal, 2019) achieves a slightly higher score on yes/no questions on the VQA 2.0 dataset (Goyal et al., 2017), our preference goes for the VLP system since it is pre-trained on a wider and more general dataset. More specifically, the VLP model is pre-trained on Conceptual Captions (CC) (Sharma et al., 2018), a dataset including more than 3M image-caption pairs, using two unsupervised vision-language tasks: bidirectional and sequence-to-sequence masked language prediction. The input images are preprocessed using Faster R-CNN (Ren et al., 2015) pre-trained on Visual Genome (Krishna et al., 2017; Anderson et al., 2018), hence obtaining 100 object regions per image. The model input consists of both class-aware region embeddings and word embeddings, the former obtained by combining the corresponding region features with the probability of each object label and region geometric information. Furthermore, a Multi-Layer Perceptron (MLP) is trained during the fine-tuning phase in order to select the chosen answer starting from the hidden state of the encoder. In order to adapt the VLP model to extend the BabelPic coverage to all the BabelNet synsets, we define the verification of synset-image associations as a VQA task with two possible answers. More specifically, we define a question template as in the following: “Does the image depict l (g)?” where l is the main lemma and g is the WordNet gloss of the target synset. We instantiate our template for each synset-image pair in the dataset, thus obtaining a textual question for each instance. We set the ground truth answers to either “yes” or “no”, hence reducing our classification task to VQA. 5 Experiments To test the reliability of our approach for the automatic verification of concept-image associations we experiment in a zero-shot setting (see Section 5.3). As a first step toward this goal, we need to augment our dataset with negative instances (see Section 5.1) and select the most suitable VLP version (see Section 5.2). A deeper analysis of how the sampling of negative instances affects the performances of the system is described in Section 5.4. 5.1 Setting In order to evaluate our methodology for the automatic verification of synset-image associations, we need to define a procedure for the generation of negative instances (i.e., irrelevant ⟨synset, image⟩ pairs). More specifically, we define a negative instance ⟨s, i⟩by picking two different synsets s and s′ and an image i associated with s′ from our gold dataset. Negative instances can be distinguished on the basis of the relation connecting s to s′: Sibling: there exists a synset s′′ in BabelNet s.t. both s and s′ are connected to s′′ by the hypernymy relation (e.g., FUN RUN and MARATHON). Polysemy: both s and s′ contain the same lemma (e.g., the synsets of swim.v.01 and swim.v.02). Unrelated: there exists no relation connecting s to s′ in BabelNet (e.g., RACING and GLADFULNESS). Exploiting the WordNet relations as mentioned above is also very effective in handling any potential issue due to images that are instances of multiple concepts. For instance, the images in Figure 1 could never be used as negative examples for COMPETITION because of the hyponymy relation connecting this concept to HIGH JUMP and POLE VAULT. Moreover, we manually validated a sample of the negative examples in order to ensure the reliability of our methodology. The result is a dataset which is perfectly balanced between the two output classes. We split the dataset into training, validation and test sets following the 80%/10%/10% rule. Each class is proportionally distributed between the splits, as well as the relations used to define the negative instances. In order to test the system’s capability to handle previously unseen concepts, we force both the validation and test sets to contain also instances referring to synsets that are not present in the training set. We refer to the subset of the test set given by these instances as the zero-shot test. Statistics are reported in Table 1. 5.2 Pre-Trained vs. Fine-Tuned In this work we refer to the VLP3 model (Zhou et al., 2020) pre-trained on CC and fine-tuned for the VQA task on the VQA 2.0 dataset as, respectively, P-VLP and F-VLP. Note that both P-VLP 3https://github.com/LuoweiZhou/VLP 4684 Split N C I S(%) P(%) Training 23,891 2,618 13,311 10.20 1.95 Validation 2,986 1,442 2,740 10.18 1.98 Test 2,987 1,416 2,715 10.21 1.94 Zero-Shot 502 43 490 11.55 2.19 Table 1: Overview of the BabelPic’s splits: number of instances (N), concepts (C), images (I) and distribution of instances labelled as sibling (S) and polysemy (P). Model Validation Test Zero-Shot P F1 P F1 P F1 P-VLP 71.93 78.97 72.48 79.33 71.43 77.90 F-VLP 76.14 77.50 75.94 75.99 77.67 71.67 Table 2: Precision and F1 scores (as percentages) on the verification of synset-image associations. and F-VLP are then further fine-tuned for the verification of concept-image associations on BabelPic’s training split. Our experiments show that both systems are reliable on our task, achieving precision and F1 scores that are over 70% on all the splits (see Table 2). However, the F-VLP model proves to be the most stable for the task. In fact, in a common use case scenario it is more important to accept only correct synset-image associations than it is to detect all the correct pairs. More specifically, we value precision over recall, and thus prefer the fine-tuned VLP model. 5.3 Zero-Shot Classification Our main interest is in developing a model capable of annotating images with synsets even when the target concept is new to the system (i.e., zero-shot). As shown in the last column of Table 2, both the P-VLP and F-VLP models are robust to zero-shot classification, achieving scores that are comparable to the performances registered on the other splits. The F-VLP system, in particular, is able to verify the associations between unseen synsets and images with precision 77.67%, hence enabling the automatic extension of BabelPic to any other synset. 5.4 Fine-Grained Analysis Finally, we analyse the system performances on the different types of negative instances. The accuracy scores achieved by F-VLP are listed in Table 3. As one would expect, when the input synset-image pair is unrelated, the system is able to correctly Relation Validation Test Zero-Shot Unrelated 83.98 83.63 89.01 Sibling 51.64 53.11 62.07 Polysemy 30.51 44.83 45.45 Table 3: Accuracy scores (as percentages) achieved by F-VLP on all the different types of negative instances. classify most of the instances. When considering the instances labelled as sibling, the difficulty level increases and F-VLP achieves an accuracy score of 62.07%. This is not surprising when it is considered that discriminating between images representing sibling concepts (e.g., DISAPPOINTMENT and BOREDOM) can be tricky for humans as well. Finally, the instances labelled as polysemy prove to be the hardest ones, demonstrating that BabelPic can be an interesting benchmark for Visual Sense Disambiguation as well. The performances achieved by P-VLP follow the same trend. 6 Conclusions In this work we introduced BabelPic, a new resource for language-vision tasks, built by validating the existing image-to-synset associations in the BabelNet resource. BabelPic is innovative in being the first dataset with a focus on nominal and verbal non-concrete concepts linked to the WordNet and BabelNet Lexical Knowledge Bases. Furthermore, we presented a methodology to extend the resource by fine-tuning VLP, a state-of-the-art pre-trained language-vision architecture. In our approach, we automatically verify the synset-image associations by exploiting the natural language definitions in WordNet, showing strong results on zero-shot classification as well. We exploited our method for the automatic generation of a widecoverage silver dataset containing around 10,013 synsets. We make BabelPic (both gold and silver data) available to the community for download at http://babelpic.org. Acknowledgments The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union’s Horizon 2020 research and innovation programme. 4685 References Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. 2016. YouTube-8M: A large-scale video classification benchmark. CoRR, abs/1609.08675. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 6077–6086. IEEE Computer Society. Kobus Barnard and Matthew Johnson. 2005. Word sense disambiguation with pictures. Artif. Intell., 167(1-2):13–30. Franc¸ois Chollet. 2017. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1800–1807. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 248–255. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Piotr Doll´ar, Christian Wojek, Bernt Schiele, and Pietro Perona. 2009. Pedestrian detection: A benchmark. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 304–311. Spandana Gella, Desmond Elliott, and Frank Keller. 2019. Cross-lingual visual verb sense disambiguation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1998–2004, Minneapolis, Minnesota. Association for Computational Linguistics. Spandana Gella, Mirella Lapata, and Frank Keller. 2016. Unsupervised visual sense disambiguation for verbs using multimodal embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 182–192, San Diego, California. Association for Computational Linguistics. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 6325–6334. Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. 2015. ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 712, 2015, pages 961–970. IEEE Computer Society. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper R. R. Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, and Vittorio Ferrari. 2018. The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale. CoRR, abs/1811.00982. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. VisualBERT: A simple and performant baseline for vision and language. CoRR, abs/1908.03557. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. In Proceedings of Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Part V, volume 8693 of Lecture Notes in Computer Science, pages 740–755. Springer. Nicolas Loeff, Cecilia Ovesdotter Alm, and David A. Forsyth. 2006. Discriminating image senses by clustering with multimodal features. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 547–554, Sydney, Australia. Association for Computational Linguistics. Alexander C. Loui, Jiebo Luo, Shih-Fu Chang, Dan Ellis, Wei Jiang, Lyndon S. Kennedy, Keansub Lee, and Akira Yanagawa. 2007. Kodak’s consumer video benchmark data set: concept Definition and annotation. In Proceedings of the 9th ACM SIGMM 4686 International Workshop on Multimedia Information Retrieval, MIR 2007, Augsburg, Bavaria, Germany, September 24-29, 2007, pages 245–254. ACM. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Proceedings of the Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 13–23. George A. Miller. 1995. WordNet: A lexical database for english. Commun. ACM, 38(11):39–41. Massimo Moneglia, Susan Brown, Francesca Frontini, Gloria Gagliardi, Fahad Khan, Monica Monachini, and Alessandro Panunzi. 2014. The IMAGACT visual ontology. An extendable multilingual infrastructure for the representation of lexical encoding of action. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland, May 26-31, 2014, pages 3425–3432. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artif. Intell., 193:217–250. Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In Proceedings of the 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2641–2649. IEEE Computer Society. Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 91–99. Kate Saenko and Trevor Darrell. 2008. Unsupervised learning of visual sense models for polysemous words. In Proceedings of the Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 8-11, 2008, pages 1393–1400. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565, Melbourne, Australia. Association for Computational Linguistics. Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. 2017. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the 2017 IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 843–852. IEEE Computer Society. Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5099–5110. Association for Computational Linguistics. Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, and Tong Zhang. 2019. Tencent ML-Images: A large-scale multi-label image database for visual representation learning. IEEE Access, 7:172683–172693. Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and VQA. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020.
2020
425
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4687–4692 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4687 Modeling Label Semantics for Predicting Emotional Reactions Radhika Gaonkar, Heeyoung Kwon, Mohaddeseh Bastan, Niranjan Balasubramanian Stony Brook University, Stony Brook, New York {rgaonkar, heekwon, mbastan, niranjan}@cs.stonybrook.edu Nathanael Chambers US Naval Academy, Annapolis, MD [email protected] Abstract Predicting how events induce emotions in the characters of a story is typically seen as a standard multi-label classification task, which usually treats labels as anonymous classes to predict. They ignore information that may be conveyed by the emotion labels themselves. We propose that the semantics of emotion labels can guide a model’s attention when representing the input story. Further, we observe that the emotions evoked by an event are often related: an event that evokes joy is unlikely to also evoke sadness. In this work, we explicitly model label classes via label embeddings, and add mechanisms that track label-label correlations both during training and inference. We also introduce a new semi-supervision strategy that regularizes for the correlations on unlabeled data. Our empirical evaluations show that modeling label semantics yields consistent benefits, and we advance the state-of-theart on an emotion inference task. 1 Introduction Understanding how events in a story affect the characters involved is an integral part of narrative understanding. Rashkin et al. (2018) introduced an emotion inference task on a subset of the ROCStories dataset (Mostafazadeh et al., 2016), labeling entities with the emotions they experience from the short story contexts. Previous work on this and related tasks typically frame them as multi-label classification problems. The standard approach uses an encoder that produces a representation of the target event along with the surrounding story events, and then pushes it through a classification layer to predict the possible emotion labels (Rashkin et al., 2018; Wang et al., 2018). This classification framework ignores the semantics of the emotions themselves. Each emotion label (e.g., joy) is just a binary prediction. However, consider the sentence, “Danielle was really short on money”. The emotional reaction is FEAR of being short on money. First, if a model had lexical foreknowledge of “fear”, we should expect an improved ability to decide if a target event evokes FEAR. Second, such a model might represent relationships between the emotions themselves. For example, an event that evokes FEAR is likely to evoke SADNESS and unlikely to evoke JOY. When previous models frame this as binary label prediction, they miss out on ways to leverage label semantics. In this work, we show that explicitly modeling label semantics improves emotion inference. We describe three main contributions1. First, we show how to use embeddings as the label semantics representation. We then propose a label attention network that produces label-informed representations of the event and the story context to improve prediction accuracy. Second, we add mechanisms that can make use of label-label correlations as part of both training and inference. During training, the correlations are used to add a regularization loss. During inference, the prediction logits for each label are modified to incorporate the correlations, thus allowing the model’s confidence on one label to influence its prediction of other labels. Third, we show that the label correlations can be used as a semi-supervised signal on the unlabeled portion of the ROCStories dataset. Our empirical evaluations show that adding label semantics consistently improves prediction accuracy, and produces labelings that are more consistent than models without label semantics. Our best model outperforms previously reported results and achieves more than 4.9 points absolute improvement over the BERT classification model yielding a new state-of-the-art result for this task. 1https://github.com/StonyBrookNLP/emotion-labelsemantics 4688 2 Emotion Inference The emotion inference task introduced by Rashkin et al. (2018) is defined over a subset of short stories from the ROCStories dataset (Mostafazadeh et al., 2016). It infers the reactions that each event evokes in the characters of the story, given the story context thus far. For each sentence (i.e. event) in a story, the training data includes annotations of eight emotions. Given a sentence xs denoting a single event in a story, the task is to label the possible emotional reactions that an event evokes in each character in the story. Since an event can evoke multiple reactions, the task is formulated as a multi-label classification problem. The standard approach to this task has been as follows. For a given character c and the target sentence xs, collect all previous sentences xc in the story in which the character c is mentioned as the character context. Encode the target sentence, and the character context to obtain a single representation, and use it as input to a multi-label classification layer for prediction. Rashkin et al. (2018) benchmark the performance of multiple encoders (see Section 5). We extend this previous work to integrate label semantics into the model by adding label embeddings (Section 3) and explicitly representing labellabel correlations (Section 4). 3 Label Semantics using Embeddings A simple strategy to model label semantics is to explicitly represent each with an embedding that captures the surface semantics of its label name. Since the emotion labels correspond to actual words (e.g., joy, fear, etc.), we can initialize them with their corresponding word embeddings (learned from a large corpus). We then use these label embeddings in two ways as detailed below. 3.1 Label Attention Network The label embeddings can be used to guide an encoder network to extract emotion-related information from the sentences. We adopted the Label-Embedding Attentive Network (LEAM) architecture to produce label-focused representations (Wang et al., 2018). The main idea behind the LEAM model is to compute attention scores between the label and the representations of the tokens in the input that is to be classified2. This can 2The original model used LEAM directly on top of Glove embeddings (Wang et al., 2018). Figure 1: Label-Embedding Attentive Network using BERT Features. y denotes the label attended story sentence and context representation, where α is the attention score. then be used to appropriately weight the contributions of each token to the final representations. In this work, we use LEAM to compute an attention matrix computed over the hidden states produced by the encoder and the label embeddings. The encoder used is the BERT features for each token Bt in the text and each of the label sentences J. The attention matrix is then used to produce a weighted combination of the contextual representations of the input, using the compatibility matrix H, as computed in (Wang et al., 2018). This gives emotion focused representations y to use for classification: H = (JT Bt) ⊘ˆH (1) Figure 1 illustrates the key steps in the model. 3.2 Labels as Additional Input Rather than learning label embeddings from scratch, we also explore using contextual embeddings from transformer-based models like BERT. This allows us to use richer semantics derived from pre-training and also allows us to exploit the selfattention mechanism to introduce label semantics as part of the input itself. In addition to the target and context sentences, we also include emotionlabel sentences, Ls, of the form “[character] is [emotional state]” as input to the classifier. For each instance, we add eight such sentences covering all emotional labels3. In this paper, we use the final layer of a pretrained Bert-base model to get representations for the input sentence and each of the emotion-label sentences. The selfattention mechanism will automatically learn to attend to these label sentences when constructing the representations for the input text. 3This is similar to how answer options are encoded in multiple choice question answering in transformer-based models. 4689 Figure 2: Emotion correlations as seen in the ground truth labels in the test set 4 Label Semantics using Correlations When more than one emotion is evoked by an event, they aren’t independent. Indeed, as shown in Figure 2, there are strong (positive and negative) correlations between the emotion labels in the ground truth. For instance, there is a high negative correlation (ρ = −0.9) between JOY and SAD labels and a high positive correlation between JOY and TRUST (ρ = 0.9). We propose two ways to incorporate these label correlations to improve prediction. 4.1 Correlations on Labeled Data In a multi-label setting, a good model should respect the label correlations. If it is confident about a particular label, then it should also be confident about other positively correlated labels, and conversely less confident about labels that are negatively correlated. Following Zhao et al. (2019), we add (i) a loss function that penalizes the model for making incongruous predictions, i.e. those that are not compatible with the label correlations, and (ii) a component that multiplies the classification logit vector z with the learned label relations encoded as a learned correlation matrix G. This component transforms the raw prediction score of each label to a weighted sum of the prediction scores of the other labels. For each label, these weights are given by its learned correlation with all the other labels. Therefore, the prediction score of each label is affected by the prediction score of the other labels, based on the correlation between label pairs. The final prediction scores are then calculated as shown in the equation: e = σ(z · G) (2) The overall loss then comprises of two loss functions - the prediction loss (LBCE), and the correlation-loss (Lcorr): L(θ) = LBCE(e, y) + Lcorr(e, y′) (3) Where Lcorr computes BCE Loss with continuous representation of the true labels y, using the learned label correlation G: y′ = y · G (4) 4.2 Semi-supervision on Unlabeled Data We also introduce a new semi-supervision idea to exploit label correlations as a regularization signal on unlabeled data. The multi-label annotations used in this work (Rashkin et al., 2018) only comprises a small fraction of the original ROCStories data. There are ∼40k character-line pairs that have open text descriptions of emotional reactions, but these aren’t annotated with multi-label emotions, and therefore were not used in the above supervised emotion prediction tasks. We propose a new semisupervised method over BERT representations that augments the soft-training objective used in Section 4.1 with a label correlation incompatibility loss defined over the unlabeled portion of the ROCStories dataset. We use two loss functions: the loss computed in Equation 3, and the regularization loss on the unlabeled training data (Equation 5). For the semi-supervised training, we use an iterative batch-wise training. In the first step, all weights of the model are minimized by minimizing the loss in Equation 3. In the next step, the learned label correlations are updated using: Lreg = X i,j Gij · d(ei, ej) (5) d(ei, ej) = ( ∥ei −ej∥ for Gij ≥0, ∥ei −ej∥−1 otherwise. This loss helps the model to produce consistent predictions based on the correlations by forcing positively correlated labels to have similar scores and negatively correlated ones to have dissimilar scores. 4690 Model Precision Recall F1 Rashkin et al. (2018) BiLSTM 25.31 33.44 28.81 CNN 24.47 38.87 30.04 Baselines REN 25.30 37.30 30.15 NPN 24.33 40.10 30.29 Paul and Frank (2019)∗ 59.66 51.33 55.18 BERT 65.63 56.91 60.96 Label Embeddings LEAM w/ GloVe 59.81 54.46 57.03 LEAM w/ BERT Features 67.29 54.48 60.22 Adding BERT + Labels as Input 63.05 61.70 62.36 Label Semantics Label Correlation Learned Correlations 56.50 71.47 63.11 Semi-supervision 57.94 76.35 65.88 Table 1: Comparison Results on ROCStories with Plutchik emotion labels 5 Experimental Setup We compare our proposed models with the models presented in Rashkin et al. (2018), the LEAM architecture of Wang et al. (2018), and fine-tuned BERT models (Devlin et al., 2019) for multi-label classification without label semantics. For all the models we report the micro-averaged Precision, Recall and F1 score of the emotion prediction task. Rashkin et al. (2018) modeled character context and pre-trained on free response data to predict the mental states of characters using different encoderdecoder setups, including BiLSTMs, CNNs, the recurrent entity network (REN) (Henaff et al., 2016), and neural process networks (NPN) (Bosselut et al., 2017). Additionally, we compare with the selfattention architecture proposed in (Paul and Frank, 2019), without the knowledge from ConceptNet (Speer and Havasi, 2012) and ELMo embeddings (Peters et al., 2018). To compare against LEAM, we compare it against our proposal of the LEAM+BERT model, where our label attention is computed from BERT representations of each of the label sentences, and words in the input sentence. We also encode the sentence and context separately in a BiLSTM layer as done in Rashkin et al. (2018). We also fine-tuned a BERT-base-uncased model for emotion classification, using xs, xc and Ls as inputs. This beats the other baselines by a significant margin, and is thus a strong new baseline. All our models are evaluated on the emotion reaction prediction task over the eight emotion labels (Plutchik categories) annotated in the Rashkin et al. (2018) dataset. We follow their evaluation setup, and report the final results on the test set. We use pretrained GloVe embeddings (100d) and BERT-baseuncased representations with the LEAM model. The final classifier used in all models is a feedforward layer, followed by a sigmoid. 6 Results Table 1 compares the performance of the baselines with our models that use label semantics. Among the baselines, the fine-tuned BERT base model obtains the best results. Adding label embeddings (section 3.1) to the basic BiLSTM via LEAM model provides substantial increase, more than 27 absolute points in F1. We swapped in BERT features instead of GloVe and found a further 3 point improvement. The BERT baseline beat both of these, but appending label sentences as additional input to fine-tuned BERT increased its performance by 1.4 F1 points. A further increase of 2 points in F1 is achieved by tracking label-label correlations through training loss and inference logits. In addition, adding semi-supervision yields the best gain of more than 4.9 points in F1 over basic BERT, providing a significant advance in state-of-the-art results for emotion inference in this dataset. We also checked the statistical significance of the Semi-supervision model (Table 1) against the Learned Correlations, BERT+Labels as Input, LEAM w/ BERT Features and the BERT model using the Randomization Test (Smucker et al., 2007). This involved comparing the outputs of the Semi-supervision model with the 4691 Sentence Ground Truth LS NoLS And nobody could give him any direction Sad, Disgust Sad, Disgust Sad Surprise Anger She said Mark can come for free Joy, Trust Joy ,Trust Joy, Anticipation Anticipation Anticipation He is relieved that it was not harmed Joy, Surprise Joy, Surprise Fear, Surprise Anticipation Anticipation The marshmallows were totally smooshed Anger, Sad Anger, Sad Joy, Anticipation Table 2: Prediction of labels with label semantics (LS) versus without label semantics (NoLS). Including label semantics helps the model predict semantically labels (high correlations), with high probability. above mentioned models after creating 100,000 random permutations. The Semi-supervision model achieved statistically significant improvement over all the baselines. We did further qualitative analysis of the results on the dev set to better understand the performance of the Semi-supervised Label Semantics model. Compared to base BERT, this model predicts more emotion classes per instance (8839 vs 5024). The wrong predictions of this model have lower probabilities than the correct labels suggesting that classification could be further improved with proper threshold identification. This model is also better at capturing the semantic relations between labels during prediction. This is highlighted through some examples in Table 2. 7 Related Work One of the most widely-used work in narrative understanding introduced ROCStories, a dataset for evaluating story understanding (Mostafazadeh et al., 2016). On a subset of these stories (Rashkin et al., 2018) added annotations for causal links between events in stories and mental states of characters. They model entity state to predict emotional reactions and motivations for causing events occurring in ROCStories. Additionally, they also introduce a new dataset annotation that tracks emotional reactions and motivations of characters in stories. Other work looked at encoding external knowledge sources to augment motivation inference (Paul and Frank, 2019) on the same dataset. Both treat labels as anonymous classes, whereas this work explores modeling the semantics of the emotion labels explicitly. Recent work in multi-label emotion classification has shown that using the relation information between labels can improve performance. (Kurata et al., 2016) use the label co-occurrence information in the final layer of the neural network to improve multi-label classification. Correlationbased label representations have also been used for music classification styles (Zhao et al., 2019). Our work builds on these and adds a similar result showing that label correlations can have significant impact for emotion label inference. 8 Conclusions We present new results for the multi-label emotion classification task of Rashkin et al. (2018), extending previous reported results by 10.7 F1 points (55.1 to 65.8). The multi-label nature of emotion prediction lends itself naturally to use the correlations between the labels themselves. Further, we showed that modeling the class labels as semantic embeddings helped to learn better representations with more meaningful predictions. As with many tasks, BERT provided additional context, but our integration of these label semantics showed significant improvements. We believe these models can improve many other NLP tasks where the class labels carry inherent semantic meaning in their names. Acknowledgments This work was supported in part by the National Science Foundation under Grant IIS-1617969. This material is also based on research that is in part supported by the Air Force Research Laboratory (AFRL), DARPA, for the KAIROS program under agreement number FA8750-19-2-1003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. 4692 References Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, and Yejin Choi. 2017. Simulating action dynamics with neural process networks. arXiv preprint arXiv:1711.05313. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969. Gakuto Kurata, Bing Xiang, and Bowen Zhou. 2016. Improved neural network-based multi-label classification with better initialization leveraging label cooccurrence. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 521–526. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849, San Diego, California. Association for Computational Linguistics. Debjit Paul and Anette Frank. 2019. Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics, volume 1, pages 3671–3681, Minneapolis, Minnesota, USA. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Hannah Rashkin, Antoine Bosselut, Maarten Sap, Kevin Knight, and Yejin Choi. 2018. Modeling naive psychology of characters in simple commonsense stories. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2289– 2299, Melbourne, Australia. Association for Computational Linguistics. Mark D Smucker, James Allan, and Ben Carterette. 2007. A comparison of statistical significance tests for information retrieval evaluation. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 623–632. Robyn Speer and Catherine Havasi. 2012. Representing general relational knowledge in ConceptNet 5. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), pages 3679–3686, Istanbul, Turkey. European Language Resources Association (ELRA). Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018. Joint embedding of words and labels for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2321–2331, Melbourne, Australia. Association for Computational Linguistics. Guangxiang Zhao, Jingjing Xu, Qi Zeng, Xuancheng Ren, and Xu Sun. 2019. Review-driven multi-label music style classification by exploiting style correlations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2884–2891, Minneapolis, Minnesota. Association for Computational Linguistics.
2020
426
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4693–4714 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4693 CraftAssist Instruction Parsing: Semantic Parsing for a Voxel-World Assistant Kavya Srinet∗ Facebook AI Research [email protected] Yacine Jernite∗† HuggingFace yacine@ huggingface.co Jonathan Gray Facebook AI Research [email protected] Arthur Szlam Facebook AI Research [email protected] Abstract We propose a semantic parsing dataset focused on instruction-driven communication with an agent in the game Minecraft1. The dataset consists of 7K human utterances and their corresponding parses. Given proper world state, the parses can be interpreted and executed in game. We report the performance of baseline models, and analyze their successes and failures. 1 Introduction Semantic parsing is used as a component for natural language understanding in human-robot interaction systems (Lauria et al., 2001; Bos and Oka, 2007; Tellex et al., 2011; Matuszek et al., 2013; Thomason et al., 2019), and for virtual assistants (Campagna et al., 2017; Kollar et al., 2018; Campagna et al., 2019). We would like to be able to apply deep learning methods in this space, as recently researchers have shown success with these methods for semantic parsing more generally, e.g. (Dong and Lapata, 2016; Jia and Liang, 2016; Zhong et al., 2017). However, to fully utilize powerful neural network approaches, it is necessary to have large numbers of training examples. In the space of human-robot (or human-assistant) interaction, the publicly available semantic parsing datasets are small. Furthermore, it can be difficult to reproduce the end-to-end results (from utterance to action in the environment) because of the wide variety of robot setups and proprietary nature of personal assistants. In this work, we introduce a new semantic parsing dataset for human-bot interactions. Our “robot” or “assistant” is embodied in the sandbox construc∗Equal contribution †Work done while at Facebook AI Research 1Minecraft features: c⃝Mojang Synergies AB included courtesy of Mojang AB tion game Minecraft2, a popular multiplayer openworld voxel-based crafting game. We also provide the associated platform for executing the logical forms in game. Situating the assistant in Minecraft has several benefits for studying task oriented natural language understanding (NLU). Compared to physical robots, Minecraft allows less technical overhead irrelevant to NLU, such as difficulties with hardware and large scale data collection. On the other hand, our bot has all the basic in-game capabilities of a player, including movement and placing or removing voxels. Thus Minecraft preserves many of the NLU elements of physical robots, such as discussions of navigation and spatial object reference. Working in Minecraft may enable large scale human interaction because of its large player base, in the tens of millions. Furthermore, although Minecraft’s simulation of physics is simplified, the task space is complex. While there are many atomic objects in the game, such as animals and block-types, that require no perceptual modeling, the player also interacts with complex structures made up of collections of voxels such as a “house” or a “hill”. The assistant cannot apprehend them without a perceptual system, creating an ideal test bed for researchers interested in the interactions between perception and language. Our contributions in the paper are as follows: Grammar: We develop a grammar over a set of primitives that comprise a mid-level interface to Minecraft for machine learning agents. Data: We collect 7K crowd-sourced annotations of commands generated independent of our grammar. In addition to the natural language commands and the associated logical forms, we release the tools used to collect these, which allow 2https://minecraft.net/en-us/. We limit ourselves to creative mode for this work 4694 Figure 1: The basic structure of the ACTION SEQUENCE branch of the assistant’s grammar. The gold octagon is an internal node whose children are ordered, blue rectangles are regular internal nodes, and green rectangles are categorical leaf nodes. Not all combinations of children of ACTION are possible, see the full list of possible productions (and the productions for PUT MEMORY and GET MEMORY) in the Appendix C. crowd-workers to efficiently and accurately annotate parses. Models: We show the results of several neural semantic parsing models trained on our data. Execution: Finally, we also make available the code to execute logical forms in the game, allowing the reproduction of end-to-end results. This also opens the door to using the data for reinforcement and imitation learning with language. We also provide access to an interactive bot using these models for parsing3. 2 The Assistant Grammar In this section we summarize a grammar for generating logical forms that can be interpreted into programs for the agent architecture described in (Gray et al., 2019). 2.1 Agent Action Space The assistant’s basic functions include moving, and placing and destroying blocks. Supporting these basic functions are methods for control flow and memory manipulation. Basic action commands: The assistant can MOVE to a specified location; or DANCE with a specified sequence of steps. It can BUILD an object from a known schematic (or by making a copy of a block-object in the world) at a given location, or DESTROY an existing object. It can DIG a hole of a given shape at a specified location, or FILL one up. The agent can also be asked to complete a partially built structure however it sees fit by FREEBUILD. 3Instructions can be found at http://craftassist. io/acl2020demo, requires a Minecraft license and client. Figure 2: The basic structure of internal nodes in the assistant’s grammar. Blue rectangles are internal nodes, green rectangles are categorical leaf nodes, and red ovals are span nodes. Finally, it can SPAWN a mob (an animate NPC in Minecraft). Control commands: Additionally, the agent can STOP or RESUME an action, or UNDO the result of a recent command. Furthermore, the assistant can LOOP given a task and a stop-condition. Finally, it needs to be able to understand when a sentence does not correspond to any of the above mentioned actions, and map it to a NOOP. Memory interface: Finally, the assistant can interact with its SQL based memory. It can place or update rows or cells, for example for tagging objects. This can be considered a basic version of the self-improvement capabilities in (Kollar et al., 2013; Thomason et al., 2015; Wang et al., 2016, 2017). It can retrieve information for question answering similar to the VQA in (Yi et al., 2018). 2.2 Logical Forms The focus of this paper is an intermediate representation that allows natural language to be interpreted into programs over the basic actions from the previous section. The logical forms (represented as trees) making up this representation consist of three basic types of nodes: “internal nodes” that can have children, “categorical” (leaf) nodes that belong to a fixed set of possibilities, and “span” nodes that point to a region of text in the natural language utterance. The full grammar is shown in the Appendix C; and a partial schematic representation is shown in Figures 1 and 2. In the paragraphs below, we give more detail about some of the kinds of nodes in the grammar. 4695 Figure 3: A representation of the annotation process using the web-based annotation tool described in Section 3.1.3. The colors of the boxes correspond to annotation tasks. The highlighting on the text in the header of the later tasks is provided by a previous annotator. We show more detailed screenshots of how the tool works in Appendix B.3 . We emphasize that this is an intermediate representation. The logical forms do not come with any mechanism for generating language, and nodes do not correspond in any simple way with words. On the other hand, the logical forms do not encode all of the information necessary for execution without the use of an interpreter that can access the assistant’s memory and the Minecraft world state. Internal nodes: Internal nodes are nodes that allow recursion; although most do not require it. They can correspond to top-level actions, for example BUILD; in which case they would just be an “action” node with “action type” build; see Figure 1. They can also correspond to arguments to top-level actions, for example a “reference object”, which specifies an object that has a spatial location. Internal nodes are not generally required to have children; it is the job of the interpreter to deal with under-specified programs like a BUILD with no arguments. In addition to the various LOCATION, REFERENCE OBJECT, SCHEMATIC, and REPEAT nodes which can be found at various levels, another notable sub-tree is the action’s STOP CONDITION, which essentially allows the agent to understand “while” loops (for example: “dig down until you hit the bedrock” or “follow me”). Leaf nodes: Eventually, arguments have to be specified in terms of values which correspond to (fixed) agent primitives. We call these nodes categorical leaves (green rectangles in Figures 1 and 2). As mentioned above, an “action” internal node has a categorical leaf child which specifies the action type. There are also repeat type nodes similarly specifying a kind of loop for example in the REPEAT sub-tree corresponding to ”make three houses” the repeat type for specifies a “for” loop). There are also location type nodes specifying if a location is determined by a reference object, a set of coordinates, etc.; relative direction nodes that have values like “left” or “right”. The complete list of categorical nodes is given in the Appendix C. However, there are limits to what we can represent with a pre-specified set of hard-coded primitives, especially if we want our agent to be able to learn new concepts or new values. Additionally, even when there is a pre-specified agent primitive, mapping some parts of the command to a specific value might be better left to an external module (e.g. mapping a number string to an integer value). For these reasons, we also have span leaves (red ovals in Figure 2). For example, in the parse for the command “Make three oak wood houses to the left of the dark grey church.”, the SCHEMATIC (an internal node) might be specified by the command sub-string corresponding to its name by the span“houses” and the requested block type by the span “oak wood”. The range of the for loop is specified by the REPEAT’s for value (“three”), and the REFERENCE OBJECT for the location is denoted in the command by its generic name and specific color with spans “church” and “dark grey”. The root: The root of the tree has three productions: PUT MEMORY, and GET MEMORY, corre4696 Figure 4: Frequency of each action type in the different data collection schemes described in Section 3.1. sponding to writing to memory and reading from memory; and HUMAN GIVE COMMAND which also produces an ACTION SEQUENCE, which is a special internal node whose children are ordered; multiple children correspond to an ordered sequence of commands (“build a house and then a tower”). In Figures 1 and 2 we show a schematic representation for an ACTION SEQUENCE. 3 The CAIP Dataset This paper introduces the CraftAssist Instruction Parsing (CAIP) dataset of English-language commands and their associated logical forms (see Appendix D for examples and Appendix C for a full grammar specification). 3.1 Collected Data We collected natural language commands written by crowd-sourced workers in a variety of settings. The complete list of instructions given to crowdworkers in different settings, as well as step-by-step screen-shot of the annotation tool, are provided in the Appendix B. The basic data cleanup is described in Appendix A. 3.1.1 Image and Text Prompts We presented crowd-sourced workers with a description of the capabilities of an assistant bot in Figure 5: Histograms showing distribution over number of nodes in a logical form (top) and utterance length in words (bottom) for each data type. Prompts averages 6.74 nodes per logical form, 7.32 words per utterance, and interactive averages 4.89, 3.42 respectively a creative virtual environment (which matches the set of allowed actions in the grammar), and (optionally) some images of a bot in a game environment. They were then asked to provide examples of commands that they might issue to an in-game assistant. We refer to these instructions as “prompts” in the rest of this paper. 3.1.2 Interactive Gameplay We asked crowd-workers to play creative-mode Minecraft with our assistant bot, and they were instructed to use the in-game chat to direct the bot as they chose. The game sessions were capped at 10 minutes and players in this setting had no prior knowledge of the bot’s capabilities or the grammar. We refer to these instructions as “Interactive” in the rest of this paper. The instructions of this setting are included in Appendix B.2. 3.1.3 Annotation Tool Both prompts and interactive instructions come without a reference logical form and need to be annotated. To facilitate this process, we designed a multi-step web-based tool which asks users a series of multiple-choice questions to determine the semantic content of a sentence. The responses to some questions will prompt other more specific questions, in a process that mirrors the hierarchical structure of the grammar. The responses are then processed to produce the complete logical form. This allows crowd-workers to provide annotations with no knowledge of the specifics of the grammar described above. A pictorial representation of the annotation process is shown in Figure 3 and 4697 a more detailed explanation of the process along with screen-shots of the tool is given in Appendix B.3. We used a small set of tasks that were representative of the actual annotations to select skilled crowd-sourced workers by manually verifying the accuracy of responses on these. Each utterance in our collection of prompts and interactive chats was shown to three different qualified annotators and we included the utterance and logical form in the dataset only if at least 2 out of 3 qualified annotators agreed on the logical form output. The total number of utterances sent to turkers was 6,775. Out of these, 6,693 had at least 2/3 agreements on the logical form and were kept. Of these, 2,872 had 3/3 agreements. The final dataset has 4,532 annotated instructions from the prompts setting (Section 3.1.1), and 2,161 from interactive play (Section 3.1.2). The exact instructions shown to Turkers in the annotation tools are reproduced in Figures 9 and 11 in supplementary. As in (Yih et al., 2016), we have found that careful design of the annotation tool leads to significant improvements in efficiency and accuracy. In particular, we re-affirm the conclusion from (Yih et al., 2016) that having each worker do one task (e.g. labeling a single node in the tree) makes annotation easier for workers. 3.2 Dataset Statistics 3.2.1 Action Frequencies Since the different data collection settings described in Section 3.1 imposed different constraints and biases on the crowd-sourced workers, the distribution of actions in each subset of data is therefore different. The action frequencies of each subset are shown in Figure 4. 3.2.2 Grammar coverage Some crowd-sourced commands describe an action that is outside the scope of the grammar. To account for this, users of the annotation tool are able to mark that a sentence is a command to perform an action that is not covered by our grammar yet. The resulting trees are labeled as OTHERACTION, and their frequency in each dataset in shown in Figure 4. Annotators still have the option to label other nodes in the tree, such as the action’s LOCATION or REFERENCE OBJECT. In both the prompts and interactive data, OTHERACTION amounted to approximately 14% of the data. 3.2.3 Quantitative analysis For each of our data types, Figure 5 show a histogram of sentence length and number of nodes. On an average interactive data has shorter sentences and smaller trees. 3.2.4 Qualitative Linguistic Style We show the linguistic styles and choice of words of the data sources by displaying the surface forms of a set of trees. We randomly picked trees of size (number of nodes) 7 that appear in both data sources, and then for the same tree structure, we looked at the utterances corresponding to that tree. We show some representative examples in table 1. We show more examples of the data in the Appendix D 4 Related Work There have been a number of datasets of natural language paired with logical forms to evaluate semantic parsing approaches, e.g. (Price, 1990; Tang and Mooney, 2001; Cai and Yates, 2013; Wang et al., 2015; Zhong et al., 2017). The dataset presented in this work is an order of magnitude larger than those in (Price, 1990; Tang and Mooney, 2001; Cai and Yates, 2013) and is similar in scale to the datasets in (Wang et al., 2015), but smaller than (Zhong et al., 2017). In addition to mapping natural language to logical forms, our dataset connects both of these to a dynamic environment. In (Lauria et al., 2001; Bos and Oka, 2007; Tellex et al., 2011; Matuszek et al., 2013; Thomason et al., 2019) semantic parsing has been used for interpreting natural language commands for robots. In our paper, the “robot” is embodied in the Minecraft game instead of in the physical world. In (Boye et al., 2006) semantic parsing has been used for spoken dialogue with an embodied character in a 3-D world with pattern matching and rewriting phases. In our work, the user along with the assistant is embodied in game and instructs using language. We go from language to logical forms end-to-end with no pattern match necessary. Semantic parsing in a voxel-world recalls (Wang et al., 2017), where the authors describe a method for building up a programming language from a small core via interactions with players. We demonstrate the results of several neural parsing models on our dataset. In particular, we show the results of a re-implementation of (Dong 4698 Prompts bot move to where the tree is dig a large size hole to put these waste particles into the hole please build a sphere on that location hey bot can you dig a 5 by 5 hole for me Interactive find tree dig large hole build a sphere over here dig a 5 x 5 hole Table 1: Choice of words across different data sources for the same logical form (per column). and Lapata, 2016) adapted to our grammar, and a straightforward fine-tuned BERT model (Devlin et al., 2018). There have been several other papers proposing neural architectures for semantic parsing, for example (Jia and Liang, 2016; Zhong et al., 2017; Wang et al., 2018; Hwang et al., 2019); in particular (Hwang et al., 2019) uses a BERT based model. In those papers, as in this one, the models are trained with full supervision of the mapping from natural language to logical forms, without considering the results of executing the logical form (in this case, the effect on the environment of executing the actions denoted by the logical form). There has been progress towards “weakly supervised” semantic parsing (Artzi and Zettlemoyer, 2013; Liang et al., 2016; Guu et al., 2017) where the logical forms are hidden variables, and the only supervision given is the result of executing the logical form. There are now approaches that have shown promise without even passing through (discrete) logical forms at all (Riedel et al., 2016; Neelakantan et al., 2016). We hope that the dataset introduced here, which has supervision at the level of the logical forms, but whose underlying grammar and environment can be used to generate essentially infinite weakly supervised or execution rewards, will also be useful for studying these models. Minecraft, especially via the MALMO project (Johnson et al., 2016) has been used as a base environment for several machine learning papers. It is often used as a testbed for reinforcement learning (RL) (Shu et al., 2017; Udagawa et al., 2016; Alaniz, 2018; Oh et al., 2016; Tessler et al., 2017). In these works, the agent is trained to complete tasks by issuing low level actions (as opposed to our higher level primitives) and receiving a reward on success. Others have collected large-scale datasets for RL and imitation learning (Guss et al., 2019a,b). Some of these works (e.g. (Oh et al., 2017)) do consider simplified, templated language as a method for composably specifying tasks, but training an RL agent to execute the scripted primitives in our grammar is already nontrivial, and so the task space and language in those works is more constrained than what we use here. Nevertheless, our work may be useful to researchers interested in RL (or imitation): using our grammar and executing in game can supply (hard) tasks and descriptions, and demonstrations. Another set of works (Kitaev and Klein, 2017; Yi et al., 2018) have used Minecraft for visual question answering with logical forms. Our work extends these to interactions with the environment. Finally, (Allison et al., 2018) is a more focused study on how a human might interact with a Minecraft agent; our collection of free generations (see 3.1.1) includes annotated examples from similar studies of players interacting with a player pretending to be a bot. 5 Baseline Models In order to assess the challenges of the dataset, we implement two models which learn to read a sentence and output a logical form by formulating the problem as a sequence-to-tree and a sequenceto-sequence prediction task respectively. 5.1 Sequence to Tree Model Our first model adapts the Seq2Tree approach of (Dong and Lapata, 2016) to our grammar. In short, a bidirectional RNN encodes the input sentence into a sequence of vectors, and a decoder recursively predicts the tree representation of the logical form, starting at the root and predicting all of the children of each node based on its parent and left siblings and input representation. Sentence Encoder and Attention: We use a bidirectional GRU encoder (Cho et al., 2014) which encodes a sentence of length T s = (w1, . . . wT ) into a sequence of T dimension d vectors: fGRU(s) = (h1, . . . , hT ) ∈Rd×T Tree Decoder: The decoder starts at the root, computes its node representation and predicts the state of its children, then recursively computes the representations of the predicted descendants. Similarly to Seq2Tree, a node representation rn is computed based on its ancestors and left siblings. We also found it useful to condition each of the node 4699 representation on the encoder output explicitly for each node. Thus, we compute the representation rnt and recurrent hidden state gnt for node nt as: rnt = attn(vnt + gnt−1, (h1, . . . , hT ); Mσ) (1) gnt = frec(gnt−1, (v′nt + rnt)) (2) Where attn is multi-head attention, Mσ ∈Rd×d×K is a tree-wise parameter, frec is the GRU recurrence function, and v′nt is a node parameter (one per category for categorical nodes), and nt−1 denotes either the last predicted left sibling if there is one or the parent node otherwise. Prediction Heads: Finally, the decoder uses the computed node representations to predict the state of each of the internal, categorical, and span nodes in the grammar. We denote each of these sets by I, C and S respectively, and the full set of nodes as N = I ∪C ∪S. First, each node in N is either active or inactive in a specific logical form. We denote the state of a node n by an ∈{0, 1}. All the descendants of an inactive internal node n ∈I are considered to be inactive. Additionally, each categorical node n ∈C has a set of possible values Cn; its value in a specific logical form is denoted by the category label cn ∈{1, . . . , |Cn|}. Finally, active span nodes n ∈S for a sentence of length T have a start and end index (sn, en) ∈{1, . . . , T}2. We compute, the representations rn of the nodes as outlined above, then obtain the probabilities of each of the labels by: ∀n ∈N, p(an) = σ(⟨rn, pn⟩) (3) ∀n ∈C, p(cn) = softmax(Mc nrn) (4) ∀n ∈S, p(sn) = softmax(rT nMs n(h1, . . . , hT )) p(en) = softmax(rT nMe n(h1, . . . , hT )) (5) where the following are model parameters: ∀n ∈N, pn ∈Rd ∀n ∈C, Mc n ∈Rd×d ∀n ∈S, (Ms n, Me n)n ∈Rd×d×2 Let us note the parent of a node n as π(n). Given Equations 3 to 5, the log-likelihood of a tree with states (a, c, s, e) given a sentence s is then: L = X n∈N aπ(n) log(p(an)) + X n∈C an log(p(cn)) + X n∈S an  log(p(sn)) + log(p(en))  (6) Overall, our implementation differs from the original Seq2Tree in three ways, which we found lead to better performance in our setting. First, we replace single-head with multi-head attention. Secondly, the cross-attention between the decoder and attention is conditioned on both the node embedding and previous recurrent state. Finally, we replace the categorical prediction of the next node by a binary prediction problem: since we know which nodes are eligible as the children of a specific node (see Figures 1 and 2), we find that this enforces a stronger prior. We refer to this modified implementation as SentenceRec. 5.2 Sequence to Sequence Model Our second approach treats the problem of predicting the logical form as a general sequence-tosequence (Seq2Seq) task; such approaches have been used in semantic parsing in e.g. (Jia and Liang, 2016; Wang et al., 2018). We take the approach of (Jia and Liang, 2016) and linearize the output trees: the target sequence corresponds to a Depth First Search walk through the tree representation of the logical form. More specifically the model needs to predict, in DFS order, a sequence of tokens corresponding to opening and closing internal nodes, categorical leaves and their value, and span leaves with start and end sequences. In practice, we let the model predict span nodes in two steps: first predict the presence of the node, then predict the span value, using the same prediction heads as for the SentenceRec model (see Equation 5 above). With this formalism, the logical form for e.g. “build a large blue dome on top of the walls” will be: (ACTION_TYPE:BUILD, OPEN:SCHEMATIC, HAS_SIZE, SIZE_SPAN-(2,2), HAS_COLOR, COLOR_SPAN-(3,3), HAS_NAME, NAME_SPAN-(4,4), CLOSE:SCHEMATIC, OPEN:LOCATION, LOC_TYPE:REF_OBJECT, REL_DIR:UP, OPEN:REF_OBJECT, HAS_NAME, NAME_SPAN-(9,9), CLOSE:REF_OBJECT, CLOSE:LOCATION) We train a BERT encoder-decoder architecture on this sequence transduction task, where the training loss is a convex combination of the output sequence log-likelihood and the span cross-entropy loss. Pre-trained Sentence Encoder: Finally, recent work has shown that using sentence encoder that has been pre-trained on large-scale language modeling tasks can lead to substantial performance 4700 Acc. (std) Inter. Prompts SentRec 50.08 (2.97) 64.17 42.49 DistBERT+SentRec 59.58 (3.49) 76.0 50.74 DistBERT+Seq2Seq 60.74 (3.58) 76.06 52.49 Table 2: Average accuracy over a test set of 650 Prompts + 350 Interactive. improvements (Song et al., 2019).We use the pretrained DistilBERT model of (Sanh et al., 2019) as the encoder of our sequence-to-sequence model, and also propose a version of the SentenceRec which uses it to replace the bidirectional RNN. 6 Experiments In this Section, we evaluate the performance of our baseline models on the proposed dataset. Training Data: The CAIP datasets consists in a total of 6693 annotated instruction-parse pairs. In order for our models to make the most of this data while keeping the evaluation statistically significant, we create 5 different train/test splits of the data and report the average performance of models trained and evaluated on each of them. In each case, we hold out 650 examples from Prompts and 350 from Interactive for testing, and use the remaining 5693 as the training set. Modeling Choices: For the end-to-end trained SentenceRec model, we use a 2-layer GRU sentence encoder and all hidden layers have dimension d = 256. We use pre-trained word embeddings computed with FastText with subword information (Bojanowski et al., 2017). The decoder uses a GRU recurrent cell and 4-headed attention. The Seq2Seq model uses a variant of the bert-base-uncased provided in the Transformer library 4 with 6 encoding and decoding layers. For the Seq2Seq model and the SentenceRec with pre-trained encoder, we use the distilbert-base-uncased encoder from the same library. The Seq2Seq model uses beam search decoding with 15 beams. All models are trained with the Adam optimizer with quadratic learning rate decay. We provide our model and training code along with the dataset for reproducibility purposes. Overview of Results: Table 2 provides the average accuracy (computed as the proportion of logical forms that are entirely accurately predicted) and standard deviation across all five splits, as well as the contributions of the Interactive and Prompts 4https://github.com/huggingface/transformers N=2 N=5 N=15 Joint 67.7 72.76 75.7 Interactive 83.83 88.34 90.63 Prompts 59.02 64.37 67.66 Table 3: Recall at N for the Seq2Seq model beam search. Figure 6: We show nodes in the grammar which are most often wrongly predicted, with false positive (+) and false negative counts (-). data. The first observation is that using a pretrained encoder leads to a significant improvement, with a 10 point boost in accuracy. On the other hand, while the Seq2Seq model is more general and makes less use of our prior knowledge of the structure of logical forms, it does marginally better than the recursive prediction model (although within one standard deviation). Secondly, although the models are trained on more data provided from the Prompts setting than from Interactive play, they all do better on the latter. This is consistent with previous observations on the dataset statistics in Section 3.2.3 which find that players tend to give shorter instructions with simpler execution. Finally, we note that one of the advantages of having the parser be part of an interactive agent is that it can ask the player for clarification and adapt its behavior when it is made aware of a mistake (Yao et al., 2019). In that spirit, Table 3 provides Recall at N numbers, which represent how often the true parse is within the N first elements of the beam after beam search. Recall at 2 does provide a consistent boost over the accuracy of a single prediction, but even the full size 15 beam does not always contain the right logical form. Error Analysis: We further investigate the errors of the Seq2seq models on one of the data splits. We find that the model still struggles with span predictions: out of 363 errors, 125 only make mistakes on spans (and 199 get the tree structure right 4701 but make mistakes on leaves). Figure 6 shows the nodes which are most commonly mistaken, with the number of false positive and false negatives out of these 363 mistakes. Unsurprisingly, the most commonly confused span leaf is “has tag”, which we use as a miscellaneous marker. Aside from that “has tag” however, the span mistakes are evenly spread over all other leaves. The next most common source of mistakes comes from the model struggling between identifying whether a provided location corresponds to the target of the action or to the reference object, and to identify instructions which imply a repetition. The former indicates a lack of compositionality in the input representation: the model correctly identifies that a location is mentioned, but fails to identify its context. Repeat conditions on the other hand challenge the model due to the wide variety of possible stop condition, a problem we suggest future work pay special attention to. 7 Conclusion In this work, we have described a grammar over a mid-level interface for a Minecraft assistant. We then discussed the creation of a dataset of natural language utterances with associated logical forms over this grammar that can be executed in-game. Finally, we showed the results of using this new dataset to train several neural models for parsing natural language instructions. Consistent with recent works, we find that BERT pre-trained models do better than models trained from scratch, but there is much space for improvement. We believe this data will be useful to researchers studying semantic parsing, especially interactive semantic parsing, human-robot interaction, and even imitation and reinforcement learning. The code, dataset and annotation tools described in the paper have been open-sourced 5. 5https://github.com/facebookresearch/ craftassist/tree/master/acl2020_ submission 4702 References Stephan Alaniz. 2018. Deep reinforcement learning with model learning and monte carlo tree search in minecraft. arXiv preprint arXiv:1803.08456. Fraser Allison, Ewa Luger, and Katja Hofmann. 2018. How players speak to an intelligent game character using natural language messages. Transactions of the Digital Games Research Association, 4(2). Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1:49–62. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146. Johan Bos and Tetsushi Oka. 2007. A spoken language interface with a mobile robot. Artificial Life and Robotics, 11(1):42–47. Johan Boye, Joakim Gustafson, and Mats Wir´en. 2006. Robust spoken language understanding in a computer game. Speech Commun., 48:335–353. Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 423–433. Giovanni Campagna, Rakesh Ramesh, Silei Xu, Michael Fischer, and Monica S Lam. 2017. Almond: The architecture of an open, crowdsourced, privacy-preserving, programmable virtual assistant. In Proceedings of the 26th International Conference on World Wide Web, pages 341–350. Giovanni Campagna, Silei Xu, Mehrad Moradshahi, Richard Socher, and Monica S Lam. 2019. Genie: A generator of natural language semantic parsers for virtual assistant commands. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 394– 410. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1724– 1734. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. arXiv preprint arXiv:1601.01280. Jonathan Gray, Kavya Srinet, Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, Siddharth Goyal, C Lawrence Zitnick, and Arthur Szlam. 2019. Craftassist: A framework for dialogue-enabled interactive agents. arXiv preprint arXiv:1907.08584. William H Guss, Cayden Codel, Katja Hofmann, Brandon Houghton, Noboru Kuno, Stephanie Milani, Sharada Mohanty, Diego Perez Liebana, Ruslan Salakhutdinov, Nicholay Topin, et al. 2019a. The minerl competition on sample efficient reinforcement learning using human priors. arXiv preprint arXiv:1904.10079. William H Guss, Brandon Houghton, Nicholay Topin, Phillip Wang, Cayden Codel, Manuela Veloso, and Ruslan Salakhutdinov. 2019b. Minerl: a large-scale dataset of minecraft demonstrations. arXiv preprint arXiv:1907.13440. Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, and Percy Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. arXiv preprint arXiv:1704.07926. Wonseok Hwang, Jinyeung Yim, Seunghyun Park, and Minjoon Seo. 2019. A comprehensive exploration on wikisql with table-aware word contextualization. arXiv preprint arXiv:1902.01069. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. arXiv preprint arXiv:1606.03622. Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. 2016. The malmo platform for artificial intelligence experimentation. In IJCAI, pages 4246–4247. Nikita Kitaev and Dan Klein. 2017. Where is misty? interpreting spatial descriptors by modeling regions in space. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 157–166. Thomas Kollar, Danielle Berry, Lauren Stuart, Karolina Owczarzak, Tagyoung Chung, Lambert Mathias, Michael Kayser, Bradford Snow, and Spyros Matsoukas. 2018. The alexa meaning representation language. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), volume 3, pages 177–184. Thomas Kollar, Jayant Krishnamurthy, and Grant P Strimel. 2013. Toward interactive grounded language acqusition. In Robotics: Science and systems, volume 1, pages 721–732. 4703 Stanislao Lauria, Guido Bugmann, Theocharis Kyriacou, Johan Bos, and A Klein. 2001. Training personal robots using natural language instruction. IEEE Intelligent systems, 16(5):38–45. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. arXiv preprint arXiv:1611.00020. Cynthia Matuszek, Evan Herbst, Luke Zettlemoyer, and Dieter Fox. 2013. Learning to parse natural language commands to a robot control system. In Experimental Robotics, pages 403–415. Springer. Arvind Neelakantan, Quoc V Le, Martin Abadi, Andrew McCallum, and Dario Amodei. 2016. Learning a natural language interface with neural programmer. arXiv preprint arXiv:1611.08945. Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. 2016. Control of memory, active perception, and action in minecraft. arXiv preprint arXiv:1605.09128. Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. 2017. Zero-shot task generalization with multi-task deep reinforcement learning. arXiv preprint arXiv:1706.05064. Patti J Price. 1990. Evaluation of spoken language systems: The atis domain. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. Sebastian Riedel, Matko Bosnjak, and Tim Rockt¨aschel. 2016. Programming with a differentiable forth interpreter. CoRR, abs/1605.06640. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108. Tianmin Shu, Caiming Xiong, and Richard Socher. 2017. Hierarchical and interpretable skill acquisition in multi-task reinforcement learning. arXiv preprint arXiv:1712.07294. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. MASS: masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 5926–5936. Lappoon R Tang and Raymond J Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In European Conference on Machine Learning, pages 466–477. Springer. Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Twenty-Fifth AAAI Conference on Artificial Intelligence. Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. 2017. A deep hierarchical approach to lifelong learning in minecraft. In AAAI, volume 3, page 6. Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J Mooney. 2019. Improving grounded natural language understanding through human-robot dialog. arXiv preprint arXiv:1903.00122. Jesse Thomason, Shiqi Zhang, Raymond J Mooney, and Peter Stone. 2015. Learning to interpret natural language commands through human-robot dialog. In Twenty-Fourth International Joint Conference on Artificial Intelligence. Hiroto Udagawa, Tarun Narasimhan, and Shim-Young Lee. 2016. Fighting zombies in minecraft with deep reinforcement learning. Technical report, Technical report, Stanford University. Sida I Wang, Samuel Ginn, Percy Liang, and Christoper D Manning. 2017. Naturalizing a programming language via interactive learning. pages 929–938. Sida I Wang, Percy Liang, and Christopher D Manning. 2016. Learning language games through interaction. arXiv preprint arXiv:1606.02447. Wenlu Wang, Yingtao Tian, Hongyu Xiong, Haixun Wang, and Wei-Shinn Ku. 2018. A transferlearnable natural language interface for databases. arXiv preprint arXiv:1809.02649. Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1332–1342. Ziyu Yao, Yu Su, Huan Sun, and Wen-tau Yih. 2019. Model-based interactive semantic parsing: A unified framework and A text-to-sql case study. CoRR, abs/1910.05389. Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Josh Tenenbaum. 2018. Neural-symbolic vqa: Disentangling reasoning from vision and language understanding. In Advances in Neural Information Processing Systems, pages 1039–1050. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201–206. 4704 Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103. 4705 A Basic Data Cleanup We threw away all duplicate commands in the dataset and only got annotations for unique commands from each data source. We performed post-processing on the text by first inserting spaces between any special character (brackets, “,”, “x”) followed by alphanumeric character. For example “make a 5x5 hole” was post-processed to “make a 5 x 5 hole” and “go to (1,2,3)” to “go to ( 1 , 2 , 3 )”. We then used the tokenizer from spaCy 6 to tokenize every word in the sentence. When constructing logical forms: we threw away any keys with values : ‘None’ , ‘Other’ or ‘Not Specified’ . Our tool allows workers to select these options when annotating. We skipped stopwords and articles like ‘a’ , ‘an’ etc when constructing spans of children. We reordered the indices of words in spans to always be from left to right (regardless of which order the words were selected in the sentence when annotating). For commands annotated as “composite” (meaning a command that requires multiple actions), we set up another tool where we asked crowd-sourced workers to split the composite command into individual commands. Each of these commands were then sent to our web-based tool described in 3.1.3 and the results were combined together under the key: “action sequence” by preserving the order. So in the sentence: “jump twice and then come to me”, we first have the sentence split into commands: “jump twice” and “come to me” and then combine their logical forms together under “action sequence” so we first have the “Dance” action followed by “Move” action. This tool is described in Section B.4. B Crowd-sourced task and tools instructions This section covers details of each crowd sourced task we’ve described in the paper along with screenshots of the web-based annotation tool described in 3.1. B.1 Image and Text Prompts In this task we showed a screenshot of the bot and environment to the crowd-sourced workers and asked them to give us free-form commands for the assistant. The instructions shown to workers are shown in 7. 6https://spacy.io/ Figure 7: The task instructions shown to crowdsourced workers for the Image and text prompts task Figure 8: The task instructions shown to crowdsourced workers for the interactive game play B.2 Interactive Gameplay In this task we had crowd-sourced workers play with our bot and interact with it using in-game chat. The instructions shown to workers are shown in 8. B.3 Annotation tool The web based annotation tool has two subparts: Tool a and Tool b. B.3.1 Tool a This tool is the first tool in the process of annotation and asks crowd-sourced workers to help determine the intent (dialogue type or action type) of the sentence and highlight other pieces of the text based on the choices they made for the intent. (For example: if the intent was “Build” they are asked to select words for the thing to be built and the location respectively.) We also provided helpful tooltips with examples at every step of the process. The instructions shown to workers for Tool a are shown in figure 9 and step by step annotation process is shown in figure 10 4706 Figure 9: The task instructions shown to crowdsourced workers for the annotation Tool a B.3.2 Tool b After we determine the intent from Tool a and get highlighted span of words for respective children of the intent, we use this tool. This is the second tool in the annotation process and asks crowdsourced workers to help determine the fin-grained properties of specific entities of the action or dialogue. Note that we already got the words representing these, highlighted in B.3.1. For example : the words “ big bright house” are highlighted in the sentence “destroy the big bright house by the tree ” as an outcome of Tool a. The questionnaire changes dynamically based on the choices the workers make at every step of the tool. We provided helpful tooltips with examples at every step of the annotation process. Using the output of Tool a and Tool b, we can successfully construct the entire logical form for a given sentence. The instructions shown to workers for Tool b are shown in Figure 11 and step by step annotation process for annotating properties of “location” in a “Move” action is shown in Figure 12 and annotating “reference object” in “Destroy” action is shown in Figure 13 B.4 Tool for composite commands This tool is meant for “composite” commands (commands that include multiple actions) and asks the users to split a command into multiple individual commands. The instruction for this are shown in figure 14. Once we get the split, we send out each command to annotation tool described in Section B.3 Figure 10: The step by step screenshot of annotations process for the command: “build three sets of bookshelves in front of me .” in Tool a 4707 Figure 11: The task instructions shown to crowdsourced workers for the annotation Tool b Figure 12: The step by step screenshot of annotating properties of highlighted words for“location” in a “Move” action. Figure 13: The step by step screenshot of annotating properties of highlighted words for“reference object” in a “Destroy” action. Figure 14: The task instructions shown to crowdsourced workers for splitting composite commands 4708 C Action Tree structure This section describes the details of logical form of each action. We support three dialogue types: HUMAN GIVE COMMAND, GET MEMORY and PUT MEMORY. The logical form for actions has been pictorially represented in Figures: 1 and 2 We support the following actions in our dataset : Build, Copy, Dance, Spawn, Resume, Fill, Destroy, Move, Undo, Stop, Dig and FreeBuild. A lot of the actions use “location” and “reference object” as children in their logical forms. To make the logical forms more presentable, we have shown the detailed representation of a “reference object” (reused in action trees using the variable: “REF OBJECT”) in Figure 15 and the representation of “location” (reused in action trees using the variable: “LOCATION”) in figure 16. The representations of actions refer to these variable names in their trees. REF_OBJECT : The recursion depth of REF_OBJECT in LOCATION was never greater than 1 in the data. So a REF_OBJECT can have a LOCATION that has a REF_OBJECT that has a LOCATION (and the final location will be one of : COORDINATES / AGENT_POS / SPEAKER_POS / SPEAKER_LOOK). "reference_object" : { "repeat" : { "repeat_key" : ’FOR’ / ’ALL’, "repeat_count" : span, "repeat_dir" : ’LEFT’ / ’RIGHT’ / ’UP’/ ’DOWN’ / ’FRONT’ / ’BACK’ / ’AROUND’} "has_name" : span, "has_colour" : span, "has_size" : span, "has_tag": span, "has_length": span, "has_width": span, "has_height": span, "contains_coreference" : "yes", LOCATION } Figure 15: Logical form of a reference object child LOCATION: "location" : { "location_type" : COORDINATES / REFERENCE_OBJECT / AGENT_POS / SPEAKER_POS / SPEAKER_LOOK "steps" : span, "contains_coreference" : "yes", "relative_direction" : ’LEFT’ / ’RIGHT’ / ’UP’/ ’DOWN’ / ’FRONT’ / ’BACK’ / ’AWAY’ / ’INSIDE’ / ’NEAR’ / ’OUTSIDE’ / ’BETWEEN’, "coordinates" : span, (present if "location_type" is ’COORDINATES), REF_OBJECT (present if "location_type" is ’REFERENCE_OBJECT’) } Figure 16: Logical form of a location child The detailed action tree for each action and dialogue type has been presented in the following subsections. Figure 17 shows an example for a BUILD action. 0 1 2 3 4 5 6 "Make three oak wood houses to the 7 8 9 10 11 12 left of the dark grey church." {"dialogue_type" : "HUMAN_GIVE_COMMAND", "action_sequence" : [ { "action_type" : "BUILD", "schematic": { "has_block_type": [0, [2, 3]], "has_name": [0, [4, 4]], "repeat": { "repeat_key": "FOR", "repeat_count": [1, 1] }}, "location": { "relative_direction": "LEFT", "location_type": "REFERENCE_OBJECT", "reference_object": { "has_colour_": [0, [10, 11]], "has_name_": [0, [12, 12]] } }}]} Figure 17: An example logical form. The spans are indexed as : [sentence number, [starting word index, ending word index]]. sentence number is 0 for the most recent sentence spoken in a dialogue and is 0 in our dataset since we support one-turn dialogues as of now. C.1 Build Action This is the action to Build a schematic at an optional location. The Build logical form is shown in 18 . C.2 Copy Action This is the action to copy a block object to an optional location. The copy action is represented as a ”Build” with an optional ”reference object” . The logical form is shown in 19. C.3 Spawn Action This action indicates that the specified object should be spawned in the environment. The logical form is shown in: 20 C.4 Fill Action This action states that a hole / negative shape at an optional location needs to be filled up. The logical form is explained in : 21 C.5 Destroy Action This action indicates the intent to destroy a block object at an optional location. The logical form is shown in: 22 Destroy action can have one of the following as the child: • reference object 4709 { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’BUILD’, LOCATION, "schematic" : { "repeat" : { "repeat_key" : ’FOR’ / ’ALL’, "repeat_count" : span, "repeat_dir" : ’LEFT’ / ’RIGHT’ / ’UP’/ ’DOWN’ / ’FRONT’ / ’BACK’ / ’AROUND’} "has_name" : span, "has_block_type" : span, "has_size" : span, "has_orientation" : span, "has_thickness" : span, "has_colour" : span, "has_length": span, "has_height" : span, "has_radius" : span, "has_slope" : span, "has_width": span, "has_base" : span, "has_distance" : span, }, "repeat" : { "repeat_key" : ’FOR’ / ’ALL’, "repeat_count" : span, "repeat_dir" : ’LEFT’ / ’RIGHT’ / ’UP’/ ’DOWN’ / ’FRONT’ / ’BACK’ / ’AROUND’} } ] } Figure 18: Details of logical form for Build { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’BUILD’, LOCATION, REF_OBJ, "repeat" : { "repeat_key" : ’FOR’ / ’ALL’, "repeat_count" : span, "repeat_dir" : ’LEFT’ / ’RIGHT’ / ’UP’/ ’DOWN’ / ’FRONT’ / ’BACK’ / ’AROUND’} } ] } Figure 19: Details of logical form for Copy • nothing C.6 Move Action This action states that the agent should move to the specified location, the corresponding logical form is in: 23 Move action can have one of the following as its child: • location • stop condition (stop moving when a condition is met) • location and stop condition • neither C.7 Dig Action This action represents the intent to dig a hole / negative shape of optional dimensions at an optional location. The logical form is in 24 { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’SPAWN’, LOCATION, REF_OBJ }] } Figure 20: Details of logical form for Spawn action { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’FILL’, "has_block_type" : span, REF_OBJ } ] } Figure 21: Details of logical form for Fill C.8 Dance Action This action represents that the agent performs a movement of a certain kind. Note that this action is different than a Move action in that the path or step-sequence here is more important than the destination. The logical form is shown in 25 C.9 FreeBuild Action This action represents that the agent should complete an already existing half-finished block object, using its mental model. The logical form is explained in: 26 FreeBuild action can have one of the following as its child: • reference object only • reference object and location C.10 Undo Action This action states the intent to revert the specified action, if any. The logical form is in 27. Undo action can have on of the following as its child: • target action type • nothing (meaning : undo the last action) C.11 Stop Action This action indicates stop and the logical form is shown in 28 C.12 Resume Action This action indicates that the previous action should be resumed, the logical form is shown in: 29 C.13 Get Memory Dialogue type This dialogue type represents the agent answering a question about the environment. This is similar to the setup in Visual Question Answering. The logical form is represented in: 30 4710 { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’DESTROY’, REF_OBJ } ] } Figure 22: Details of logical form Destroy { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’MOVE’, LOCATION, "stop_condition" : { "condition_type": ’ADJACENT_TO_BLOCK_TYPE’ / ’NEVER’, "block_type": span, "condition_span" : span }, "repeat" : { "repeat_key" : ’FOR’ / ’ALL’, "repeat_count" : span, "repeat_dir" : ’LEFT’ / ’RIGHT’ / ’UP’/ ’DOWN’ / ’FRONT’ / ’BACK’ / ’AROUND’} } ] } Figure 23: Details of logical form for Move action Get Memory dialogue has the following as its children: filters, answer type and tag name. This dialogue type represents the type of expected answer : counting, querying a specific attribute or querying everything (”what is the size of X” vs ”what is X” ) C.14 Put Memory Dialogue This dialogue type represents that a reference object should be tagged with the given tag and the logical form is shown in: 31 C.15 Noop Dialogue This dialogue type indicates no operation should be performed, the logical form is shown in : 32 { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’DIG’, LOCATION, "schematic" : { "repeat" : { "repeat_key" : ’FOR’ / ’ALL’, "repeat_count" : span, "repeat_dir" : ’LEFT’ / ’RIGHT’ / ’UP’/ ’DOWN’ / ’FRONT’ / ’BACK’ / ’AROUND’} "has_size" : span, "has_length": span, "has_depth" : span, "has_width" : span}, "stop_condition" : { "condition_type" : ’ADJACENT_TO_BLOCK_TYPE’ /s ’NEVER’, "block_type": span } } ] } Figure 24: Details of logical form for Dig action { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’DANCE’, LOCATION, "stop_condition" : { "condition_type" : ’NEVER’} "repeat: { "repeat_key" : FOR, "repeat_count" : span } } ] } Figure 25: Details of logical form for Dance action { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’FREEBUILD’, REF_OBJECT, LOCATION } ] } Figure 26: Logical form for Freebuild action { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’UNDO’, "target_action_type" : span } ] } Figure 27: Details of logical form for Undo action { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’STOP’, "target_action_type" : span } ] } Figure 28: Details of logical form for Stop action { "dialogue_type" : ’HUMAN_GIVE_COMMAND’, "action_sequence" : [ {"action_type" : ’RESUME’, "target_action_type" : span } ] } Figure 29: Details of logical form for Resume action { "dialogue_type": "GET_MEMORY", "filters": {"temporal": CURRENT, "type": "ACTION" / "AGENT" / "REFERENCE_OBJECT", "action_type": BUILD / DESTROY / DIG / FILL / SPAWN / MOVE "reference_object" : { LOCATION, "has_size" : span, "has_colour" : span, "has_name" : span, "coref_resolve": span}}, "answer_type": "TAG" / "EXISTS" , "tag_name" : ’has_name’ / ’has_size’ / ’has_colour’ / ’action_name’ / ’action_reference_object_name’ / ’move_target’ / ’location’ , "replace": true } Figure 30: Logical form for Get Memory Dialogue 4711 { "dialogue_type": "PUT_MEMORY", "filters": { REF_OBJECT }, "upsert" : { "memory_data": { "memory_type": "REWARD" / "TRIPLE", "reward_value": "POSITIVE" / "NEGATIVE", "has_tag" : span, "has_colour": span, "has_size": span } } } Figure 31: Details of logical form for Put Memory Dialogue { "dialogue_type": "NOOP" } Figure 32: Details of logical form for Noop Dialogue 4712 D Crowd-sourced task and tools instructions Some examples from prompts data: bot move the tree to the left side of the house {’action_sequence’: [{ ’action_type’: ’OTHERACTION’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [10, 10]]}, ’relative_direction’: ’LEFT’}, ’reference_object’: { ’has_name’: [0, [3, 3]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} dig a hole next to that house {’action_sequence’: [{ ’action_type’: ’DIG’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’contains_coreference’: ’yes’, ’has_name’: [0, [6, 6]]}, ’relative_direction’: ’NEAR’}, ’schematic’: { ’has_name’: [0, [2, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} how about you copy the crops i planted to fill this whole plain {’action_sequence’: [{ ’action_type’: ’BUILD’, ’reference_object’: { ’has_name’: [0, [5, 5]], ’has_tag’: [0, [6, 7]]}, ’repeat’: { ’stop_condition’: { ’condition_span’: [0, [9, 12]]}}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} make sure i spawn on top of the pyramid each time {’action_sequence’: [{ ’action_type’: ’OTHERACTION’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [8, 8]]}, ’relative_direction’: ’UP’}, ’reference_object’: { ’has_name’: [0, [2, 2]]}, ’repeat’: {’stop_condition’: {’ condition_type’: ’NEVER’}}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} complete the structure 10 meters west from your position {’action_sequence’: [{ ’action_type’: ’FREEBUILD’, ’reference_object’: { ’has_name’: [0, [2, 2]], ’location’: { ’location_type’: ’AGENT_POS’, ’relative_direction’: ’LEFT’, ’steps’: [0, [3, 3]]}}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} destroy the structure that is blocking the view of the landscape {’action_sequence’: [{ ’action_type’: ’DESTROY’, ’reference_object’: { ’has_name’: [0, [2, 2]], ’has_tag’: [0, [5, 10]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} complete the project that i am working on by building more devices {’action_sequence’: [{ ’action_type’: ’FREEBUILD’, ’reference_object’: { ’has_name’: [0, [2, 2]], ’has_tag’: [0, [4, 7]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} show me how to dance {’action_sequence’: [{ ’action_type’: ’DANCE’}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} please build a garden {’action_sequence’: [{ ’action_type’: ’BUILD’, ’schematic’: { ’has_name’: [0, [3, 3]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} fill the small pond with sand {’action_sequence’: [{ ’action_type’: ’FILL’, ’has_block_type’: [0, [5, 5]], ’reference_object’: { ’has_name’: [0, [3, 3]], ’has_size’: [0, [2, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} move north for 5 minutes {’action_sequence’: [{ ’action_type’: ’MOVE’, ’location’: { ’location_type’: ’AGENT_POS’, ’relative_direction’: ’FRONT’}, ’repeat’: { ’stop_condition’: { ’condition_span’: [0, [3, 4]]}}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} dig a hole next to the sidewalk of the school {’action_sequence’: [{ ’action_type’: ’DIG’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [6, 9]]}, ’relative_direction’: ’NEAR’}, ’schematic’: {’has_name’: [0, [2, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} move to the right until you ca n’t anymore {’action_sequence’: [{ ’action_type’: ’MOVE’, ’location’: { ’location_type’: ’SPEAKER_POS’, 4713 ’relative_direction’: ’RIGHT’}, ’repeat’: { ’stop_condition’: { ’condition_span’: [0, [4, 8]]}}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} move up the hill {’action_sequence’: [{ ’action_type’: ’MOVE’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [3, 3]]}, ’relative_direction’: ’UP’}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} build a bridge over the lava {’action_sequence’: [{ ’action_type’: ’BUILD’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [5, 5]]}, ’relative_direction’: ’UP’}, ’schematic’: {’has_name’: [0, [2, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} this pyramid is 5 platforms tall {’dialogue_type’: ’NOOP’} spawn 30 cows and build a 15 by 15 fence {’action_sequence’: [ { ’action_type’: ’SPAWN’, ’reference_object’: { ’has_name’: [0, [2, 2]]}, ’repeat’: { ’repeat_count’: [0, [1, 1]], ’repeat_key’: ’FOR’}}, { ’action_type’: ’BUILD’, ’schematic’: { ’has_height’: [0, [2, 2]], ’has_name’: [0, [5, 5]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} move three feet forward and stop {’action_sequence’: [{ ’action_type’: ’MOVE’, ’location’: { ’location_type’: ’AGENT_POS’, ’relative_direction’: ’FRONT’, ’steps’: [0, [1, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} destroy the building that ’s in front of you {’action_sequence’: [{ ’action_type’: ’DESTROY’, ’reference_object’: { ’has_name’: [0, [2, 2]], ’location’: { ’location_type’: ’AGENT_POS’, ’relative_direction’: ’FRONT’}}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} tag the horse armor {’dialogue_type’: ’PUT_MEMORY’, ’filters’: { ’reference_object’: { ’has_name’: [0, [2, 3]]}}} bot build it to fit into the open frame {’action_sequence’: [{ ’action_type’: ’BUILD’, ’schematic’: { ’has_name’: [0, [2, 2]], ’has_tag’: [0, [4, 8]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} destroy the hut near the big tree {’action_sequence’: [{ ’action_type’: ’DESTROY’, ’reference_object’: { ’has_name’: [0, [2, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} move the rabbit into the box {’action_sequence’: [{ ’action_type’: ’OTHERACTION’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [5, 5]]}, ’relative_direction’: ’INSIDE’}, ’reference_object’: {’has_name’: [0, [2, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} fill the entire tub with pepsi {’action_sequence’: [{ ’action_type’: ’FILL’, ’has_block_type’: [0, [5, 5]], ’reference_object’: { ’has_name’: [0, [3, 3]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} stop digging {’action_sequence’: [{ ’action_type’: ’STOP’, ’target_action_type’: [0, [1, 1]]}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} destroy the box {’action_sequence’: [{ ’action_type’: ’DESTROY’, ’reference_object’: { ’has_name’: [0, [2, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} let ’s resume our mission of traveling over that treacherous mountain pass {’action_sequence’: [{ ’action_type’: ’RESUME’, ’target_action_type’: [0, [3, 11]]}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} build a house with a porch next to the pyramid {’action_sequence’: [{ ’action_type’: ’BUILD’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [9, 9]]}, ’relative_direction’: ’NEAR’}, ’schematic’: { ’has_name’: [0, [2, 2]], ’has_tag’: [0, [3, 5]]}}], 4714 ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} build stairs in the corner {’action_sequence’: [{ ’action_type’: ’BUILD’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [4, 4]]}}, ’schematic’: { ’has_name’: [0, [1, 1]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} spawn milk {’action_sequence’: [{ ’action_type’: ’SPAWN’, ’reference_object’: { ’has_name’: [0, [1, 1]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} build a wall to divide the largest room in the house {’action_sequence’: [{ ’action_type’: ’BUILD’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [6, 10]]}, ’relative_direction’: ’INSIDE’}, ’schematic’: {’has_name’: [0, [2, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} build foundation {’action_sequence’: [{ ’action_type’: ’BUILD’, ’schematic’: { ’has_name’: [0, [1, 1]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} please change the barn to a shop {’action_sequence’: [{ ’action_type’: ’OTHERACTION’, ’reference_object’: { ’has_name’: [0, [3, 3]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} copy the loaf of bread 100 times for distribution to the assembled army in front of you {’action_sequence’: [{ ’action_type’: ’BUILD’, ’reference_object’: { ’has_name’: [0, [2, 4]]}, ’repeat’: { ’repeat_count’: [0, [5, 5]], ’repeat_key’: ’FOR’}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} spawn fifteen horses {’action_sequence’: [{ ’action_type’: ’SPAWN’, ’reference_object’: { ’has_name’: [0, [2, 2]]}, ’repeat’: { ’repeat_count’: [0, [1, 1]], ’repeat_key’: ’FOR’}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} dance {’action_sequence’: [{ ’action_type’: ’DANCE’}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’} dig a hole beneath the fence on the west side of the prison yard big enough for a person to crawl through {’action_sequence’: [{ ’action_type’: ’DIG’, ’location’: { ’location_type’: ’REFERENCE_OBJECT’, ’reference_object’: { ’has_name’: [0, [5, 13]]}, ’relative_direction’: ’DOWN’}, ’repeat’: { ’stop_condition’: { ’condition_span’: [0, [14, 21]]}}, ’schematic’: { ’has_name’: [0, [2, 2]]}}], ’dialogue_type’: ’HUMAN_GIVE_COMMAND’}
2020
427
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4715–4728 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4715 Don’t Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training Margaret Li1, Stephen Roller1, Ilia Kulikov2⋆, Sean Welleck2⋆ Y-Lan Boureau1, Kyunghyun Cho1,2, Jason Weston1,2 1Facebook AI Research 2New York University {margaretli,roller,ylan,kyunghyuncho,jase}@fb.com [email protected],[email protected] Abstract Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019a) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks. 1 Introduction Open-ended tasks such as dialogue reveal a number of issues with current neural text generation methods. In more strongly grounded tasks such as machine translation and image captioning, current encoder-decoder architectures provide strong performance, where mostly word-level decisions are often taken correctly by the model. However, critical failings are exposed in less constrained generation: reliance on repetitive copying and overuse of frequent words, and an inability to maintain logical coherence. The former shows the learning objective is faulty in that it cannot match simple statistics of the training data, while the latter touches more to the heart of artificial intelligence: ⋆Work done while at Facebook AI Research (FAIR). Figure 1: GPT-2 345M model completions can show lack of coherence, e.g. direct contradictions. these models do not understand what they are saying. For example, Figure 1 shows how the 345Mparameter GPT2 model (Radford et al., 2019) can give high probability to contradictory generations. In this work, we show how the recently introduced unlikelihood objective (Welleck et al., 2019a) can be generalized to remedy these problems. Unlikelihood is a technique developed for removal of repetition in language model completions, and works by adding an extra term to the objective that forces repetitions to have low probability, alleviating the degenerative problems highlighted in Holtzman et al. (2019). In fact, unlikelihood can be seen as a much more general framework, as we will see. We first generalize unlikelihood to a different domain: dialogue, where we measure statistics of the training distribution in terms of contextual copies, within-utterance repeats, and vocabulary usage. We then develop loss functions that control these statistics, providing improved metrics on several tasks. Secondly, we show how the same tools can be used to address deeper semantic issues in such models. By leveraging existing natural language inference (NLI) data (Welleck et al., 2019b) as supervision against poor quality generations, we train models that assign low probability to generating incoherent and contradictory text. Overall, our approach yields more consistent dialogue models across several axes, and provides a 4716 promising framework for further advances. Code and pre-trained models will be made available.† 2 Dialogue Unlikelihood Training Dialogue Generation Dialogue generation consists in predicting an utterance y = (y1, . . . , y|y|) given a context x = {s1, . . . , sk, u1, . . . , ut} that consists of initial context sentences s1:k (e.g., scenario, knowledge, personas, etc.) followed by dialogue history utterances u1:t from speakers who take consecutive turns. Likelihood Training Given a dataset D = {(x(i), y(i))} derived from a collection of humanhuman interactions, the standard approach to generative training for dialogue tasks is maximum likelihood estimation (MLE), that minimizes: L(i) MLE(pθ, x(i), y(i)) = − |y(i)| X t=1 log pθ(y(i) t |x(i), y(i) <t), where x(i) is a gold context (dialogue history and initial context sentences) and y(i) is a gold nextutterance, and y(i) t is the t-th token of y(i). Likelihood-based (greedy or beam) decoding applied after training a model with this objective yields sequences with statistics that do not match the original human training sequence distribution. Unlikelihood Training To control for such distribution mismatches, we employ the unlikelihood loss (Welleck et al., 2019a), generalizing it to our setting, and developing a particular form of the loss function for each type of mismatch. The general form of the unlikelihood loss penalizes a set of tokens Ct at each time-step, L(i) UL(pθ, C1:T , x, y) = − |y| X t=1 X yc∈Ct β(yc) log (1 −pθ(yc|x, y<t)) , where Ct ⊆V is a subset of the vocabulary, and β(yc) is a candidate-dependent scale that controls how much the candidate token should be penalized. The overall objective in unlikelihood training then consists of mixing the likelihood and unlikelihood losses, L(i) ULE = L(i) MLE + αL(i) UL, (1) †https://parl.ai/projects/dialogue_ unlikelihood/ where α ∈R is the mixing hyper-parameter. Likelihood tries to model the overall sequence probability distribution, while unlikelihood corrects for known biases. It does this via the set of negative candidates Ct calculated at each step t, where we are free to select candidate generation functions depending on the biases to be mitigated. Likelihood pushes up the probability of a gold token y(i) t while unlikelihood pushes down the probability of negative candidate tokens yc ∈Ct. In Welleck et al. (2019a) the context x consists of a ground-truth sequence (x = x(i)), the target y is either a ground-truth sequence (y = y(i)) or a model-generated sequence (y = ˆy), and the pertoken scale parameter β(yc) is 1. In this paper, we demonstrate how unlikelihood can be used as a general framework by applying it to the dialogue domain. We show how varying the contexts x, targets y, candidates C and scaling β can be used to improve the coherence and language modeling quality of dialogue models. To do this, we now consider the different biases we wish to mitigate, and construct a specific unlikelihood loss for each in turn. 2.1 Repetition and Copying Generative dialogue models are known to both (i) rely too much on copying existing context knowledge or dialogue history; and (ii) repeat themselves within individual utterances. To address this with unlikelihood, we define two types of negative candidate tokens which either appear in a repeating n-gram from the context or from the generated label itself, Ccontext-copy t = ( {yt} yt ∈repeat context n-gram ∅ otherwise, Clabel-repeat t = ( {yt} yt ∈repeating label n-gram ∅ otherwise, where yt is a token in a repeating context n-gram when yt is part of an n-gram that already appeared in the context tokens x, and is in a repeating label n-gram when yt is part of an n-gram that already appeared in y<t. Given a ground-truth context x(i), we apply these two forms of unlikelihood to a model-generated sequence ˆy(i). In summary, we either apply the per-example loss L(i) UL(pθ, Ccontext-copy 1:|y| , x(i), ˆy(i)) 4717 for controlling context copies, or L(i) UL(pθ, Clabel-repeat 1:|y| , x(i), ˆy(i)). for controlling label repeats. We also consider mixing the two losses to mitigate both issues. 2.2 Vocabulary Usage Neural sequence models trained with maximum likelihood generate sequences with token distributions that differ from those of human text (Dinan et al., 2020; Holtzman et al., 2019). In particular, these models tend to produce high frequency tokens too often and low frequency tokens too rarely, where frequency is defined by the human token distribution. We address this with unlikelihood by penalizing tokens according to the mismatch between the model and ground-truth unigram distributions. Specifically, we first maintain an empirical estimate of the model’s unigram distribution pmodel(yt) and the human distribution p∗(yt): pmodel(yt) = count(yt) |Y | , where Y is a collection of token predictions on a subset of training data D′ (e.g. the preceding k = 256 batches), and count(yt) is the number of occurrences of yt in Y . This is computed using model sequences (y = ˆy), defining Y as the collection of all tokens in all ˆy. We wish to push down the probability of tokens appearing too often, i.e. when pmodel(yt) > p∗(yt). For the unlikelihood loss, each step’s candidate is thus the current token, Cidentity t = {yt}, and each token’s unlikelihood loss is scaled according to the mismatch between the approximated model and human distributions, β(yc) = pmodel(yc) log pmodel(yc) p∗(yc)  . The unlikelihood loss for a token yc is non-zero when the token occurs more often in the model’s estimated unigram distribution. In summary, the resulting per-example loss is L(i) UL(pθ, Cidentity 1:|y| , x(i), y) where y is a model-generated sequence. 2.3 Contradictions Neural generation models appear fluent, especially when pre-trained on large datasets, but are still poor at understanding the language they produce. That is, they can produce logically or factually inaccurate, or contradicting statements (Welleck et al., 2019b; Zhang et al., 2018; Hayashi et al., 2019; Petroni et al., 2019). Here, we show how the unlikelihood objective can be used to train such models to assign low probability to inconsistent and contradictory utterances. To do so, we assume the existence of training data of both positive and negative examples of coherent behavior. There is a raft of recent largescale, high quality data that can be massaged into this form, from natural language inference (NLI) tasks (Bowman et al., 2015; Williams et al., 2018; Welleck et al., 2019b) to commonsense reasoning tasks (Zellers et al., 2019; Qin et al., 2019). Two collections of data can be derived from the labels of such a supervised task: D+ = {(x(i), y(i)+)}, D−= {(x(i), y(i)−)}, where D+ is coherent behavior, e.g. neutral or entailing data in NLI, and D−is incoherent behavior, e.g. contradictions. In general, many forms of this type of data can be collected, not just NLI, and it is also not necessary for the contexts x(i) to overlap as we have written here. Standard likelihood training can then be performed on coherent data D+, while the unlikelihood objective is applied to D−as we wish to push down the probability of generating the incoherent response y−given a context x. That is, given an incoherent pair (x, y−) we use the loss LUL(pθ, Cidentity 1:|y| , x, y−), where we penalize each token in the target (Cidentity t = {y− t }). Hence, the loss makes generating the contradicting sentences less likely. 3 Related Work Our work provides new applications of unlikelihood training (Welleck et al., 2019a), showing that unlikelihood offers a general framework for improving generative models, and in particular dialogue models. Outside of that work, the use of negative training in dialogue retrieval, rather than generation, has been previously extensively studied, see e.g. (Humeau et al., 2019; Nugmanova 4718 et al., 2019). In the area of generative dialogue, a number of works have focused on improving the standard likelihood training approach. Closer to our work is that of He and Glass (2019) which developed the approach of negative training to prevent generic and malicious responses in dialogue models. In terms of improving repetition and specificity, a recent alternative approach is that of control (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; See et al., 2019). Nucleus sampling (Holtzman et al., 2019) can help to remove generic or repetitive utterances at the expense of accuracy, but was shown to be inferior to beam blocking, which in turn was shown to be inferior to unlikelihood in Welleck et al. (2019a). In terms of dialogue coherence, Welleck et al. (2019b) showed that retrieval, but not generative models, could be improved with NLI as a rescorer, while Yang et al. (2018) multi-tasked with NLI. The work of Gabriel et al. (2019) has also studied improving narrative flow with a discriminative rescorer, but in that case for generated language. In our work, the improvements are tightly integrated into the training of the model itself. 4 Experiments In all of our experiments we employ a large pre-trained seq2seq Transformer (Vaswani et al., 2017) as our base model, which we then fine-tune for particular tasks with the objectives outlined in Section 2 and specified in each experiment below. Following previous work (Humeau et al., 2019), we pre-train our model on dialogue data, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io, training to generate a comment conditioned on the full thread leading up to the comment, spanning ∼2200M training examples. Our Transformer model consists of an 8 layer encoder, 8 layer decoder with 512-dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of Miller et al. (2017). The model was trained with a batch size of 3072 sequences for approximately 3M updates using a learning rate of 5e-4, and an inverse square root scheduler. This pre-training took approximately two weeks using 64 NVIDIA V100s. 4.1 Repetition and Copying We use the ConvAI2 persona-based dialogue (Zhang et al., 2018), Wizard of Wikipedia Repetition Model PPL F1 Context Label Human .0223 .0004 MLE Baseline 11.4 .199 .1131 .0210 UL (Context only) 11.8 .194 .0330 .0069 UL (Label only) 11.4 .203 .0984 .0005 UL (Context & Label) 11.9 .193 .0352 .0023 Table 1: Evaluation on the ConvAI2 task valid set (test set is hidden), comparing standard likelihood (MLE) with context and label repetition unlikelihood loss training. The repetition types can be decreased depending on which type of unlikelihood loss is used, with minimal changes in perplexity and F1. Repetition Model PPL F1 Context Label Human .160 .001 MLE Baseline 8.3 .368 .441 .014 UL (Context only) 8.8 .346 .229 .037 UL (Label only) 8.3 .371 .426 .001 UL (Context + Label) 8.5 .358 .313 .009 Table 2: Evaluation on the Wizard of Wikipedia test set, comparing standard likelihood (MLE) with context and label repetition unlikelihood loss training. The repetition types can be decreased depending on the type of unlikelihood loss used, while minimally impacting F1. knowledge-grounded dialogue (Dinan et al., 2019) and ELI5 long-form question answering (Fan et al., 2019) datasets to evaluate the effect of using unlikelihood to reduce copying and repetition in model generated utterances. On each dataset, we fine-tune the pre-trained pushshift.io Reddit model, then evaluate by generating nextutterances for dialogue contexts from the test set (or validation in ConvAI2, as the test set is hidden). We use greedy decoding in our main experiments for simplicity and scalability, but we also obtained similar results with beam search, shown in Appendix A. To measure label repetition in a sequence y, we use the portion of duplicate n-grams: 1.0 −|unique n-grams(y)| |n-grams(y)| , and report the metric averaged over the examples. Label repetition increases from zero as the model generates more repeated n-grams. To measure context repetition, we measure the fraction of gen4719 Repetition Model PPL F1 Context Label Human .009 .010 MLE Baseline 21.0 .130 .033 .617 UL (Context only) 21.4 .163 .008 .322 UL (Label only) 21.4 .183 .015 .055 UL (Context + Label) 21.8 .184 .009 .078 Table 3: Evaluation on the ELI5 task test set, comparing standard likelihood (MLE) with context and label repetition unlikelihood loss training. The repetition types can be decreased depending on which type of unlikelihood loss is used, while improving F1. erated n-grams that appear in the original context: |n-grams(y) ∩n-grams(x)| |n-grams(y)| , and report the metric averaged over the examples. Context repetition increases when the model ‘copies’ n-grams from the context. To quantify language modeling quality, we use standard perplexity and F1 metrics. We use the pre-trained model fine-tuned with MLE as the baseline, and compare it against the pre-trained model fine-tuned with copy and repetition unlikelihood (§2.1). Results Results for ConvAI2 are shown in Table 1. We see that training unlikelihood using only-contexts or only-labels reduces their corresponding metrics dramatically compared to the MLE baseline. Training with both context- and label-repetition unlikelihood reduced both context repetitions (by 69%, .0352 vs. .1131) and label repetitions (by 89%, .0023 vs .0210) compared to the MLE baseline, much closer to human levels, while keeping perplexity essentially constant. Comparatively, the Wizard of Wikipedia MLE baseline experiences a much larger problem with context repetition, due to its tendency to copy grounded knowledge verbatim (Table 2). Results for ELI5, shown in Table 3, show that it has an especially large problem with label repetition, and that label-unlikelihood is able to reduce the repetitions by 91% (.055 vs .617), while significantly boosting F1 (.130 to .182). Figures 2 and 3 show perplexity as a function of label and context repeats respectively using unlikelihood on ELI5. The parameter α can clearly control repeats smoothly, with only very high values resulting in increased perplexity. 0.00 0.02 0.04 0.06 0.08 0.10 0.12 ELI5 Label Repeats 22 24 26 28 30 32 PPL Human level 0.0 0.5 1.0 1.5 2.0 2.5 3.0 Figure 2: ELI5: Perplexity vs. label repeats as a function of α in the label unlikelihood objective. 0.00 0.01 0.02 0.03 ELI5 Context Repeats 21.4 21.6 21.8 22.0 22.2 22.4 22.6 22.8 PPL Human level 0.00 0.25 0.50 0.75 1.00 Figure 3: ELI5: Perplexity vs. context repeats as a function of α in the context unlikelihood objective. Human Evaluation Finally, we perform a human evaluation using the same pairwise evaluation scheme as (Fan et al., 2019) performed on ELI5, comparing the MLE baseline to UL (Label only) which asks: Which response answers the question better? The evaluators are asked to consider both the readability and accuracy of the answer. Results are given in Figure 4 (left), showing a statistically significant improvement over the baseline (150 trials, two tailed binomial test, p < 0.01). Further details are given in Appendix C. 4.2 Vocabulary Usage We evaluate the ability of vocabulary unlikelihood (§2.2) to reduce the mismatch between model and human token distributions. We use the ConvAI2 dataset, where our baseline is again trained using maximum likelihood. Starting with the baseline model, we then fine-tune several models using vocab unlikelihood at logarithmically interpolated values of α ∈[1, 1000]. We partition the vocabulary into ‘frequent’, ‘medium’, ‘rare’, and ‘rarest’ using the human 4720 unigram distribution computed with the ConvAI2 training set, corresponding to the sorted token sets whose cumulative mass accounts for the top 40%, the next 30%, the next 20% and the final 10% of usage, respectively. We evaluate a model by generating utterances given contexts from the ConvAI2 validation set, and compute the fraction of tokens within each class. Results Figure 5 shows how the vocabulary distribution obtained after unlikelihood training is affected by the choice of mixing hyperparameter α (Eq. 1): it can smoothly transition between the human training distribution and the MLE trained distribution (‘Baseline’), which is far from the human one. Table 4 compares the MLE baseline with unlikelihood with increasing α values in terms of distribution and F1 score. The vocabulary unlikelihood fine-tuning shifts probability mass from the over-represented frequent words towards underrepresented medium and rare words, with the effect strengthening as α increases. At a small cost to perplexity and F1, the unlikelihood tuning reduced the overuse of common tokens by 9 points, matching the human rate, while improving the production of rare tokens by 3 percentage points. Human Evaluation Finally, we perform a human evaluation using the ACUTE-EVAL framework (Li et al., 2019), comparing the MLE baseline to UL for various α. First, 252 human-bot conversations (8 turns each) are collected, and then models are compared pairwise by asking the question: Who would you prefer to talk to for a long conversation? For these experiments we compare with both methods generating using beam with context blocking of trigrams. Results are given in Figure 4 (right), showing a statistically significant improvement over the baseline according to humans (two tailed binomial test, p < 0.01). Further details are given in Appendix C. 4.3 Contradictions We use the dialogue natural language inference (NLI) task of Welleck et al. (2019b) to obtain labeled non-contradicting and contradicting dialogue sentence pairs to use in unlikelihood training (§2.3). Dialogue NLI contains utterances labeled as entailing (E), neutral (N) or contradiction (C), given a premise that is either a persona sentence (an initial context sentence describing a dialogue agent’s personality) or another dialogue utterance α = 101 α = 102 0% 25% 50% 75% 100% Winning Percentage Repetition (ELI5) Vocabulary (ConvAI2) MLE Baseline Unlikelihood Figure 4: Human evaluation experiments for label unlikelihood on ELI5 (left), and vocabulary unlikelihood on ConvAI2 for two values of α (right). Unlikelihood significantly outperforms the MLE baselines. Token frequency classes Model PPL F1 Freq Med Rare Rarest Human .400 .300 .200 .100 MLE Baseline 11.4 .199 .491 .282 .157 .068 UL, α = 100 11.4 .200 .483 .289 .163 .063 UL, α = 101 11.9 .201 .459 .328 .154 .058 UL, α = 102 12.5 .190 .430 .335 .163 .071 UL, α = 103 14.4 .174 .399 .339 .188 .073 Table 4: Unlikelihood loss applied to vocabulary distributions. Stronger α terms greatly shift probability mass from the most Frequent words to Medium and Rare words, at a small cost to PPL and F1. Frequent, medium, rare and rarest token classes are defined as the sets of tokens whose cumulative masses account for the top 40%, the next 30%, the next 20% and final 10% of tokens empirically generated by humans, respectively. 0.37 0.39 0.41 0.43 0.45 0.47 0.49 Frequent words cumulative mass 0.15 0.17 0.19 0.21 Rare words cumulative mass Baseline Human 1 10 100 1000 Figure 5: Vocabulary control with unlikelihood training: more probability mass is transferred from Frequent words to Rare words as we increase the α weighting parameter. The maximum likelihood baseline is far from the human distribution. from the Persona-Chat dialogue task (Zhang et al., 2018). We show examples from Dialogue NLI in 4721 Figure 6: Dialogue NLI from (Welleck et al., 2019b). Train Test Valid Entailment 95k 4613 4959 Triple-Entailment 105k 5285 5481 Neutral 110k 5500 5700 Negatives 110k 5500 5700 Table 5: Dialogue NLI two utterance generation task dataset statistics. Figure 6. The original data consists of sentence pairs (s1, s2) along with a label (E, N, or C), and was constructed by developing a schema and employing crowdworkers to label utterances with relation triples. The labels are then inferred from the triple representation. We first transform the original classification dataset into a form useful for unlikelihood training of a generative dialogue model. We consider two setups: (i) a two utterance generation task; and (ii) a full dialogue generation task. Two Utterance Generation Task We adapt the initial dialogue NLI dataset by using entailing and neutral training sentence pairs as plausible positive utterances, and contradicting pairs as negatives. That is, if a pair (s1, s2) from Dialogue NLI has label E or N, the example (x, y) = (s1, s2) is added to D+, otherwise (label C) it is added to D−. We consider two types of entailment: entailing sentence pairs that appear together in a dialogue in the original Persona-Chat dataset and are therefore natural (‘entailment’), and those that only entail via their triple relations (‘triple-entailment’). The latter are more challenging, noisier targets. Evaluation is performed by measuring the test set perplexity over the four target label types, where contradictions should have relatively higher perplexity. We additionally evaluate a selection accuracy task, where for each test example there are two candidate responses: a positive and a negative (contradicting) statement. The candidate response with the lowest perplexity is considered to be the model’s selection, and we measure the selection success rate. Evaluation is broken down by positive type (entailment, triple-entailment, neutral). Dataset statistics are given in Table 5. Full Dialogue Task To evaluate in a more realistic setup that involves full dialogue rather than a single utterance, we take full Persona-Chat dialogues (Zhang et al., 2018) similar to Figure 6, and map back the dialogue NLI data to provide positive and negative continuations of the dialogue. We consider continuations as either triple entailing utterances, neutral utterances or contradictions – where the relation triple is used to match the existing persona or dialogue turns by the same speaker to induce the label. That is, an example (x, y) consists of a dialogue history x = {p1, . . . , pk, u1, . . . , ut} and utterance y = s2, where (s1, s2) is a sentence pair from Dialogue NLI, and at least one sentence in x has the same relation triple as s1. When the pair (s1, s2) is labeled as E or N in Dialogue NLI, the example (x, y) is added to D+, and otherwise it is added to D−. Results Our MLE baseline obtains a perplexity of 11.4, in line with current best systems on this task (Lewis et al., 2019). Unfortunately, despite being good on such standard metrics, our baseline models fail at our coherence task. As seen in Table 6 for the two utterance task, the perplexity of contradicting utterances (12.5) is on average lower than for neutral (36.7) or triple-entailing utterances (17.5), although it is higher than entailing utterances. We believe this is due to contradicting utterances having high word overlap with the premise utterance, coupled with an inability to judge incoherence. Viewed as a selection task between utterances, picking the utterance with the lowest perplexity, this means the selection rates of non-contradicting utterances are very low, e.g. picking neutral utterances over contradicting utterances only 18% of the time. Even fully entailing utterances are only picked 73% of the time. Similar results are found on the full dialogue task as well, see Table 7. Unlikelihood training brings large improvements in coherence metrics, whilst minimally impacting overall dialogue perplexity. After applying unlikelihood, perplexity for contradicting utterances has a clear signature, with very large av4722 Selection Accuracy Perplexity Data + Model Entail Tr.-E Neutral Entail Tr.-E Neutral Contradict ConvAI2 MLE Baseline 72% 41% 18% 8.54 17.5 36.7 12.5 11.4 UL (Dialogue NLI) 96% 85% 78% 9.1 26.6 39.4 248.9 11.9 Table 6: Test evaluation on the Dialogue NLI two utterance generation task, comparing standard likelihood (MLE) models trained on pushshift.io Reddit and ConvAI2 with unlikelihood loss NLI training. Results are broken down according to whether the premise and positive candidate are entailing, triple-entailing, or neutral (Entail, Tr.-E, Neutral). Selection Accuracy measures how often the model assigns lower perplexity to the positive candidate than to the negative candidate in the pair. Top two rows: for standard maximum likelihood models, the perplexity of contradicting utterances is lower compared to neutral or triple-entailing utterances (albeit higher compared to entailing utterances), showing partial failure at the coherence task. Bottom row: NLI Unlikelihood training yields large improvements on all coherence metrics, while minimally increasing overall perplexity. Selection Accuracy (vs. Neg) Perplexity Data + Model Triple-Entail Neutral Triple-Entail Neutral Contradict ConvAI2 MLE Baseline 66.5% 36.8% 23.3 45.1 35.9 11.4 UL (Dialogue NLI) 89.0% 69.8% 21.5 40.3 63.5 11.8 Table 7: Test evaluation on the Full Dialogue NLI generation task. NLI unlikelihood training improves coherence metrics compared to likelihood (MLE) training. For UL, the triple-entailing or neutral candidates are assigned relatively lower perplexity compared to contradicting candidates, with higher selection accuracy for coherent labels. LMLE LUL Premise Hypothesis PPL PPL Yes, I love watching baseball and basketball. I do not (C) I love running. 25.5 226.9 like running though. (E) I despise running. 29.9 9.4 Yes, I love watching baseball and basketball. I do like (E) I love running. 26.2 3.1 running though. (C) I despise running. 42.8 247.1 We did too but working in real estate for 12 years . (E) I have been working as a real estate sucked up a lot of time agent for the past 12 years. 3.9 3.8 (C) We did too but working in real estate for fifteen years sucked up a lot of time. 3.1 17.6 Figure 7: Example perplexities of a baseline maximum likelihood model (LMLE) and our unlikelihood trained model (LUL ) when generating the provided hypotheses, given the premise. The maximum likelihood trained model assigns high probability (low perplexity) to contradictory generations, while unlikelihood does not. erage values compared to entailing or neutral utterances, e.g. 248.9 vs. 9.1 for contradict vs. entail on the two utterance task. This converts to corresponding large increases in selection accuracy across all types on both tasks, e.g., an increase from 18% to 78% on neutral statements on the two utterance task, and from 37.4% to 69.8% on the full dialogue task. Some example model predictions are given in Figure 7, comparing the MLE baseline and unlikelihood model perplexities of generating the given hypotheses. The likelihood model cannot differentiate between contradicting and entailing statements easily, while there are large perplexity differences for the unlikelihood model in these cases. 5 Conclusion Generating consistent and coherent human-like dialogue is a core goal of natural language research. We studied several aspects that contribute to that goal, defined metrics to measure them, and proposed algorithms that improve them, mitigating some of the failings of maximum likelihood training, the current dominant approach. Our method defines objective functions under the umbrella of unlikelihood: during training, we wish to make inconsistent dialogue unlikely by lowering the probability of such events occurring. This makes generative models repeat themselves less, copy the context less, and use more rare words from the vocabulary – closer to matching human statistics. Further, utilizing supervised datasets with labeled 4723 coherent and incoherent utterances and applying unlikelihood yields measurably improved levels of coherence with respect to the aspect measured, in this case contradiction. Future work could apply this same technique with other supervised data, e.g. correcting causal or commonsense reasoning errors (Zellers et al., 2019; Qin et al., 2019). References Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2020. The second conversational intelligence challenge (ConvAI2). In The NeurIPS ’18 Competition, pages 187– 208, Cham. Springer International Publishing. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In Proceedings of the International Conference on Learning Representations. Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 45–54. Association for Computational Linguistics. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558–3567, Florence, Italy. Association for Computational Linguistics. Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pages 94–104, Copenhagen, Denmark. Association for Computational Linguistics. Saadia Gabriel, Antoine Bosselut, Ari Holtzman, Kyle Lo, Asli Celikyilmaz, and Yejin Choi. 2019. Cooperative generator-discriminator networks for abstractive summarization with narrative flow. arXiv preprint arXiv:1907.01272. Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. 2017. Hafez: an interactive poetry generation system. In Proceedings of ACL 2017, System Demonstrations, pages 43–48, Vancouver, Canada. Association for Computational Linguistics. Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, and Graham Neubig. 2019. Latent relation language models. arXiv preprint arXiv:1908.07690. Tianxing He and James Glass. 2019. Negative training for neural dialogue response generation. arXiv preprint arXiv:1903.02134. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Margaret Li, Jason Weston, and Stephen Roller. 2019. ACUTE-EVAL: Improved dialogue evaluation with optimized questions and multi-turn comparisons. In Proceedings of the NeurIPS Workshop on Conversational AI. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84, Copenhagen, Denmark. Association for Computational Linguistics. Aigul Nugmanova, Andrei Smirnov, Galina Lavrentyeva, and Irina Chernykh. 2019. Strategy of the negative sampling for training retrieval-based dialogue systems. In 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pages 844– 848. IEEE. Fabio Petroni, Tim Rockt¨aschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066. Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5042– 5052, Hong Kong, China. Association for Computational Linguistics. 4724 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1702–1723, Minneapolis, Minnesota. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019a. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319. Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019b. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731–3741, Florence, Italy. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learning semantic textual similarity from conversations. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 164– 174, Melbourne, Australia. Association for Computational Linguistics. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213, Melbourne, Australia. Association for Computational Linguistics. 4725 Repetition Model PPL F1 Context Label Human .160 .0006 MLE Baseline 8.3 .373 .582 .002 UL (Context only) 8.8 .345 .270 .001 UL (Label only) 8.3 .371 .645 .000 UL (Context + Label) 8.5 .358 .445 .003 Table 8: Evaluation on the Wizard of Wikipedia task test set, comparing standard likelihood (MLE) with repetition unlikelihood loss training, where both methods use beam search (beam size of 5). A Repetition Control with Beam Search The experiments on repetition and copying in the main paper were carried out with greedy decoding for simplicity. In this section we show that similar results hold with beam decoding as well. Using a beam size of 5, we take the same 4 models from Table 2 and compute metrics with beam instead. The results are given in Table 8 which show similar trends to before, except the baseline model using beam tends to suffer more from repetition, which is a known result (Holtzman et al., 2019). Note that we simply evaluated the same unlikelihood models as before, but we expect that better results could be obtained by performing sequence level unlikelihood training with beam search in the training loop, as well as choosing hyperparameters specifically with this kind of decoding being used to measure validation performance. B Nucleus Sampling for Vocabulary control Table 9 compares the MLE baseline, unlikelihood with increasing α values, and Nucleus sampling (Holtzman et al., 2019) with hyperparameter p in terms of distribution and F1 score. The vocabulary unlikelihood fine-tuning shifts probability mass from the over-represented frequent words towards under-represented medium and rare words, with the effect strengthening as α increases. At a small cost to perplexity and F1, the unlikelihood tuning reduced the overuse of common tokens by 9 points, matching the human rate, while improving the production of rare tokens by 3 percentage points. Nucleus sampling is a popular method that can also produce generations closer to the human vocabulary distribution. It does this by sampling from the model’s probability distribution rather Token frequency classes Model PPL F1 Freq Med Rare Rarest Human .400 .300 .200 .100 MLE Baseline 11.4 .199 .491 .282 .157 .068 Nucleus p = 0.3 11.4 .180 .452 .315 .168 .064 Nucleus p = 0.4 11.4 .171 .440 .320 .172 .068 Nucleus p = 0.5 11.4 .160 .425 .322 .180 .072 Nucleus p = 0.6 11.4 .151 .411 .318 .192 .078 Nucleus p = 1.0 11.4 .141 .394 .302 .201 .101 UL, α = 100 11.4 .200 .483 .289 .163 .063 UL, α = 101 11.9 .201 .459 .328 .154 .058 UL, α = 102 12.5 .190 .430 .335 .163 .071 UL, α = 103 14.4 .174 .399 .339 .188 .073 Table 9: Unlikelihood loss applied to vocabulary distributions. Stronger α terms greatly shift probability mass from the most Frequent words to Medium and Rare words, at a small cost to PPL and F1. Frequent, medium, rare and rarest token classes are defined as the sets of tokens whose cumulative masses account for the top 40%, the next 30%, the next 20% and final 10% of tokens empirically generated by humans, respectively. Nucleus sampling can also produce a distribution close to human with parameter p close to 1, but with larger losses in F1. than using beam search, where the sampler restricts to the smallest set of tokens with total mass above a threshold p ∈[0, 1]. Small values of p are similar to greedy sampling. Increasing p yields distributions closer to human, but with large losses in F1 score, e.g. p = 0.5 has a similar distribution to unlikelihood with α = 102 but the F1 scores are 0.160 vs. 0.190. This can be understood because maximizing likelihood during decoding yields better token accuracy than sampling (Welleck et al., 2019a), so the unlikelihood training approach to both use likelihood decoding and match the human distribution can obtain the best of both worlds. C Human Evaluation Description of ConvAI2 vocabulary setup We follow (Li et al., 2019) and perform a pairwise comparison with full-length model conversations. We first collected 252 model-human conversations with each of the models (MLE baseline, and weights for α of Unlikelihood, examples in 8). We then set up a pairwise-comparison using the software of (Li et al., 2019), using the same question (“Who would you prefer to talk to for a long conversation?”) and use the exact same quality control question (a baseline greedy model without repetition control, versus a human). We collected ap4726 proximately 200 preferences per model comparison and filtered annotators who failed quality control. Description of ELI5 repetition setup We follow (Fan et al., 2019) and perform a pairwise evaluation where human annotators were asked “which response answers the question better?” A screenshot of the UI is shown in Figure 9. Human evaluators were asked to rate a total of 5 questions, two of which were quality control annotations. The quality control examples contained the real human responses, along with model predictions: one question contained a baseline model, and one contained an unlikelihood model. Annotators which did not pick humans in quality controls were removed from the final setups. We collected 200 annotations comparing the baseline and the unlikelihood model. Results Evaluation results from all evaluated matchups are shown in Figure 10. We find our repetition-controlled ELI5 model significantly outperforms the MLE baseline. We find that two of the vocabulary repetition significantly outperform the MLE baseline. We compute significance with a two-tailed binomial test (p < .01). 4727 Figure 8: Examples of model-human conversations collected during human evaluation of the vocab unlikelihood models. Human utterances are in blue bubbles, model utterances are in white. Conversations (a) and (b) are from the baseline. Conversations (c) and (d) are from the α = 102 model and more frequently employ rarer words. 4728 Figure 9: Screenshot of the Human Evaluator UI. α = 100 α = 101 α = 102 α = 103 0% 25% 50% 75% 100% Winning Percentage Repetition (ELI5) Vocabulary (ConvAI2) MLE Baseline Unlikelihood Figure 10: Complete Human Evaluation results. Human evaluators do not significantly prefer the α = 100 and α = 103 models over the baseline model.
2020
428
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4729–4747 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4729 How does BERT’s attention change when you fine-tune? An analysis methodology and a case study in negation scope Yiyun Zhao Department of Linguistics University of Arizona [email protected] Steven Bethard School of Information University of Arizona [email protected] Abstract Large pretrained language models like BERT, after fine-tuning to a downstream task, have achieved high performance on a variety of NLP problems. Yet explaining their decisions is difficult despite recent work probing their internal representations. We propose a procedure and analysis methods that take a hypothesis of how a transformer-based model might encode a linguistic phenomenon, and test the validity of that hypothesis based on a comparison between knowledge-related downstream tasks with downstream control tasks, and measurement of cross-dataset consistency. We apply this methodology to test BERT and RoBERTa on a hypothesis that some attention heads will consistently attend from a word in negation scope to the negation cue. We find that after fine-tuning BERT and RoBERTa on a negation scope task, the average attention head improves its sensitivity to negation and its attention consistency across negation datasets compared to the pre-trained models. However, only the base models (not the large models) improve compared to a control task, indicating there is evidence for a shallow encoding of negation only in the base models. 1 Introduction As large-scale pre-trained language models such as BERT and ELMo have achieved high performance in a variety of natural language processing tasks (Peters et al., 2018a; Radford et al., 2018; Devlin et al., 2019), a growing body of research is devoted to understanding what linguistic properties these language models have acquired. Recent work uses probes, which are supervised models trained to predict linguistic properties including morphology (Belinkov et al., 2017), syntax (Hewitt and Manning, 2019) and semantics (Peters et al., 2018b), etc. (See Belinkov and Glass (2019) for a complete survey.) A good probing performance is considered as evidence that the language models have learned the linguistic knowledge. What is not yet well understood is how this encoded linguistic knowledge changes when a pretrained language model is fine-tuned for a downstream task. Peters et al. (2019) applies a supervised probe both before and after fine-tuning BERT, and suggests that fine-tuning makes the internal representation task-sensitive. But with supervised probes it can be difficult to disentangle what was learned by the probe from what was present in the internal representation (Hewitt and Liang, 2019). Recent studies have thus turned to unsupervised probes that require no additional training of the model and instead look directly at the attention mechanism, i.e., how much to care about other words when computing the next version of the current word. Clark et al. (2019) inspected pretrained transformers and found several syntactic properties encoded in an intuitive way, where the maximum attention from a dependent is on its syntactic head. But only the pretrained models were considered, not what happened to these intuitive encodings after fine-tuning to a downstream task. We argue that if some interpretable encoding of linguistic knowledge is a good explanation of a model, rather than showing it in the pretrained model, it is more important to show it will be enhanced by fine-tuning on a task where that linguistic knowledge is necessary. If the encoding is not enhanced by such fine-tuning, then the model must be using some other mechanism to encode that linguistic knowledge. We therefore propose the following methodology for testing whether a hypothesized encoding of a linguistic phenomenon is a good explanation for a transformer’s predictions. 1. Hypothesize an attention representation of the knowledge of interest and design an unsupervised probe, such that each attention head can 4730 make its own prediction. 2. Identify a downstream task related to the knowledge of interest, and design a control task that is learnable and has a similar input and output space but is not related to the knowdge of interest. 3. Fine-tune on both the downstream and control tasks, and measure the unsupervised probe performance of each attention head before and after fine-tuning. Applying this methodology and a variety of analyses that it enables, and focusing on the phenomenon of linguistic negation scope in a intuitive encoding (the maximal attention from a word in negation scope will be on the negation cue), we find that: 1. Before fine-tuning, several attention heads are sensitive to negation scope. The best heads are better than a fixed-offset baseline, with the best BERT-base head achieving an F1 of 53.8 in a fully unsupervised setting. 2. There is consistency in which heads are negation-sensitive across different datasets. 3. After fine-tuning on a negation scope task, the average sensitivity of attention heads improved over the pretrained model for all four models (BERT-base, BERT-large, RoBERTabase, RoBERTa-large) but only the two base models improved more than the control task. 4. The rich do not get richer: attention heads that had the top F1s in the pretrained model do not have the top-ranked improvements after fine-tuning on negation scope. 5. The behavior of individual attention heads becomes more consistent across datasets after fine-tuning on the negation task, compared to the pretrained model and the control task, except for RoBERTa-large. Items 1 and 2 suggest that in the pretrained models negation scope may be encoded via attention to negation cues. Items 3 to 5 indicate that during fine-tuning, this encoding continues to play a role in BERT-base and RoBERTa-base, but RoBERTalarge and BERT-large may rely on other mechanisms to represent negation scope. The analysis code is available at https://github.com/ yiyunzhao/negation-scope-probing Though our findings are specific to the linguistic phenomenon of negation scope and the specific attention encoding we hypothesized, our proposed methodology and analyses are general, and can easily be applied to other linguistic phenomena or other encoding hypotheses to discover the role they play in modern pre-trained neural network models. 2 Background 2.1 BERT and attention heads We performed our analysis on the attention mechanism of uncased BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), large Transformer models (Vaswani et al., 2017). In the following text, we primarily focus on BERT-base and refer the reader to the appendix for detailed results on the other models. BERT-base contains 12 layers and each layer contains 12 attention heads. Each attention head takes a sequence of input vectors h = [h1, .., hn] that correspond to the n tokens. An attention head transforms each hi into query (qi), key (ki) and value (vi) vectors and computes an output vector (oi) via a weighted sum of value vectors based on attention weights (ai) : aij = exp(qT i kj) Pn l=1 exp(qT i kl) (1) oi = n X j=1 aijvj (2) Attention weights can be viewed as the amount of contribution from other tokens to the new representation of the current token. 2.2 Negation scope Negation is a grammatical structure that reverses the truth value of a proposition. The tokens that express the presence of negation are the negation cue and the tokens that are affected by the negation cue belong to the negation scope. For example, in the following sentence, not is the negation cue and the underlined tokens are the negation scope. Holmes was sitting with his back to me, and I had given him {no} sign of my occupation. Knowledge about negation and its scope is important for tasks such as sentiment anlaysis and logical inference. And as a linguistic phenomenon that bridges between syntax and semantics, it is a good candidate for exploring BERT’s attention, as related phenomena have already been found in BERT (Tenney et al., 2019; Clark et al., 2019). 4731 3 Methodology and Analyses In this section, we explain our proposed methodology and analyses, and illustrate their application to the linguistic phenomenon of negation scope. Step 1: hypothesize an interpretable representation of the phenomenon of interest. Transformer models could represent linguistic knowledge in many ways: attention, contextualized embeddings, etc. To apply our methodology, one must first hypothesize a specific encoding of the phenomenon of interest. For negation scope, we hypothesize that for some subset of attention heads, words in negation scope will attend primarily to the negation cue, while words out of negation scope will attend primarily to other words (see Section 4.1). Under this hypothesis, each attention head is an unsupervised negation scope classifier. Step 2: Identify a downstream task that requires the phenomenon of interest. To infer that a transformer model is explainable in terms of the hypothesized encoding, we must see evidence that the encoding is strengthened when fine-tuning on a task that requires the phenomenon of interest. If the encoding is visible in the pre-trained model but disappears during fine-tuning, then the model is handling the phenomenon through some other mechanism. For negation scope, our downstream tasks are supervised negation scope prediction problems (see Section 5.1). Step 3: Design a control task where the phenomenon of interest is irrelevant. The control task should have input and output spaces that match those of the downstream task but should be learnable without any knowledge of the phenomenon. For negation scope, we arbitrarily assign word types to binary labels (see Section 5.1). Step 4: Analyze differences between models fine-tuned on the downstream and control tasks. If the hypothesized encoding explains the model predictions, changes observed when fine-tuning on the downstream task must be greater than changes observed when fine-tuning on the control task. For negation scope, we analyze changes in performance of individual attention heads as unsupervised negation classifiers. ... and you know not whether for good or ill Figure 1: Example text with true negation scope on top and layer 8 head 4’s maximally-attended word for each input on the bottom. Dashed lines are precision errors and dotted lines are recall errors. 4 Does BERT pay ‘attention’ to negation scope before fine-tuning? We start by hypothesizing a way that negation scope could be encoded in transformer models. This hypothesis must not rely on any negationspecific training data, as we want to be able to measure evidence of the encoding equally well both before and after fine-tuning. Our hypothesized encoding treats each attention head as an unsupervised negation scope classifier. 4.1 Attention as a negation classifier Our goal is to see if any individual attention head is good at detecting negation scope. Because attention heads by definition compare two tokens to each other, we formulate negation scope detection as a pair-wise task. We treat each attention head as an unsupervised classifier that considers each token in the sentence, and if the maximum attention from that token is to the negation cue, we classify the token as within the negation scope. Formally, the prediction of an attention head for token i is: attendneg(i) =    1 if jneg = n argmax j=1 aij 0 otherwise (3) where jneg is the index of the negation cue, and aij is attention as defined in Equation (1). The quality of each attention head as such a negation classifier can be evaluated based on how often it agrees with the true negation scope, as shown in Figure 1. We use the standard measures of precision, recall, and F1: precision = Pn i=1 attendneg(i) ∧negscope(i) Pn i=1 attendneg(i) recall = Pn i=1 attendneg(i) ∧inscope(i) Pn i=1 negscope(i) F1 = 2 · precision · recall precision + recall 4732 where attendneg(i) is the unsupervised classifier of Equation (3) and negscope(i) is 1 if i is within the annotated negation scope and 0 otherwise. 4.2 Checking for confounds If we find an attention head that achieves a high F1 for negation detection, are we sure that BERT has learned negation? Or could the head be doing something simpler to achieve that F1? If most negation scopes were just one word after the negation cue, simply attending to the previous word would achieve high performance on the negation task. To build confidence that attention heads that achieve high F1 in negation detection aren’t somehow cheating, we (1) look at several baselines to establish the difficulty of the task, (2) use a regression to see which factors explain the attention, and (3) look for consistency in attention head performance across different datasets. We use the baselines: all in-scope: Always attend to the negation token, regardless of the input word. This guarantees 100% recall, but is somewhat unrealistic, since the attention mechanism doesn’t know where the negation word is1. fixed offset: Always attend to a fixed position relative to the input word. For example, a fixed offset of +1 would mean to always attend to the next word in the sentence, and therefore, according to Equation (3), to only predict a token is in the negation scope if it is immediately followed by the negation cue. Clark et al. (2019) observed several of BERT’s attention heads displaying such behavior. We considered fixed offsets from -3 to +3. Predictors of attention If an attention head has truly learned something about negation, its attention should not be easily explainable by something simpler like the proximity in the text. We thus build a simple regression model using the token’s negation scope label (in-scope or out-of-scope) and the distance to the negation cue as predictors, and the attention of the token to the negation cue as the dependent variable. If an attention head is truly detecting negation scope, we expect that scope label will be a significant predictor in this model, and token distance will be much less important. Consistency across domains If an attention head has truly learned something about negation, 1Note that our classifier in Equation (3) does know where the negation word is, since it is given jneg as an input. But a standalone transformer model is not given such information. Models P R F1 baseline all in scope 34.0 100.0 50.7 baseline average fixed offset 66.1 8.6 15.2 baseline best fixed offset (-1) 83.5 11.6 20.4 attention average head 49.5 5.2 9.0 attention best head (8-4) 76.2 41.5 53.8 Table 1: Performance of unsupervised BERT-base attention-based classifiers and baselines on the negation scope detection task in terms of precision (P), recall (R) and F1. The best fixed offset and attention head according to their F1 score are reported. we would expect it to perform reasonably well regardless of changes in text genre or style of negation annotation. Several studies show that generalization ability to a different dataset is not always guaranteed despite a good test performance on the same dataset (Weber et al., 2018; McCoy et al., 2019). We thus consider two different corpora annotated for negation: ConanDoyle-neg (Morante and Daelemans, 2012) and SFU Review (Konstantinova et al., 2012)2. These datasets differ in genre (Sherlock Holmes stories vs. movie, book, and consumer product reviews) and in annotation schema (e.g., they have different rules for what sentences are considered to contain negation, and how to deal with coordination structure). To see whether the same attention heads are performing well at negation scope detection across the two corpora, we measure kendall rank correlation: τ = 2 n(n −1) X i<j sgn(xi −xj)sgn(yi −yj) where xi is the performance of attention head i on the Conan Dolye dataset and yi is the performance of head i on the SFU-review dataset. 4.2.1 Results Table 1 shows the performance of BERT-base’s attention heads and the baselines. Table A1 in the Appendix shows the results for other models. BERTbase attention heads on average are not good predictors of negation scope (49.5% in precision, 5.2% in recall, 9.0% in F1) but the 4th attention head in layer 8 stands out (76.2% in precision, 41.5% in recall, 53.8% in F1). This performance is unlike either the best fixed offset baseline (-1) or the 2We exclude cases in these datasets where the negation cue is part of a word (e.g., im in impossible) because such subword segmentation does not always align to BERT’s tokenization. 4733 Figure 2: The heatmap of unsupervised negation-scope classification F1 for BERT-base’s 12 layers x 12 heads across two different datasets. The consistency (measure by kendall rank correlation) between the two datasets for precision, recall and F1 are 0.440, 0.418 and 0.415 respectively. See fig. A1 for precision and recall. all-in-scope baseline, exceeding both of these in F1, and with very different precision/recall tradeoffs. When we fit a regression model to predict layer 8 head 4’s attention based on token distance and the true negation scope label, we found that both distance (β = 0.043, p < 2 × 10−16) and label (β = 0.310, p < 2 × 10−16) were significant predictors for the attention, but the true negation scope label had a much larger coefficient. Anova tests comparing the full model with a model leaving out distance or label found that true negation scope explains more variance (207.7) than distance (1.5). This suggests that a large part of what the best attention head is doing can be best explained as detecting negation. Figure 2 shows that there is consistency in the F1 of BERT-base’s attention heads across the two negation scope datasets, e.g., BERT-base’s layer 8 head 4 has the best F1 in both. Kendall correlation tests confirm that the similarities across attention heads of BERT-base are significant: 0.440 tau coefficient (p = 5.24 × 10−15) in precision, 0.418 tau coefficient (p = 1.20 × 10−13) in recall and 0.415 tau coefficient (p = 1.56 × 10−13) in F1. Figures A1 to A4 in the Appendix show plots for precision and recall, and that similar results hold for the other models. Seeing that attention heads that are predictive of negation in one dataset continue to be predictive in another differently annotated dataset from a different text genre suggests that these most successful heads are indeed learning some form of linguistic negation during the BERT pre-training. 5 What happens to negation-sensitive attention heads when you fine-tune? We have seen that without any explicit training on a negation task, some attention heads are sensiBERT ... ... and 0 you 1 know 1 { 1 not 1 } 1 whether 1 for 1 good 1 or 1 ill 1 . 0 Figure 3: Negation scope detection as a word-piece-byword-piece binary classification task. tive to negation scope in an intuitive way (in-scope words attend primarily to the negation cue). What happens to the attention when we fine-tune (i.e., continue training the pre-trained model) on a downstream task that requires an understanding of negation scope? Will this attention-based encoding of negation scope be strengthened? Or will the model choose to represent negation-scope knowledge in some other way during fine-tuning? What about for a downstream task that is unrelated to negation? We answer these questions and others in the following sections by fine-tuning models on downstream tasks, and measuring how this changes the negation-sensitivity of different attention heads. 5.1 Downstream Tasks Downstream negation task We construct a downstream negation scope detection task from the ConanDoyle-neg dataset. As shown in Figure 3, we formulate the problem as a word-pieceby-word-piece binary classification problem, where a word-piece should be labeled 1 if it is in a negation scope and 0 otherwise. To provide the location of the negation cue as an input to the classifier, we add two tokens to the input, surrounding the cue with “{” and “}”. As is standard for BERT token classification models, a fully-connected layer with sigmoid activation connects BERT’s contextual embedding for each token with the binary outputs that 4734 must be predicted. This model can then be trained with BERT’s standard back-propagation procedure. Downstream control task Inspired by the control tasks of Hewitt and Liang (2019), we construct a downstream control task on the ConanDoyle-neg dataset that has the same input space and output space as the downstream negation task, but is constructed to be irrelevant to negation and most other linguistic phenomena. We arbitrarily assign each unique token in the training vocabulary to be always in-scope or always out-of-scope, with a distribution close to the empirical in-scope and out-ofscope distribution. To succeed in this control task, the model must memorize the category (in-scope or out-of-scope) for each token type. Since the assignment is arbitrary, there is no way for the model to generalize to unseen tokens, and thus when we evaluate performance on this task, we consider performance only on the tokens seen during training. 5.2 Fine-tuning classifiers We split the data into 662 negation frames for training and 200 negation frames for testing. We use the same data split for both the downstream negation scope task and the downstream control task. For each task, we take pre-trained BERT base as our starting point. We fine-tune this model for 50 epochs with a learning rate of 4 × 10−5 using the transformers libary (Wolf et al., 2019), and pick the best epoch based upon its performance on the testing data. For the negation scope task, performance is measured in F1. For the control task, performance is measured in accuracy on the testing data tokens that have been seen in the training data. We repeat this process 10 times, generating 10 different fine-tuned BERT models for each task, to allow us to quantify variance due to the inherent randomness in neural network training3. 5.3 Results Table 2 and Table A2 in the Appendix show that after fine-tuning all models achieve very high performance in both downstream tasks. BERT-base achieves on average 92.8% F1 for the negation scope task and on average 95.9% accuracy for the control task. The BERT-base model trained on the control task has learned essentially nothing about negation scope relationship, achieving an average 3Random restarts with the exact same hyperparameters can induce a surprising amount of instability in performance (Reimers and Gurevych, 2017; Devlin et al., 2019). 35.4% F1. These results show that both tasks are learnable from their data, and that the control task is irrelevant to negation scope. How does fine-tuning change attention? Finetuning changes many parameters to make a model better at a downstream task. Will the change be reflected in our hypothesized encoding, i.e., will in-scope words increase their attention to negation cues? And what will the patterns of such a change be? Will sensitivity to negation be spread throughout the attention heads of the model? Will just the attention heads that were already sensitive to negation improve? Or maybe no individual attention heads will get better at negation; the model will only becomes sensitive to negation in aggregate? We first look at overall changes. Table 3 shows the average performance change across all 144 heads of BERT-base, and for just the best head (layer 8, head 4). Table A3 shows average performance changes for the other models. When BERT-base is fine-tuned on the control task, the F1 for most heads is similar to what it was before fine-tuning. When BERT is fine-tuned on the negation task, both the average F1 and the F1 of the best attention head increase. The Wilcoxon test shows that both the average F1 (p = 7.578×10−5) and the F1 of the best head (p = 0.002089) finetuned on the negation task are significantly higher than when fine-tuned on the control task. Table A3 shows that all negation-finetuned models improve over the pretrained models, but only BERT-base and RoBERTa-base improve over the controls. We next look at changes at the level of individual attention heads. Figure 4 plots the average F1 performance gain for each of BERT-base’s 144 attention heads after fine-tuning on either the negation or control task. Figure A5 in the Appendix plots the same for the other models. These plots show that in negationfinetuned models the mid-to-late layers of attention heads improve their sensitivity to negation scope, while in control-finetuned models the changes are less positive and spread more broadly. Figure 4 shows that when BERT-base is fine-tuned on the negation task, the biggest gains in F1 are on attention heads in layers 6 through 10, while no such pattern is visible when BERT-base is fine-tuned on the control task. Do the rich heads get richer? Are attention heads that are already good predictors of negation 4735 Testing Task Negation Control Training Task P ± sd R ± sd F1± sd A ± sd Negation 96.1± 1.3 89.7 ± 1.3 92.8 ± 1.1 Control 34.8 ± 0.4 36.1 ± 2.2 35.4 ± 1.2 95.9 ± 3.0 Table 2: Performance of fine-tuned BERT-base models on the supervised negation scope detection and control tasks in terms of precision (P), recall (R) and F1 for negation scope and accuracy (A) for the control task. We report the average performance of 10 runs and 1 standard deviation. Attention Head Fine-Tune P ± sd R ± sd F1 ± sd Average None 49.5 5.2 9.0 Average Control 48.6 ± 1.7 5.3 ± 0.2 9.0 ± 0.4 Average Negation 52.2 ± 2.2 6.6 ± 0.8 11.1 ± 1.2 Best (8-4) None 76.2 41.5 53.8 Best (8-4) Control 65.0 ± 8.9 47.5 ± 11.7 53.1 ± 6.7 Best (8-4) Negation 82.3 ± 4.1 58.6 ± 10.8 67.7 ± 7.9 Table 3: Performance of unsupervised BERT-base attention-based classifiers on the scope detection task in terms of precision (P), recall (R) and F1 after the BERT model has been fine-tuned on different downstream tasks. scope improve more after fine-tuning? That is, if an attention head has a high negation-scope prediction performance before fine-tuning, will it increase in performance more than other attention heads that had lower performance before fine-tuning? To test this, we measure the kendall rank correlation between an attention head’s performance before fine-tuning on the downstream negation task, and its change in performance after fine-tuning. For the BERT-base model, most coefficients are very small and many of the runs show no significant correlation: the average τ coefficient for precision is -0.07 and only 3 out of 10 runs show a significant correlation, the average τ coefficient for recall is 0.10 and only 5 out of 10 runs show a significant correlation, and the τ coefficient for F1 is 0.08 and only 5 out of 10 runs show a significant correlation. Table A4 in the Appendix shows that in other models the rich on average get poorer: we find weak negative correlations. This suggests fine-tuning, even on a relevant downstream task, does not focus on improving the attention heads that are already good at the problem. Which layers improve the most? Are attention heads at certain layers more sensitive to fine-tuning than other layers? We measure the average performance gain for attention heads in each layer of BERT-base, and plot how these vary across the 10 runs in Figure 5. Figure A6 in the Appendix plot the same for the other models. After the model is fine-tuned on the negation task, we see that attention heads in mid-to-later layers (e.g., layers 6 through 10 in BERT-base) become more sensitive to negation scope. The models fine-tuned on the control task generally show smaller changes. The exception is BERT-large, whose pattern is very different, perhaps because it is the only model to have perfectly memorized the control task. Is the change consistent across datasets? We have seen that fine-tuning on a downstream negation task increases the negation sensitivity broadly across the many attention heads. Do these changes truly represent a better understanding of the linguistic phenomenon of negation, or are they simply a form of better fitting the training data? If a more general understanding is being learned, when looking across several different types of negation problems, there should be greater consistency in which attention heads are paying attention to negation than in the pretrained model or control task. We thus take models after fine-tuning on the ConanDoyle-neg downstream negation scope task, treat each of the attention heads as unsupervised negation-scope classifiers as in Section 4.1, and calculate performance on both the ConanDoyleneg data (the same type of data as was used for 4736 Figure 4: Change in F1 for each attention head in BERT-base (averaged across 10 runs) before and after fine-tuning. Figure 5: Average change in F1 for the attention heads in each layer in BERT-base, repeated for 10 runs. fine-tuning) and the SFU-review data (a different text genre and annotation scheme). We then run kendall rank correlation tests between the two sets of attention-head performances and report them in Table 4 for BERT-base and Table A5 in the Appendix for the other models. Fine-tuning BERTbase on the downstream negation task indeed yields more similar performance across datasets (0.516 F1) than for the original model before fine-tuning (0.415 F1) or the model fine-tuned on the downstream control task (0.409 F1). A Wilcoxon test shows that the τ coefficients fine-tuned on the negation task are significantly higher compared to those fine-tuned on the control task (p = 1.083 × 10−5). RoBERTa-base patterns similarly. For BERT-large the negation-tuned models show a marginal consistency improvement over the pretrain and the attention head consistency in the negation-tuned RoBERTa-large models does not exceed that of the control-tuned ones. 6 Discussion We have presented a methodology for looking for explanations of transformer models, where a hypothesized encoding of knowledge within the transformer is measured before and after fine-tuning and the changes are compared to those seen when finetuning on a control task. We considered a specific linguistic phenomenon, negation scope detection, proposed an intuitive way that attention may encode negation-scope (in-scope words pay attention to the negation cue), and applied our methodology to test whether the hypothesized encoding was indeed an explanation of the behavior of BERT and/or RoBERTa models. We found evidence that BERT-base and RoBERTa-base encode some negation knowledge in the proposed way as both average negation sensitivity and cross-dataset consistency improved over the pretrained model and the control task. Evidence for the large versions of the models was weaker, suggesting that they may be representing negation knowledge in other ways. Other works have explored the effects of finetuning on attention without testing for specific linguistic knowledge. Serrano and Smith (2019), Jain and Wallace (2019) and Wiegreffe and Pinter (2019) found many redundancies in the attention of sequence-to-sequence models, suggesting that attention may encode knowledge in many ways. Kovaleva et al. (2019) found that removal of attention heads in transformers does not necessarily damage downstream performance. Our results suggest an explanation for this finding: knowledge sensitivity spreads broadly, so recovering from a small number of missing heads should be easy. Htut et al. (2019) investigated the role of gram4737 Fine-Tune Precision Recall F1 mean τ ± sd sig mean τ ± sd sig mean τ ± sd sig Pretrain 0.440 0.418 0.415 Control 0.438 ± 0.020 10/10 0.406 ± 0.034 10/10 0.409 ± 0.026 10/10 Negation 0.469 ± 0.025 10/10 0.519 ± 0.020 10/10 0.516 ± 0.020 10/10 Table 4: Kendall rank correlation (τ) between an attention head’s performance on the ConanDoyle-neg dataset and its performance in the SFU-review dataset. For the fine-tuning settings, we report the average τ across 10 runs with 1 standard deviation, and the number of runs where there was a significant correlation. matical relations in BERT’s changes before and after fine-tuning. They found that long distance grammatical relations such as advcl and csubj improved greatly after finetuning on a semantically related task, but other relations did not. They included no control task and did not report changes for individual attention heads (only changes in the maximum performance) so their work inspires some questions: Do advcl and csubj improve more than expected by chance? For the other relations, does performance not improve because they are irrelevant? Or maybe performance of one of the non-maximal heads improved quite a bit, but not enough to exceed the maximal head? Applying our methodology for comparing against a control task and examining changes in individual heads could address these questions. Other work has tested for specific linguistic knowledge in pretrained models, but not explored how the encoding of that knowledge changes during fine-tuning. For instance, Clark et al. (2019) identified several syntactic relationships that are encoded in an intuitive way: the dependent’s primary attention is on its grammtical head. We argue that testing whether this hypothesized encoding of grammatical relations survives fine-tuning is critical if this is to be an explanation of how transformer models make predictions. We found no past work that considered the crossdataset consistency of attention. We believe measuring such consistency is important for differentiating between an attention head that learned to encode a linguistic phenomenon for a single dataset vs. an attention head that learned an encoding of the true linguistic phenomenon. For example, it could have been the case that fine-tuning improves sensitivity to negation in both datasets, but the improvements happen at different heads. We see this for example in BERT-large on the control task, where there is essentially zero consistency in which attention heads are active across the two datasets. Some limitations of our current work suggest future research directions. First, we have focused on one interpretable way of encoding of negation scope knowledge but one can hypothesize many other ways. For instance, instead of assuming that all in-scope words directly pay attention to negation cue, it is possible that the head of in-token words are organized in a tree of attention that leads to the negation cue. We use a single nonlinguistic control task, but one could imagine exploring attention head changes in the face of a gradient of fine-tuning tasks that are more or less relevant to the linguistic phenomenon of interest. We also focus primarily on the attention mechanism, but it would be useful to explore the value vectors that transformers apply the attention to, since these form the outputs and are thus more directly tied to classification decisions. 7 Conclusion In this paper, we propose a basic procedure and analysis methods that take a hypothesis of how a transformer-based model might encode a linguistic phenomenon, and test the validity of that hypothesis based on unsupervised probes, downstream control tasks, and measurement of cross-dataset consistency. We hypothesize an interpretable encoding of negation scope, where in-scope words attend to the negation cue, and find evidence of such an encoding in BERT-base and RoBERTa-base. Acknowledgements Thanks to the anonymous reviewers for their helpful suggestions. This work was supported in part by National Institutes of Health grant R01LM012918 from the National Library of Medicine (NLM). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. 4738 References Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872, Vancouver, Canada. Association for Computational Linguistics. Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT’s attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733–2743, Hong Kong, China. Association for Computational Linguistics. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics. Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R. Bowman. 2019. Do attention heads in bert track syntactic dependencies? ArXiv, abs/1911.12246. Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543–3556, Minneapolis, Minnesota. Association for Computational Linguistics. Natalia Konstantinova, Sheila C.M. de Sousa, Noa P. Cruz, Manuel J. Ma˜na, Maite Taboada, and Ruslan Mitkov. 2012. A review corpus annotated for negation, speculation and their scope. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), pages 3190–3195, Istanbul, Turkey. European Language Resources Association (ELRA). Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4365–4374, Hong Kong, China. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. R. Thomas McCoy, Junghyun Min, and Tal Linzen. 2019. Berts of a feather do not generalize together: Large variability in generalization across models with similar test set performance. CoRR, abs/1911.02969. Roser Morante and Walter Daelemans. 2012. Conandoyle-neg: Annotation of negation in conan doyle stories. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, Istanbul. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Brussels, Belgium. Association for Computational Linguistics. Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7–14, Florence, Italy. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL 4739 https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338–348, Copenhagen, Denmark. Association for Computational Linguistics. Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931–2951, Florence, Italy. Association for Computational Linguistics. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Noah Weber, Leena Shekhar, and Niranjan Balasubramanian. 2018. The fine line between linguistic generalization and failure in Seq2Seq-attention models. In Proceedings of the Workshop on Generalization in the Age of Deep Learning, pages 24–27, New Orleans, Louisiana. Association for Computational Linguistics. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 11–20, Hong Kong, China. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. A Appendix The main text of the paper focused on the results for BERT-base. This appendix contains detailed results for all four models: BERT-base, RoBERTabase, BERT-large, and RoBERTa-large. 4740 Models P R F1 baseline all in scope 34.0 100.0 50.7 baseline average fixed offset 66.1 8.6 15.2 baseline best fixed offset (-1) 83.5 11.6 20.4 BERT-base attention average head 49.5 5.2 9.0 BERT-base attention best head (8-4) 76.2 41.5 53.8 BERT-large attention average head 45.4 3.3 5.9 BERT-large attention best head (14-4) 74.9 28.3 41.0 RoBERTa-base attention average head 56.0 6.9 12.1 RoBERTa-base attention best head (9-12) 92.9 19.1 31.1 RoBERTa-large attention average head 50.2 5.3 9.4 RoBERTa-large attention best head (15-15) 66.7 21.3 32.3 Table A1: Performance of unsupervised attention-based classifiers and baselines on the negation scope detection task in terms of precision (P), recall (R) and F1. The best fixed offset and attention head according to their F1 score are reported. Finding: all models have attention heads that know more about negation than the simple baselines. Testing Task Negation Control Training Task P ± sd R ± sd F1± sd A ± sd BERT-base Negation 96.1± 1.3 89.7 ± 1.3 92.8 ± 1.1 BERT-base Control 34.8 ± 0.4 36.1 ± 2.2 35.4 ± 1.2 95.9 ± 3.0 BERT-large Negation 97.3± 0.9 93.0 ± 1.1 95.1 ± 0.6 BERT-large Control 39.2 ± 0.9 33.1 ± 1.0 35.9 ± 0.6 100.0 ± 0.0 RoBERTa-base Negation 97.2± 0.9 92.9 ± 1.0 95.9 ± 0.3 RoBERTa-base Control 43.4 ± 0.7 45.4 ± 1.2 44.4 ± 0.7 98.3 ± 0.4 RoBERTa-large Negation 97.9± 0.9 93.5 ± 1.2 95.7 ± 0.9 RoBERTa-large Control 44.1 ± 0.6 45.2 ± 1.8 44.6 ± 1.0 97.9 ± 2.2 Table A2: Performance of fine-tuned models on the supervised negation scope detection and control tasks in terms of precision (P), recall (R) and F1 for negation scope and accuracy (A) for the control task. We report the average performance of 10 runs and 1 standard deviation. Finding: All models successfully learned both supervised tasks. Attention Head Fine-Tune P ± sd R ± sd F1 ± sd BERT-base Average None 49.5 5.2 9.0 BERT-base Average Control 48.6 ± 1.7 5.3 ± 0.2 9.0 ± 0.4 BERT-base Average Negation 52.2 ± 2.2 6.6 ± 0.8 11.1 ± 1.2 BERT-large Average None 45.4 3.3 5.9 BERT-large Average Control 44.8 ± 0.3 4.6 ± 0.1 8.3 ± 0.2 BERT-large Average Negation 46.0 ± 3.7 4.8 ± 1.5 8.0 ± 2.3 RoBERTa-base Average None 56.0 6.9 12.1 RoBERTa-base Average Control 53.7 ± 1.7 7.0 ± 0.3 12.0 ± 0.5 RoBERTa-base Average Negation 55.5 ± 1.9 7.9 ± 0.9 13.4 ± 1.4 RoBERTa-large Average None 50.2 5.3 9.4 RoBERTa-large Average Control 48.2 ± 2.2 7.0 ± 1.0 11.5 ± 1.3 RoBERTa-large Average Negation 54.2 ± 3.4 8.0 ± 1.8 13.2 ± 2.7 Table A3: Performance of unsupervised attention-based classifiers on the scope detection task in terms of precision (P), recall (R) and F1 after models have been fine-tuned on different downstream tasks. All models fine-tuned on negation-scope significantly outperformed their pretrained counterparts in F1, but only two (in bold) significantly outperformed the controls. Finding: In BERT-base and RoBERTa-base, attention can be a explanation of negation. 4741 Negation change Precision Recall F1 τ pos/neg sig τ pos/neg sig τ pos/neg sig BERT-base -0.065 0/3 3/10 0.096 5/0 5/10 0.085 5/0 5/10 BERT-large -0.098 2/5 7/10 -0.132 0/7 7/10 -0.132 0/8 8/10 RoBERTa-base -0.134 0/7 7/10 -0.107 0/5 5/10 -0.113 0/6 6/10 RoBERTa-large -0.155 0/8 8/10 -0.142 0/8 8/10 -0.144 0/8 8/10 Table A4: Kendall rank correlation (τ) between the change of an attention head after fine-tuning on the negation task and its performance in the pretrained model. We report the average τ across 10 runs, the number of runs where there was a significant correlation, and the direction (positive or negative) of the significant correlations. Finding: The rich do not get richer: attention heads that had the top F1s in the pretrained model do not have the top-ranked improvements after fine-tuning on negation scope. Consistency Precision Recall F1 mean τ ± sd sig mean τ ± sd sig mean τ ± sd sig BERT-base Pretrain 0.440 0.418 0.415 BERT-base Control 0.438 ± 0.020 10/10 0.406 ± 0.034 10/10 0.409 ± 0.026 10/10 BERT-base Negation 0.469 ± 0.025 10/10 0.519 ± 0.020 10/10 0.516 ± 0.020 10/10 BERT-large Pretrain 0.295 0.487 0.482 BERT-large Control 0.0005 ± 0.057 3/10 0.007 ± 0.039 1/10 0.006 ± 0.039 1/10 BERT-large Negation 0.474 ± 0.038 10/10 0.523 ± 0.082 10/10 0.530 ± 0.066 10/10 RoBERTa-base Pretrain 0.438 0.472 0.471 RoBERTa-base Control 0.456 ± 0.022 10/10 0.502 ± 0.023 10/10 0.487 ± 0.021 10/10 RoBERTa-base Negation 0.521 ± 0.024 10/10 0.538 ± 0.033 10/10 0.531 ± 0.033 10/10 RoBERTa-large Pretrain 0.377 0.504 0.493 RoBERTa-large Control 0.389 ± 0.031 10/10 0.579 ± 0.029 10/10 0.561 ± 0.026 10/10 RoBERTa-large Negation 0.516 ± 0.037 10/10 0.593 ± 0.056 10/10 0.584 ± 0.054 10/10 Table A5: Kendall rank correlation (τ) between an attention head’s performance on the ConanDoyle-neg dataset and its performance in the SFU-review dataset. For the fine-tuning settings, we report the average τ across 10 runs with 1 standard deviation, and the number of runs where there was a significant correlation. Only in two models (in bold) was the correlation for the negation-trained model significantly higher than the correlation for both the pretrained model and the control model. Finding: In BERT-base and RoBERTa-base, attention performance finetuned on a negation task is more consistent scope across different domains and annotation schemes. 4742 (a) Precision (b) Recall (c) F1 Figure A1: The heatmap of unsupervised negation-scope classification performance for BERT-base’s 12 layers x 12 heads across two different datasets. The consistency (measure by kendall rank correlation) between the two datasets for precision, recall and F1 are 0.440, 0.418 and 0.415 respectively. 4743 (a) Precision (b) Recall (c) F1 Figure A2: The heatmap of unsupervised negation-scope classification performance for BERT-large’s 24 layers x 16 heads across two different datasets. The consistency (measure by kendall rank correlation) between the two datasets for precision, recall and F1 are 0.295, 0.487 and 0.482 respectively. 4744 (a) Precision (b) Recall (c) F1 Figure A3: The heatmap of unsupervised negation-scope classification performance for RoBERTa-base’s 12 layers x 12 heads across two different datasets. The consistency (measure by kendall rank correlation) between the two datasets for precision, recall and F1 are 0.438, 0.472 and 0.471 respectively. 4745 (a) Precision (b) Recall (c) F1 Figure A4: The heatmap of unsupervised negation-scope classification performance for RoBERTa-large’s 24 layers x 16 heads across two different datasets. The consistency (measure by kendall rank correlation) between the two datasets for precision, recall and F1 are 0.377, 0.504 and 0.493 respectively. 4746 Figure A5: Change in F1 for each attention head (averaged across 10 runs) before and after fine-tuning. 4747 Figure A6: Change in F1 for each attention head (averaged across 10 runs) before and after fine-tuning.
2020
429
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 443–459 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 443 A Formal Hierarchy of RNN Architectures William Merrill∗ Gail Weiss† Yoav Goldberg∗‡ Roy Schwartz∗§ Noah A. Smith∗§ Eran Yahav† ∗Allen Institute for AI † Technion ‡ Bar Ilan University § University of Washington {willm,yoavg,roys,noah}@allenai.org {sgailw,yahave}@cs.technion.ac.il Abstract We develop a formal hierarchy of the expressive capacity of RNN architectures. The hierarchy is based on two formal properties: space complexity, which measures the RNN’s memory, and rational recurrence, defined as whether the recurrent update can be described by a weighted finite-state machine. We place several RNN variants within this hierarchy. For example, we prove the LSTM is not rational, which formally separates it from the related QRNN (Bradbury et al., 2016). We also show how these models’ expressive capacity is expanded by stacking multiple layers or composing them with different pooling functions. Our results build on the theory of “saturated” RNNs (Merrill, 2019). While formally extending these findings to unsaturated RNNs is left to future work, we hypothesize that the practical learnable capacity of unsaturated RNNs obeys a similar hierarchy. Experimental findings from training unsaturated networks on formal languages support this conjecture. 1 Introduction While neural networks are central to the performance of today’s strongest NLP systems, theoretical understanding of the formal properties of different kinds of networks is still limited. It is established, for example, that the Elman (1990) RNN is Turing-complete, given infinite precision and computation time (Siegelmann and Sontag, 1992, 1994; Chen et al., 2018). But tightening these unrealistic assumptions has serious implications for expressive power (Weiss et al., 2018), leaving a significant gap between classical theory and practice, which theorems in this paper attempt to address. Recently, Peng et al. (2018) introduced rational RNNs, a subclass of RNNs whose internal state can be computed by independent weighted finite automata (WFAs). Intuitively, such models have a computationally simpler recurrent update than Figure 1: Hierarchy of state expressiveness for saturated RNNs and related models. The y axis represents increasing space complexity. ∅means provably empty. Models are in bold with qualitative descriptions in gray. conventional models like long short-term memory networks (LSTMs; Hochreiter and Schmidhuber, 1997). Empirically, rational RNNs like the quasirecurrent neural network (QRNN; Bradbury et al., 2016) and unigram rational RNN (Dodge et al., 2019) perform comparably to the LSTM, with a smaller computational budget. Still, the underlying simplicity of rational models raises the question of whether their expressive power is fundamentally limited compared to other RNNs. In a separate line of work, Merrill (2019) introduced the saturated RNN1 as a formal model for analyzing the capacity of RNNs. A saturated RNN is a simplified network where all activation functions have been replaced by step functions. The saturated network may be seen intuitively as a “stable” version of its original RNN, in which the in1Originally referred to as the asymptotic RNN. 444 ternal activations act discretely. A growing body of work—including this paper—finds that the saturated theory predicts differences in practical learnable capacity for various RNN architectures (Weiss et al., 2018; Merrill, 2019; Suzgun et al., 2019a). We compare the expressive power of rational and non-rational RNNs, distinguishing between state expressiveness (what kind and amount of information the RNN states can capture) and language expressiveness (what languages can be recognized when the state is passed to a classifier). To do this, we build on the theory of saturated RNNs. State expressiveness We introduce a unified hierarchy (Figure 1) of the functions expressible by the states of rational and non-rational RNN encoders. The hierarchy is defined by two formal properties: space complexity, which is a measure of network memory,2 and rational recurrence, whether the internal structure of the RNN can be described by WFAs. The hierarchy reveals concrete differences between LSTMs and QRNNs, and further separates both from a class containing convolutional neural networks (CNNs, Lecun and Bengio, 1995; Kim, 2014), Elman RNNs, and gated recurrent units (GRU; Cho et al., 2014). We provide the first formal proof that LSTMs can encode functions that rational recurrences cannot. On the other hand, we show that the saturated Elman RNN and GRU are rational recurrences with constant space complexity, whereas the QRNN has unbounded space complexity. We also show that an unrestricted WFA has rich expressive power beyond any saturated RNN we consider—including the LSTM. This difference potentially opens the door to more expressive RNNs incorporating the computational efficiency of rational recurrences. Language expressiveness When applied to classification tasks like language recognition, RNNs are typically combined with a “decoder”: additional layer(s) that map their hidden states to a prediction. Thus, despite differences in state expressiveness, rational RNNs might be able to achieve comparable empirical performance to non-rational RNNs on NLP tasks. In this work, we consider the setup in which the decoders only view the final hidden state of the RNN.3 We demonstrate that 2Space complexity measures the number of different configurations an RNN can reach as a function of input length. Formal definition deferred until Section 2. 3This is common, but not the only possibility. For example, an attention decoder observes the full sequence of states. a sufficiently strong decoder can overcome some of the differences in state expressiveness between different models. For example, an LSTM can recognize anbn with a single decoding layer, whereas a QRNN provably cannot until the decoder has two layers. However, we also construct a language that an LSTM can recognize without a decoder, but a QRNN cannot recognize with any decoder. Thus, no decoder can fully compensate for the weakness of the QRNN compared to the LSTM. Experiments Finally, we conduct experiments on formal languages, justifying that our theorems correctly predict which languages unsaturated recognizers trained by gradient descent can learn. Thus, we view our hierarchy as a useful formal tool for understanding the relative capabilities of different RNN architectures. Roadmap We present the formal devices for our analysis of RNNs in Section 2. In Section 3 we develop our hierarchy of state expressiveness for single-layer RNNs. In Section 4, we shift to study RNNs as language recognizers. Finally, in Section 5, we provide empirical results evaluating the relevance of our predictions for unsaturated RNNs. 2 Building Blocks In this work, we analyze RNNs using formal models from automata theory—in particular, WFAs and counter automata. In this section, we first define the basic notion of an encoder studied in this paper, and then introduce more specialized formal concepts: WFAs, counter machines (CMs), space complexity, and, finally, various RNN architectures. 2.1 Encoders We view both RNNs and automata as encoders: machines that can be parameterized to compute a set of functions f : Σ∗→Qk, where Σ is an input alphabet and Q is the set of rational reals. Given an encoder M and parameters θ, we use Mθ to represent the specific function that the parameterized encoder computes. For each encoder, we refer to the set of functions that it can compute as its state expressiveness. For example, a deterministic finite state acceptor (DFA) is an encoder whose parameters are its transition graph. Its state expressiveness is the indicator functions for the regular languages. 2.2 WFAs Formally, a WFA is a non-deterministic finite automaton where each starting state, transition, and 445 final state is weighted. Let Q denote the set of states, Σ the alphabet, and Q the rational reals.4 This weighting is specified by three functions: 1. Initial state weights λ : Q →Q 2. Transition weights τ : Q × Σ × Q →Q 3. Final state weights ρ : Q →Q The weights are used to encode any string x ∈Σ∗: Definition 1 (Path score). Let π be a path of the form q0 →x1 q1 →x2 · · · →xt qt through WFA A. The score of π is given by A[π] = λ(q0) Qt i=1 τ(qi−1, xi, qi)  ρ(qt). By Π(x), denote the set of paths producing x. Definition 2 (String encoding). The encoding computed by a WFA A on string x is A[x] = P π∈Π(x) A[π]. Hankel matrix Given a function f : Σ∗→Q and two enumerations α, ω of the strings in Σ∗, we define the Hankel matrix of f as the infinite matrix [Hf]ij = f(αi·ωj). (1) where · denotes concatenation. It is sometimes convenient to treat Hf as though it is directly indexed by Σ∗, e.g. [Hf]αi,ωj = f(αi·ωj), or refer to a sub-block of a Hankel matrix, row- and columnindexed by prefixes and suffixes P, S ⊆Σ∗. The following result relates the Hankel matrix to WFAs: Theorem 1 (Carlyle and Paz, 1971; Fliess, 1974). For any f : Σ∗→Q, there exists a WFA that computes f if and only if Hf has finite rank. Rational series (Sakarovitch, 2009) For all k ∈ N, f : Σ∗→Qk is a rational series if there exist WFAs A1, · · · , Ak such that, for all x ∈Σ∗and 1 ≤i ≤k, Ai[x] = fi(x). 2.3 Counter Machines We now turn to introducing a different type of encoder: the real-time counter machine (CM; Merrill, 2020; Fischer, 1966; Fischer et al., 1968). CMs are deterministic finite-state machines augmented with finitely many integer counters. While processing a string, the machine updates these counters, and may use them to inform its behavior. We view counter machines as encoders mapping Σ∗→Zk. For m ∈N, ◦∈{+, −, ×}, let ◦m denote the function f(n) = n ◦m. 4WFAs are often defined over a generic semiring; we consider only the special case when it is the field of rational reals. Definition 3 (General CM; Merrill, 2020). A kcounter CM is a tuple ⟨Σ, Q, q0, u, δ⟩with 1. A finite alphabet Σ 2. A finite set of states Q, with initial state q0 3. A counter update function u : Σ × Q × {0, 1}k →{×0, −1, +0, +1}k 4. A state transition function δ : Σ × Q × {0, 1}k →Q A CM processes input tokens {xt}n t=1 sequentially. Denoting ⟨qt, ct⟩∈Q × Zk a CM’s configuration at time t, define its next configuration: qt+1 = δ  xt, qt,⃗1=0 (ct)  (2) ct+1 = u  xt, qt,⃗1=0 (ct)  (ct), (3) where ⃗1=0 is a broadcasted “zero-check” operation, i.e., ⃗1=0(v)i ≜1=0(vi). In (2) and (3), note that the machine only views the zeroness of each counter, and not its actual value. A general CM’s encoding of a string x is the value of its counter vector ct after processing all of x. Restricted CMs 1. A CM is Σ-restricted iff u and δ depend only on the current input σ ∈Σ. 2. A CM is (Σ × Q)-restricted iff u and δ depend only on the current input σ ∈Σ and the current state q ∈Q. 3. A CM is Σw-restricted iff it is (Σ × Q)restricted, and the states Q are windows over the last w input tokens, e.g., Q = Σ≤w.5 These restrictions prevent the machine from being “counter-aware”: u and δ cannot condition on the counters’ values. As we will see, restricted CMs have natural parallels in the realm of rational RNNs. In Subsection 3.2, we consider the relationship between counter awareness and rational recurrence. 2.4 Space Complexity As in Merrill (2019), we also analyze encoders in terms of state space complexity, measured in bits. Definition 4 (Bit complexity). An encoder M : Σ∗→Qk has T(n) space iff max θ {sMθ(x) | x ∈Σ≤n} = 2T(n), 5The states q ∈Σ<w represent the beginning of the sequence, before w input tokens have been seen. 446 where sMθ(x) is a minimal representation6 of M’s internal configuration immediately after x. We consider three asymptotic space complexity classes: Θ(1), Θ(log n), and Θ(n), corresponding to encoders that can reach a constant, polynomial, and exponential (in sequence length) number of configurations respectively. Intuitively, encoders that can dynamically count but cannot use more complex memory like stacks–such as all CMs–are in Θ(log n) space. Encoders that can uniquely encode every input sequence are in Θ(n) space. 2.5 Saturated Networks A saturated neural network is a discrete approximation of neural network considered by Merrill (2019), who calls it an “asymptotic network.” Given a parameterized neural encoder Mθ(x), we construct the saturated network s-Mθ(x) by taking s-Mθ(x) = lim N→∞MNθ(x) (4) where Nθ denotes the parameters θ multiplied by a scalar N. This transforms each “squashing” function (sigmoid, tanh, etc.) to its extreme values (0, ±1). In line with prior work (Weiss et al., 2018; Merrill, 2019; Suzgun et al., 2019b), we consider saturated networks a reasonable approximation for analyzing practical expressive power. For clarity, we denote the saturated approximation of an architecture by prepending it with s, e.g., s-LSTM. 2.6 RNNs A recurrent neural network (RNN) is a parameterized update function gθ : Qk×Qdx →Qk, where θ are the rational-valued parameters of the RNN and dx is the dimension of the input vector. gθ takes as input a current state h ∈Qk and input vector x ∈Qdx, and produces the next state. Defining the initial state as h0 = 0, an RNN can be applied to an input sequence x ∈(Qdx)∗one vector at a time to create a sequence of states {ht}t≤|x|, each representing an encoding of the prefix of x up to that time step. RNNs can be used to encode sequences over a finite alphabet x ∈Σ∗by first applying a mapping (embedding) e : Σ →Qdx. Multi-layer RNNs “Deep” RNNs are RNNs that have been arranged in L stacked layers R1, ..., RL. In this setting, the series of output 6I.e., the minimal state representation needed to compute Mθ correctly. This distinction is important for architectures like attention, for which some implementations may retain unusable information such as input embedding order. states h1, h2, ..., h|x| generated by each RNN on its input is fed as input to the layer above it, and only the first layer receives the original input sequence x ∈Σ∗as input. The recurrent update function g can take several forms. The original and most simple form is that of the Elman RNN. Since then, more elaborate forms using gating mechanisms have become popular, among them the LSTM, GRU, and QRNN. Elman RNNs (Elman, 1990) Let xt be a vector embedding of xt. For brevity, we suppress the bias terms in this (and the following) affine operations. ht = tanh(Wxt + Uht−1). (5) We refer to the saturated Elman RNN as the s-RNN. The s-RNN has Θ(1) space (Merrill, 2019). LSTMs (Hochreiter and Schmidhuber, 1997) An LSTM is a gated RNN with a state vector ht ∈Qk and memory vector ct ∈Qk. 7 ft = σ(Wfxt + Ufht−1) (6) it = σ(Wixt + Uiht−1) (7) ot = σ(Woxt + Uoht−1) (8) ˜ct = tanh(Wcxt + Ucht−1) (9) ct = ft ⊙ct−1 + it ⊙˜ct (10) ht = ot ⊙tanh(ct). (11) The LSTM can use its memory vector ct as a register of counters (Weiss et al., 2018). Merrill (2019) showed that the s-LSTM has Θ(log n) space. GRUs (Cho et al., 2014) Another kind of gated RNN is the GRU. zt = σ(Wzxt + Uzht−1) (12) rt = σ(Wrxt + Urht−1) (13) ut = tanh Wuxt + Uu(rt ⊙ht−1)  (14) ht = zt ⊙ht−1 + (1 −zt) ⊙ut. (15) Weiss et al. (2018) found that, unlike the LSTM, the GRU cannot use its memory to count dynamically. Merrill (2019) showed the s-GRU has Θ(1) space. 7 With respect to our presented definition of RNNs, the concatenation of ht and ct can be seen as the recurrently updated state. However in all discussions of LSTMs we treat only ht as the LSTM’s ‘state’, in line with common practice. 447 Figure 2: Diagram of the relations between encoders. Neural networks are underlined. We group by asymptotic upper bound (O), as opposed to tight (Θ). QRNNs Bradbury et al. (2016) propose QRNNs as a computationally efficient hybrid of LSTMs and CNNs. Let ∗denote convolution over time, let Wz, Wf, Wo ∈Qdx×w×k be convolutions with window length w, and let X ∈Qn×dx denote the matrix of n input vectors. An ifo-QRNN (henceforth referred to as a QRNN) with window length w is defined by Wz, Wf, and Wo as follows: Z = tanh(Wz ∗X) (16) F = σ(Wf ∗X) (17) O = σ(Wo ∗X) (18) ct = ft ⊙ct−1 + it ⊙zt (19) ht = ot ⊙ct (20) where zt, ft, ot are respectively rows of Z, F, O. A QRNN Q can be seen as an LSTM in which all uses of the state vector ht have been replaced with a computation over the last w input tokens–in this way it is similar to a CNN. The s-QRNN has Θ(log n) space, as the analysis of Merrill (2019) for the s-LSTM directly applies. Indeed, any s-QRNN is also a (Σw)-restricted CM extended with =±1 (“set to ±1”) operations. 3 State Expressiveness We now turn to presenting our results. In this section, we develop a hierarchy of single-layer RNNs based on their state expressiveness. A set-theoretic view of the hierarchy is shown in Figure 2. Let R be the set of rational series. The hierarchy relates Θ(log n) space to the following sets: • RR As in Peng et al. (2018), we say that An encoder is rationally recurrent (RR) iff its state expressiveness is a subset of R. • RR-hard An encoder is RR-hard iff its state expressiveness contains R. A Turing machine is RR-hard, as it can simulate any WFA. • RR-complete Finally, an encoder is RRcomplete iff its state expressiveness is equivalent to R. A trivial example of an RRcomplete encoder is a vector of k WFAs. The different RNNs are divided between the intersections of these classes. In Subsection 3.1, we prove that the s-LSTM, already established to have Θ(log n) space, is not RR. In Subsection 3.2, we demonstrate that encoders with restricted counting ability (e.g., QRNNs) are RR, and in Subsection 3.3, we show the same for all encoders with finite state (CNNs, s-RNNs, and s-GRUs). In Subsection 3.4, we demonstrate that none of these RNNs are RR-hard. In Appendix F, we extend this analysis from RNNs to self attention. 3.1 Counting Beyond RR We find that encoders like the s-LSTM—which, as discussed in Subsection 2.3, is “aware” of its current counter values—are not RR. To do this, we construct f0 : {a, b}∗→N that requires counter awareness to compute on strings of the form a∗b∗, making it not rational. We then construct an sLSTM computing f0 over a∗b∗. Let #a−b(x) denote the number of as in string x minus the number of bs. Definition 5 (Rectified counting). f0 : x 7→ ( #a−b(x) if #a−b(x) > 0 0 otherwise. Lemma 1. For all f : {a, b}∗→N, if f(aibj) = f0(aibj) for all i, j ∈N, then f ̸∈R . Proof. Consider the Hankel sub-block An of Hf with prefixes Pn = {ai}i≤n and suffixes Sn = {bj}j≤n. An is lower-triangular:      0 0 0 · · · 1 0 0 · · · 2 1 0 · · · ... ... ... ...     . (21) Therefore rank(An) = n−1. Thus, for all n, there is a sub-block of Hf with rank n −1, and so rank(Hf) is unbounded. It follows from Theorem 1 that there is no WFA computing f. Theorem 2. The s-LSTM is not RR. 448 q0 start a/+1 b, ̸=0/−1 b, =0/+0 Figure 3: A 1-CM computing f0 for x ∈{aibj | i, j ∈ N}. Let σ/±m denote a transition that consumes σ and updates the counter by ±m. We write σ, =0/±m (or ̸=) for a transition that requires the counter is 0. Proof. Assume the input has the form aibj for some i, j. Consider the following LSTM 8: it = σ 10Nht−1 −2N1=b(xt) + N  (22) ˜ct = tanh N1=a(xt) −N1=b(xt)  (23) ct = ct−1 + it˜ct (24) ht = tanh(ct). (25) Let N →∞. Then it = 0 iff xt = b and ht−1 = 0 (i.e. ct−1 = 0). Meanwhile, ˜ct = 1 iff xt = a. The update term becomes it˜ct =      1 if xt = a −1 if xt = b and ct−1 > 0 0 otherwise. (26) For a string aibj, the update in (26) is equivalent to the CM in Figure 3. Thus, by Lemma 1, the s-LSTM (and the general CM) is not RR. 3.2 Rational Counting While the counter awareness of a general CM enables it to compute non-rational functions, CMs that cannot view their counters are RR. Theorem 3. Any Σ-restricted CM is RR. Proof. We show that any function that a Σrestricted CM can compute can also be computed by a collection of WFAs. The CM update operations (−1, +0, +1, or ×0) can all be reexpressed in terms of functions r(x), u(x) : Σ∗→Zk to get: ct = r(xt)ct−1 + u(xt) (27) ct = Pt i=1 Qt j=i+1 r(xj)  u(xi). (28) A WFA computing [ct]i is shown in Figure 4. 8In which ft and ot are set to 1, such that ct = ct−1+it˜ct. The WFA in Figure 4 also underlies unigram rational RNNs (Peng et al., 2018). Thus, Σ-restricted CMs are actually a special case of unigram WFAs. In Appendix A, we show the more general result: Theorem 4. Any (Σ × Q)-restricted CM is RR. In many rational RNNs, the updates at different time steps are independent of each other outside of a window of w tokens. Theorem 4 tells us this independence is not an essential property of rational encoders. Rather, any CM where the update is conditioned by finite state (as opposed to being conditioned by a local window) is in fact RR. Furthermore, since (Σw)-restricted CMs are a special case of (Σ×Q)-restricted CMs, Theorem 4 can be directly applied to show that the s-QRNN is RR. See Appendix A for further discussion of this. 3.3 Finite-Space RR Theorem 4 motivates us to also think about finitespace encoders: i.e., encoders with no counters” where the output at each prefix is fully determined by a finite amount of memory. The following lemma implies that any finite-space encoder is RR: Lemma 2. Any function f : Σ∗→Q computable by a Θ(1)-space encoder is a rational series. Proof. Since f is computable in Θ(1) space, there exists a DFA Af whose accepting states are isomorphic to the range of f. We convert Af to a WFA by labelling each accepting state by the value of f that it corresponds to. We set the starting weight of the initial state to 1, and 0 for every other state. We assign each transition weight 1. Since the CNN, s-RNN, and s-GRU have finite state, we obtain the following result: Theorem 5. The CNN, s-RNN, and s-GRU are RR. While Schwartz et al. (2018) and Peng et al. (2018) showed the CNN to be RR over the max-plus semiring, Theorem 5 shows the same holds for ⟨Q, ·, +⟩. 3.4 RR Completeness While “rational recurrence” is often used to indicate the simplicity of an RNN architecture, we find in this section that WFAs are surprisingly computationally powerful. Figure 5 shows a WFA mapping binary string to their numeric value, proving WFAs have Θ(n) space. We now show that none of our RNNs are able to simulate an arbitrary WFA, even in the unsaturated form. 449 q0 start q1 ∀σ/1 ∀σ/ui(σ) ∀σ/ri(σ) Figure 4: WFA simulating unit i of a Σ-restricted CM. Let ∀σ/w(σ) denote a set of transitions consuming each token σ with weight w(σ). We use standard DFA notation to show initial weights λ(q0) = 1, λ(q1) = 0 and accepting weights ρ(q0) = 0, ρ(q1) = 1. q0 start q1 ∀σ/1 ∀σ/σ ∀σ/2 Figure 5: A WFA mapping binary strings to their numeric value. This can be extended for any base > 2. Cortes and Mohri (2000) present a similar construction. Notation is the same as Figure 4. Theorem 6. Both the saturated and unsaturated RNN, GRU, QRNN, and LSTM9 are not RR-hard. Proof. Consider the function fb mapping binary strings to their value, e.g. 101 7→5. The WFA in Figure 5 shows that this function is rational. The value of fb grows exponentially with the sequence length. On the other hand, the value of the RNN and GRU cell is bounded by 1, and QRNN and LSTM cells can only grow linearly in time. Therefore, these encoders cannot compute fb. In contrast, memory networks can have Θ(n) space. Appendix G explores this for stack RNNs. 3.5 Towards Transformers Appendix F presents preliminary results extending saturation analysis to self attention. We show saturated self attention is not RR and consider its space complexity. We hope further work will more completely characterize saturated self attention. 4 Language Expressiveness Having explored the set of functions expressible internally by different saturated RNN encoders, we turn to the languages recognizable when using them with a decoder. We consider the following setup: 1. An s-RNN encodes x to a vector ht ∈Qk. 2. A decoder function maps the last state ht to an accept/reject decision, respectively: {1, 0}. 9As well as CMs. We say that a language L is decided by an encoder-decoder pair e, d if d(e(x)) = 1 for every sequence x ∈L and otherwise d(e(x)) = 0. We explore which languages can be decided by different encoder-decoder pairings. Some related results can be found in Cortes and Mohri (2000), who study the expressive power of WFAs in relation to CFGs under a slightly different definition of language recognition. 4.1 Linear Decoders Let d1 be the single-layer linear decoder d1(ht) ≜1>0(w · ht + b) ∈{0, 1} (29) parameterized by w and b. For an encoder architecture E, we denote by D1(E) the set of languages decidable by E with d1. We use D2(E) analogously for a 2-layer decoder with 1>0 activations, where the first layer has arbitrary width. 4.2 A Decoder Adds Power We refer to sets of strings using regular expressions, e.g. a∗= {ai | i ∈N}. To illustrate the purpose of the decoder, consider the following language: L≤= {x ∈{a, b}∗| #a−b(x) ≤0}. (30) The Hankel sub-block of the indicator function for L≤over P = a∗, S = b∗is lower triangular. Therefore, no RR encoder can compute it. However, adding the D1 decoder allows us to compute this indicator function with an s-QRNN, which is RR. We set the s-QRNN layer to compute the simple series ct = #a−b(x) (by increasing on a and decreasing on b). The D1 layer then checks ct ≤0. So, while the indicator function for L≤is not itself rational, it can be easily recovered from a rational representation. Thus, L≤∈D1(s-QRNN). 4.3 Case Study: anbn We compare the language expressiveness of several rational and non-rational RNNs on the following: anbn ≜{anbn | n ∈N} (31) anbnΣ∗≜{anbn(a|b)∗| 0 < n}. (32) anbn is more interesting than L≤because the D1 decoder cannot decide it simply by asking the encoder to track #a−b(x), as that would require it to compute the non-linearly separable =0 function. Thus, it appears at first that deciding anbn with D1 450 might require a non-rational RNN encoder. However, we show below that this is not the case. Let ◦denote stacking two layers. We will go on to discuss the following results: anbn ∈D1(WFA) (33) anbn ∈D1(s-LSTM) (34) anbn ̸∈D1(s-QRNN) (35) anbn ∈D1(s-QRNN ◦s-QRNN) (36) anbn ∈D2(s-QRNN) (37) anbnΣ∗∈D1(s-LSTM) (38) anbnΣ∗/∈D (s-QRNN) for any D (39) anbnΣ∗∪{ϵ} ∈D1(s-QRNN ◦s-QRNN) (40) WFAs (Appendix B) In Theorem 8 we present a function f : Σ∗→Q satisfying f(x) > 0 iff x ∈ anbn, and show that Hf has finite rank. It follows that there exists a WFA that can decide anbn with the D1 decoder. Counterintuitively, anbn can be recognized using rational encoders. QRNNs (Appendix C) Although anbn ∈ D1(WFA), it does not follow that every rationally recurrent model can also decide anbn with the help of D1. Indeed, in Theorem 9, we prove that anbn /∈D1(s-QRNN), whereas anbn ∈ D1(s-LSTM) (Theorem 13). It is important to note that, with a more complex decoder, the QRNN could recognize anbn. For example, the s-QRNN can encode c1 = #a−b(x) and set c2 to check whether x contains ba, from which a D2 decoder can recognize anbn (Theorem 10). This does not mean the hierarchy dissolves as the decoder is strengthened. We show that anbnΣ∗— which seems like a trivial extension of anbn—is not recognizable by the s-QRNN with any decoder. This result may appear counterintuitive, but in fact highlights the s-QRNN’s lack of counter awareness: it can only passively encode the information needed by the decoder to recognize anbn. Failing to recognize that a valid prefix has been matched, it cannot act to preserve that information after additional input tokens are seen. We present a proof in Theorem 11. In contrast, in Theorem 14 we show that the s-LSTM can directly encode an indicator for anbnΣ∗in its internal state. Proof sketch: anbnΣ∗/∈D(s-QRNN). A sequence s1 ∈anbnΣ∗is shuffled to create s2 /∈ anbnΣ∗with an identical multi-set of counter updates.10 Counter updates would be order agnostic if not for reset operations, and resets mask all history, so extending s1 and s2 with a single suffix s containing all of their w-grams reaches the same final state. Then for any D, D(s-QRNN) cannot separate them. We formalize this in Theorem 11. We refer to this technique as the suffix attack, and note that it can be used to prove for multiple other languages L ∈D2(s-QRNN) that L·Σ∗is not in D(s-QRNN) for any decoder D. 2-layer QRNNs Adding another layer overcomes the weakness of the 1-layer s-QRNN, at least for deciding anbn. This follows from the fact that anbn ∈D2(s-QRNN): the second QRNN layer can be used as a linear layer. Similarly, we show in Theorem 10 that a 2-layer s-QRNN can recognize anbnΣ∗∪{ϵ}. This suggests that adding a second s-QRNN layer compensates for some of the weakness of the 1-layer s-QRNN, which, by the same argument for anbnΣ∗ cannot recognize anbnΣ∗∪{ϵ} with any decoder. 4.4 Arbitrary Decoder Finally, we study the theoretical case where the decoder is an arbitrary recursively enumerable (RE) function. We view this as a loose upper bound of stacking many layers after a rational encoder. What information is inherently lost by using a rational encoder? WFAs can uniquely encode each input, making them Turing-complete under this setup; however, this does not hold for rational s-RNNs. RR-complete Assuming an RR-complete encoder, a WFA like Figure 5 can be used to encode each possible input sequence over Σ to a unique number. We then use the decoder as an oracle to decide any RE language. Thus, an RR-complete encoder with an RE decoder is Turing-complete. Bounded space However, the Θ(log n) space bound of saturated rational RNNs like the s-QRNN means these models cannot fully encode the input. In other words, some information about the prefix x:t must be lost in ct. Thus, rational s-RNNs are not Turing-complete with an RE decoder. 5 Experiments In Subsection 4.3, we showed that different saturated RNNs vary in their ability to recognize anbn and anbnΣ∗. We now test empirically whether 10Since QRNN counter updates depend only on the wgrams present in the sequence. 451 Figure 6: Accuracy recognizing L5 and anbnΣ∗. “QRNN+” is a QRNN with a 2-layer decoder, and “2QRNN” is a 2-layer QRNN with a 1-layer decoder. these predictions carry over to the learnable capacity of unsaturated RNNs.11 We compare the QRNN and LSTM when coupled with a linear decoder D1. We also train a 2-layer QRNN (“QRNN2”) and a 1-layer QRNN with a D2 decoder (“QRNN+”). We train on strings of length 64, and evaluate generalization on longer strings. We also compare to a baseline that always predicts the majority class. The results are shown in Figure 6. We provide further experimental details in Appendix E. Experiment 1 We use the following language, which has similar formal properties to anbn, but with a more balanced label distribution: L5 =  x ∈(a|b)∗| |#a−b(x)| < 5 . (41) In line with (34), the LSTM decides L5 perfectly for n ≤64, and generalizes fairly well to longer strings. As predicted in (35), the QRNN cannot fully learn L5 even for n = 64. Finally, as predicted in (36) and (37), the 2-layer QRNN and the QRNN with D2 do learn L5. However, we see that they do not generalize as well as the LSTM for longer strings. We hypothesize that these multi11https://github.com/viking-sudo-rm/ rr-experiments layer models require more epochs to reach the same generalization performance as the LSTM.12 Experiment 2 We also consider anbnΣ∗. As predicted in (38) and (40), the LSTM and 2-layer QRNN decide anbnΣ∗flawlessly for n = 64. A 1-layer QRNN performs at the majority baseline for all n with both a 1 and 2-layer decoder. Both of these failures were predicted in (39). Thus, the only models that learned anbnΣ∗were exactly those predicted by the saturated theory. 6 Conclusion We develop a hierarchy of saturated RNN encoders, considering two angles: space complexity and rational recurrence. Based on the hierarchy, we formally distinguish the state expressiveness of the non-rational s-LSTM and its rational counterpart, the s-QRNN. We show further distinctions in state expressiveness based on encoder space complexity. Moreover, the hierarchy translates to differences in language recognition capabilities. Strengthening the decoder alleviates some, but not all, of these differences. We present two languages, both recognizable by an LSTM. We show that one can be recognized by an s-QRNN only with the help of a decoder, and that the other cannot be recognized by an s-QRNN with the help of any decoder. While this means existing rational RNNs are fundamentally limited compared to LSTMs, we find that it is not necessarily being rationally recurrent that limits them: in fact, we prove that a WFA can perfectly encode its input—something no saturated RNN can do. We conclude with an analysis that shows that an RNN architecture’s strength must also take into account its space complexity. These results further our understanding of the inner working of NLP systems. We hope they will guide the development of more expressive rational RNNs. Acknowledgments We appreciate Amir Yehudayoff’s help in finding the WFA used in Theorem 8, and the feedback of researchers at the Allen Institute for AI, our anonymous reviewers, and Tobias Jaroslaw. The project was supported in part by NSF grant IIS-1562364, Israel Science Foundation grant no.1319/16, and the European Research Council under the EU’s Horizon 2020 research and innovation program, grant agreement No. 802774 (iEXTRACT). 12As shown by the baseline, generalization is challenging because positive labels become less likely as strings get longer. 452 References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. Borja Balle, Xavier Carreras, Franco M. Luque, and Ariadna Quattoni. 2014. Spectral learning of weighted automata. Machine Learning, 96(1):33– 63. James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-recurrent neural networks. J. W. Carlyle and A. Paz. 1971. Realizations by stochastic finite automata. J. Comput. Syst. Sci., 5(1):26–40. Yining Chen, Sorcha Gilroy, Andreas Maletti, Jonathan May, and Kevin Knight. 2018. Recurrent neural networks as weighted language recognizers. In Proc. of NAACL, pages 2261–2271. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proc. of EMNLP, pages 1724–1734. Corinna Cortes and Mehryar Mohri. 2000. Contextfree recognition with weighted automata. Grammars, 3(2/3):133–150. Jesse Dodge, Roy Schwartz, Hao Peng, and Noah A. Smith. 2019. RNN architecture learning with sparse regularization. In Proc. of EMNLP, pages 1179– 1184. Jeffrey L Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–211. Patrick C Fischer. 1966. Turing machines with restricted memory access. Information and Control, 9(4):364–379. Patrick C. Fischer, Albert R. Meyer, and Arnold L. Rosenberg. 1968. Counter machines and counter languages. Mathematical Systems Theory, 2(3):265– 283. Michel Fliess. 1974. Matrices de Hankel. J. Math. Pures Appl, 53(9):197–222. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. Proceedings of Workshop for NLP Open Source Software (NLP-OSS). Michael Hahn. 2020. Theoretical limitations of selfattention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156–171. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proc. of EMNLP, pages 1746–1751. Yann Lecun and Yoshua Bengio. 1995. The Handbook of Brain Theory and Neural Networks, chapter “Convolutional Networks for Images, Speech, and Time Series”. MIT Press. William Merrill. 2019. Sequential neural networks as automata. In Proceedings of the Workshop on Deep Learning and Formal Languages: Building Bridges, pages 1–13. William Merrill. 2020. On the linguistic capacity of real-time counter automata. Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Rational recurrences. In Proc. of EMNLP, pages 1203–1214. Jacques Sakarovitch. 2009. Rational and recognisable power series. In Handbook of Weighted Automata, pages 105–174. Springer. Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Bridging CNNs, RNNs, and weighted finitestate machines. In Proc. of ACL, pages 295–305. Hava T. Siegelmann and Eduardo D. Sontag. 1992. On the computational power of neural nets. In Proc. of COLT, pages 440–449. Hava T. Siegelmann and Eduardo D. Sontag. 1994. Analog computation via neural networks. Theoretical Computer Science, 131(2):331–360. Mirac Suzgun, Yonatan Belinkov, Stuart Shieber, and Sebastian Gehrmann. 2019a. LSTM networks can perform dynamic counting. In Proceedings of the Workshop on Deep Learning and Formal Languages: Building Bridges, pages 44–54. Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, and Stuart M. Shieber. 2019b. Memory-augmented recurrent neural networks can learn generalized Dyck languages. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite precision RNNs for language recognition. 453 A Rational Counting We extend the result in Theorem 3 as follows. Theorem 7. Any (Σ × Q)-restricted CM is rationally recurrent. Proof. We present an algorithm to construct a WFA computing an arbitrary counter in a (Σ × Q)restricted CM. First, we create two independent copies of the transition graph for the restricted CM. We refer to one copy of the CM graph as the add graph, and the other as the multiply graph. The initial state in the add graph receives a starting weight of 1, and every other state receives a starting weight of 0. Each state in the add graph receives an accepting weight of 0, and each state in the multiply graph receives an accepting weight of 1. In the add graph, each transition receives a weight of 1. In the multiply graph, each transition receives a weight of 0 if it represents ×0, and 1 otherwise. Finally, for each non-multiplicative update σ/+m13 from qi to qj in the original CM, we add a WFA transition σ/m from qi in the add graph to qj in the multiply graph. Each counter update creates one path ending in the multiply graph. The path score is set to 0 if that counter update is “erased” by a ×0 operation. Thus, the sum of all the path scores in the WFA equals the value of the counter. This construction can be extended to accommodate =m counter updates from qi to qj by adding an additional transition from the initial state to qj in the multiplication graph with weight m. This allows us to apply it directly to s-QRNNs, whose update operations include =1 and =−1. B WFAs We show that while WFAs cannot directly encode an indicator for the language anbn = {anbn| | n ∈N}, they can encode a function that can be thresholded to recognize anbn, i.e.: Theorem 8. The language anbn = {anbn | n ∈ N} over Σ = {a, b} is in D1(WFA). We prove this by showing a function whose Hankel matrix has finite rank that, when combined with the identity transformation (i.e., w = 1, b = 0) followed by thresholding, is an indicator for anbn. Using the shorthand σ(x) = #σ(x), the function 13Note that m = −1 for the −1 counter update. is: f(w) = ( 0.5 −2(a(x) −b(x))2 if x ∈a∗b∗ −0.5 otherwise. (42) Immediately f satisfies 1>0(f(x)) ⇐⇒x ∈ anbn. To prove that its Hankel matrix, Hf, has finite rank, we will create 3 infinite matrices of ranks 3, 3 and 1, which sum to Hf. The majority of the proof will focus on the rank of the rank 3 matrices, which have similar compositions. We now show 3 series r, s, t and a set of series they can be combined to create. These series will be used to create the base vectors for the rank 3 matrices. ai = i(i + 1) 2 (43) bi = i2 −1 (44) ri = fix0(i, ai−2) (45) si = fix1(i, −bi−1) (46) ti = fix2(i, ai−1) (47) where for every j ≤2, fixj(i, x) =      x if i > 2 1 if i = j 0 otherwise. (48) Lemma 3. Let ci = 1 −2i2 and {c(k)}k∈N be the set of series defined c(k) i = c|i−k|. Then for every i, k ∈N, c(k) i = c(k) 0 ri + c(k) 1 si + c(k) 2 ti. Proof. For i ∈{0, 1, 2}, ri, si and ti collapse to a ‘select’ operation, giving the true statement c(k) i = c(k) i · 1. We now consider the case i > 2. Substituting the series definitions in the right side of the equation gives ckai−2 + c|k−1|(−bi−1) + ck−2ai−1 (49) which can be expanded to (1 −2k2) · i2 −3i + 2 2 + (1 −2(k −1)2) · (1 −(i −1)2) + (1 −2(k −2)2) · (i −1)i 2 . 454 Reordering the first component and partially opening the other two gives (−2k2 + 1)i2 −3i + 2 2 + (−2k2 + 4k −1)(2i −i2)+ (−k2 + 4k −3.5)(i2 −i) and a further expansion gives −k2i2+ 0.5i2 + 3k2i −1.5i −2k2 + 1+ 2k2i2 −4ki2+ i2 −4k2i + 8ki −2i+ −k2i2 + 4ki2− 3.5i2 + k2i −4ki + 3.5i which reduces to −2i2 + 4ki −2k2 + 1 = 1 −2(k −i)2 = c(k) i . We restate this as: Corollary 1. For every k ∈N, the series c(k) is a linear combination of the series r, s and t. We can now show that f is computable by a WFA, proving Theorem 8. By Theorem 1, it is sufficient to show that Hf has finite rank. Lemma 4. Hf has finite rank. Proof. For every P, S ⊆{a, b}∗, denote [Hf|P,S]u,v = ( [Hf]u,v if u ∈P and v ∈S 0 otherwise Using regular expressions to describe P, S, we create the 3 finite rank matrices which sum to Hf: A = (Hf + 0.5)|a∗,a∗b∗ (50) B = (Hf + 0.5)|a∗b+,b∗ (51) C = (−0.5)|u,v. (52) Intuitively, these may be seen as a “split” of Hf into sections as in Figure 7, such that A and B together cover the sections of Hf on which u·v does not contain the substring ba (and are equal on them to Hf + 0.5), and C is simply the constant matrix −0.5. Immediately, Hf = A + B + C, and rank(C) = 1. We now consider A. Denote PA = a∗, SA = a∗b∗. A is non-zero only on indices u ∈PA, v ∈ SA, and for these, u·v ∈a∗b∗and Au,v = 0.5 + f(u·v) = 1 −2(a(u) + a(v) −b(v))2. This gives that for every u ∈PA, v ∈SA, Au,v = c|a(u)−(b(v)−a(v))| = c(a(u)) b(v)−a(v). (53) Figure 7: Intuition of the supports of A, B and C. For each τ ∈{r, s, t}, define ˜τ ∈Q{a,b}∗as ˜τv = 1v∈a∗b∗· τb(v)−a(v). (54) We get from Corollary 1 that for every u ∈a∗, the uth row of A is a linear combination of ˜r, ˜s, and ˜t. The remaining rows of A are all 0 and so also a linear combination of these, and so rank(A) ≤3. Similarly, we find that the nonzero entries of B satisfy Bu,v = c|b(v)−(a(u)−b(u))| = c(b(v)) a(u)−b(u) (55) and so, for τ ∈{r, s, t}, the columns of B are linear combinations of the columns τ ′ ∈Q{a,b}∗ defined τ ′ u = 1u∈a∗b+ · τa(u)−b(u). (56) Thus we conclude rank(B) ≤3. Finally, Hf = A + B + C, and so by the subadditivity of rank in matrices, rank(Hf) ≤ X M=A,B,C rank(M) = 7. (57) In addition, the rank of ˜Hf ∈Q{a,b}≤2,{a,b}≤2 defined [ ˜Hf]u,v = [Hf]u,v is 7, and so we can conclude that the bound in the proof is tight, i.e., rank(Hf) = 7. From here ˜Hf is a complete subblock of Hf and can be used to explicitly construct a WFA for f, using the spectral method described by Balle et al. (2014). C s-QRNNs Theorem 9. No s-QRNN with a linear threshold decoder can recognize anbn = {anbn | n ∈N}, i.e., anbn /∈D1(s-QRNN). 455 Proof. An ifo s-QRNN can be expressed as a Σkrestricted CM with the additional update operations {:= −1, := 1}, where k is the window size of the QRNN. So it is sufficient to show that such a machine, when coupled with the decoder D1 (linear translation followed by thresholding), cannot recognize anbn. Let A be some such CM, with window size k and h counters. Take n = k + 10 and for every m ∈N denote wm = anbm and the counter values of A after wm as cm ∈Qh. Denote by ut the vector of counter update operations made by this machine on input sequence wm at time t ≤n + m. As A is dependent only on the last k counters, necessarily all uk+i are identical for every i ≥1. It follows that for all counters in the machine that go through an assignment (i.e., :=) operation in uk+1, their values in ck+i are identical for every i ≥1, and for every other counter j, ck+i j −ck j = i · δ for some δ ∈Z. Formally: for every i ≥1 there are two sets I, J = [h] \ I and constant vectors u ∈NI, v ∈NJ s.t. ck+i|I = u and [ck+i −ck]|J = i · v. We now consider the linear thresholder, defined by weights and bias w, b. In order to recognise anbn, the thresholder must satisfy: w · ck+9+b < 0 (58) w · ck+10+b > 0 (59) w · ck+11+b < 0 (60) Opening these equations gives: w|J(·ck|J+9v|J) + w|I · u < 0 (61) w|J(·ck|J+10v|J) + w|I · u > 0 (62) w|J(·ck|J+11v|J) + w|I · u < 0 (63) but this gives 9w|J·v|J < 10w|J·v|J > 11w|J·v|J, which is impossible. However, this does not mean that the s-QRNN is entirely incapable of recognising anbn. Increasing the decoder power allows it to recognise anbn quite simply: Theorem 10. For the two-layer decoder D2, anbn ∈D2(s-QRNN). Proof. Let #ba(x) denote the number of ba 2grams in x. We use s-QRNN with window size 2 to maintain two counters: [ct]1 = #a−b(x) (64) [ct]2 = #ba(x). (65) [ct]2 can be computed provided the QRNN window size is ≥2. A two-layer decoder can then check 0 ≤[ct]1 ≤0 ∧[ct]2 ≤0. (66) Theorem 11 (Suffix attack). No s-QRNN and decoder can recognize the language anbnΣ∗= anbn(a|b)∗, n > 0, i.e., anbnΣ∗/∈L(s-QRNN) for any decoder L. The proof will rely on the s-QRNN’s inability to “freeze” a computed value, protecting it from manipulation by future input. Proof. As in the proof for Theorem 9, it is sufficient to show that no Σk-restricted CM with the additional operations {:=−1, :=1} can recognize anbnΣ∗for any decoder L. Let A be some such CM, with window size k and h counters. For every w ∈Σn denote by c(w) ∈Qh the counter values of A after processing w. Denote by ut the vector of counter update operations made by this machine on an input sequence w at time t ≤|w|. Recall that A is Σk restricted, meaning that ui depends exactly on the window of the last k tokens for every i. We now denote j = k + 10 and consider the sequences w1 = ajbjajbjajbj, w2 = ajbj−1ajbj+1ajbj. w2 is obtained from w1 by removing the 2j-th token of w1 and reinserting it at position 4j. As all of w1 is composed of blocks of ≥k identical tokens, the windows preceding all of the other tokens in w1 are unaffected by the removal of the 2j-th token. Similarly, being added onto the end of a substring bk, its insertion does not affect the windows of the tokens after it, nor is its own window different from before. This means that overall, the set of all operations ui performed on the counters is identical in w1 and in w2. The only difference is in their ordering. w1 and w2 begin with a shared prefix ak, and so necessarily the counters are identical after processing it. We now consider the updates to the counters after these first k tokens, these are determined by the windows of k tokens preceding each update. 456 First, consider all the counters that undergo some assignment (:=) operation during these sequences, and denote by {w} the multiset of windows in w ∈Σk for which they are reset. w1 and w2 only contain k-windows of types axbk−x or bxak−x, and so these must all re-appear in the shared suffix bjajbj of w1 and w2, at which point they will be synchronised. It follows that these counters all finish with identical value in c(w1) and c(w2). All the other counters are only updated using addition of −1, 1 and 0, and so the order of the updates is inconsequential. It follows that they too are identical in c(w1) and c(w2), and therefore necessarily that c(w1) = c(w2). From this we have w1, w2 satisfying w1 ∈ anbnΣ∗, w2 /∈anbnΣ∗but also c(w1) = c(w2). Therefore, it is not possible to distinguish between w1 and w2 with the help of any decoder, despite the fact that w1 ∈anbnΣ∗and w2 /∈anbnΣ∗. It follows that the CM and s-QRNN cannot recognize anbnΣ∗with any decoder. For the opposite extension Σ∗anbn, in which the language is augmented by a prefix, we cannot use such a “suffix attack”. In fact, Σ∗anbn can be recognized by an s-QRNN with window length w ≥2 and a linear threshold decoder as follows: a counter counts #a−b(x) and is reset to 1 on appearances of ba, and the decoder compares it to 0. Note that we define decoders as functions from the final state to the output. Thus, adding an additional QRNN layer does not count as a “decoder” (as it reads multiple states). In fact, we show that having two QRNN layers allows recognizing anbnΣ∗. Theorem 12. Let ϵ be the empty string. Then, anbnΣ∗∪{ϵ} ∈D1(s-QRNN ◦s-QRNN). Proof. We construct a two-layer s-QRNN from which anbnΣ∗can be recognized. Let $ denote the left edge of the string. The first layer computes two quantities dt and et as follows: dt = #ba(x) (67) et = #$b(x). (68) Note that et can be interpreted as a binary value checking whether the first token was b. The second layer computes ct as a function of dt, et, and xt (which can be passed through the first layer). We will demonstrate a construction for ct by creating linearly separable functions for the gate terms ft and zt that update ct. ft = ( 1 if dt ≤0 0 otherwise (69) zt = ( 1 if xt = a ∨et −1 otherwise. (70) Now, the update function ut to ct can be expressed ut = ftzt =      +0 if 0 < dt +1 if dt ≤0 ∧(xt = a ∨et) −1 otherwise. (71) Finally, the decoder accepts iff ct ≤0. To justify this, we consider two cases: either x starts with b or a. If x starts with b, then et = 0, so we increment ct by 1 and never decrement it. Since 0 < ct for any t, we will reject x. If x starts with a, then we accept iff there exists a sequence of bs following the prefix of as such that both sequences have the same length. D s-LSTMs In contrast to the s-QRNN, we show that the sLSTM paired with a simple linear and thresholding decoder can recognize both anbn and anbnΣ∗. Theorem 13. anbn ∈D1(s-LSTM). Proof. Assuming a string aibi, we set two units of the LSTM state to compute the following functions using the CM in Figure 3: [ct]1 = ReLU(i −j) (72) [ct]2 = ReLU(j −i). (73) We also add a third unit [ct]3 that tracks whether the 2-gram ba has been encountered, which is equivalent to verifying that the string has the form aibi. Allowing ht = tanh(ct), we set the linear threshold layer to check [ht]1 + [ht]2 + [ht]3 ≤0. (74) Theorem 14. anbnΣ∗∈D1(s-LSTM). 457 Proof. We use the same construction as Theorem 13, augmenting it with [ct]4 ≜[ht−1]1 + [ht−1]2 + [ht−1]3 ≤0. (75) We decide x according to the (still linearly separable) equation 0 < [ht]4  ∨ [ht]1 + [ht]2 + [ht]3 ≤0  . (76) E Experimental Details Models were trained on strings up to length 64, and, at each index t, were asked to classify whether or not the prefix up to t was a valid string in the language. Models were then tested on independent datasets of lengths 64, 128, 256, 512, 1024, and 2048. The training dataset contained 100000 strings, and the validation and test datasets contained 10000. We discuss task-specific schemes for sampling strings in the next paragraph. All models were trained for a maximum of 100 epochs, with early stopping after 10 epochs based on the validation cross entropy loss. We used default hyperparameters provided by the open-source AllenNLP framework (Gardner et al., 2018). The code is available at https://github.com/viking-sudo-rm/ rr-experiments. Sampling strings For the language L5, each token was sampled uniformly at random from Σ = {a, b}. For anbnΣ∗, half the strings were sampled in this way, and for the other half, we sampled n uniformly between 0 and 32, fixing the first 2n characters of the string to anbn and sampling the suffix uniformly at random. Experimental cost Experiments were run for 20 GPU hours on Quadro RTX 8000. F Self Attention Architecture We place saturated self attention (Vaswani et al., 2017) into the state expressiveness hierarchy. We consider a single-head self attention encoder that is computed as follows: 1. At time t, compute queries qt, keys kt, and values vt from the input embedding xt using a linear transformation. 2. Compute attention head ht by attending over the keys and values up to time t (K:t and V:t) with query qt. 3. Let ∥·∥L denote a layer normalization operation (Ba et al., 2016). h′ t = ReLU Wh · ∥ht∥L  (77) ct = Wch′ t L. (78) This simplified architecture has only one attention head, and does not incorporate residual connections. It is also masked (i.e., at time t, can only see the prefix X:t), which enables direct comparison with unidirectional RNNs. For simplicity, we do not add positional information to the input embeddings. Theorem 15. Saturated masked self attention is not RR. Proof. Let #σ(x) denote the number of occurences of σ ∈Σ in string x. We construct a self attention layer to compute the following function over {a, b}∗: f(x) = ( 0 if #a(x) = #b(x) 1 otherwise. (79) Since the Hankel sub-block over P = a∗, S = b∗ has infinite rank, f ̸∈R. Fix vt = xt. As shown by Merrill (2019), saturated attention over a prefix of input vectors X:t reduces to sum of the subsequence for which key-query similarity is maximized, i.e., denoting I = {i ∈[t] | ki · qt = m} where m = max{ki · qt|i ∈[t]}: ht = 1 |I| X i∈I xti. (80) For all t, set the key and query kt, qt = 1. Thus, all the key-query similarities are 1, and we obtain: ht = 1 t t X t′=1 xt′ (81) = 1 t #a(x), #b(x) ⊤. (82) Applying layer norm to this quantity preserves equality of the first and second elements. Thus, we set the layer in (77) to independently check 0 < [h0 t ]1 −[h0 t ]2 and [h0 t ]1 −[h0 t ]2 < 0 using ReLU. The final layer ct sums these two quantities, returning 0 if neither condition is met, and 1 otherwise. Since saturated self attention can represent f /∈ R, it is not RR. 458 Space Complexity We show that self attention falls into the same space complexity class as the LSTM and QRNN. Our method here extends Merrill (2019)’s analysis of attention. Theorem 16. Saturated single-layer self attention has Θ(log n) space. Proof. The construction from Theorem 15 can reach a linear (in sequence length) number of different outputs, implying a linear number of different configurations, and so that the space complexity of saturated self attention is Ω(log n). We now show the upper bound O(log n). A sufficient representation for the internal state (configuration) of a self-attention layer is the unordered group of key-value pairs over the prefixes of the input sequence. Since fk : xt 7→kt and fv : xt 7→vt have finite domain (Σ), their images K = image(fk), V = image(fv) are finite.14 Thus, there is also a finite number of possible key-value pairs ⟨kt, vt⟩∈ K×V . Recall that the internal configuration can be specified by the number of occurrences of each possible key-value pair. Taking n as an upper bound for each of these counts, we bound the number of configurations of the layer as n|K×V |. Therefore the bit complexity is log2 n|K×V | = O(log n). (83) Note that this construction does not apply if the “vocabulary” we are attending over is not finite. Thus, using unbounded positional embeddings, stacking multiple self attention layers, or applying attention over other encodings with unbounded state might reach Θ(n). While it eludes our current focus, we hope future work will extend the saturated analysis to self attention more completely. We direct the reader to Hahn (2020) for some additional related work. G Memory Networks All of the standard RNN architectures considered in Section 3 have O(log n) space in their saturated form. In this section, we consider a stack RNN encoder similar to the one proposed by Suzgun et al. (2019b) and show how it, like a WFA, can encode binary representations from strings. Thus, 14Note that any periodic positional encoding will also have finite image. the stack RNN has Θ(n) space. Additionally, we find that it is not RR. This places it in the upperright box of Figure 1. Classically, a stack is a dynamic list of objects to which elements v ∈V can be added and removed in a LIFO manner (using push and pop operations). The stack RNN proposed in Suzgun et al. (2019b) maintains a differentiable variant of such a stack, as follows: Differentiable Stack In a differentiable stack, the update operation takes an element st to push and a distribution πt over the update operations push, pop, and no-op, and returns the weighted average of the result of applying each to the current stack. The averaging is done elementwise along the stacks, beginning from the top entry. To facilitate this, differentiable stacks are padded with infinite ‘null entries’. Their elements must also have a weighted average operation defined. Definition 6 (Geometric k-stack RNN encoder). Initialize the stack S to an infinite list of null entries, and denote by St the stack value at time t. Using 1-indexing for the stack and denoting [St−1]0 ≜st, the geometric k-stack RNN recurrent update is:15 st = fs(xt, ct−1) πt = fπ(xt, ct−1) ∀i ≥1 [St]i = 3 X a=1 [πt]a[St−1]i+a−2. In this work we will consider the case where the null entries are 0 and the encoding ct is produced as a geometric-weighted sum of the stack contents, ct = ∞ X i=1 1 2 i−1[St]i. This encoding gives preference to the latest values in the stack, giving initial stack encoding c0 = 0. Space Complexity The memory introduced by the stack data structure pushes the encoder into Θ(n) space. We formalize this by showing that, like a WFA, the stack RNN can encode binary strings to their value. Lemma 5. The saturated stack RNN can compute the converging binary encoding function, i.e., 101 7→1 · 1 + 0.5 · 0 + 0.25 · 1 = 1.25. 15Intuitively, [πt]a corresponds to the operations push, noop, and pop, for the values a = 1, 2, 3 respectively. 459 Proof. Choose k = 1. Fix the controller to always push xt. Then, the encoding at time t will be ct = t X i=1 1 2 i−1xi. (84) This is the value of the prefix x:t in binary. Rational Recurrence We provide another construction to show that the stack RNN can compute non-rational series. Thus, it is not RR. Definition 7 (Geometric counting). Define f2 : {a, b}∗→N such that f2(x) = exp 1 2 #a−b(x)  −1. Like similar functions we analyzed in Section 3, the Hankel matrix Hf2 has infinite rank over the sub-block aibj. Lemma 6. The saturated stack RNN can compute f2. Proof. Choose k = 1. Fix the controller to push 1 for xt = a, and pop otherwise.
2020
43
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4748–4757 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4748 Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models Kaiji Lu Carnegie Mellon University [email protected] Piotr Mardziel Carnegie Mellon University [email protected] Klas Leino Carnegie Mellon University [email protected] Matt Fedrikson Carnegie Mellon University [email protected] Anupam Datta Carnegie Mellon University [email protected] Abstract LSTM-based recurrent neural networks are the state-of-the-art for many natural language processing (NLP) tasks. Despite their performance, it is unclear whether, or how, LSTMs learn structural features of natural languages such as subject-verb number agreement in English. Lacking this understanding, the generality of LSTMs on this task and their suitability for related tasks remains uncertain. Further, errors cannot be properly attributed to a lack of structural capability, training data omissions, or other exceptional faults. We introduce influence paths, a causal account of structural properties as carried by paths across gates and neurons of a recurrent neural network. The approach refines the notion of influence (the subject’s grammatical number has influence on the grammatical number of the subsequent verb) into a set of gate-level or neuron-level paths. The set localizes and segments the concept (e.g., subject-verb agreement), its constituent elements (e.g., the subject), and related or interfering elements (e.g., attractors). We exemplify the methodology on a widely-studied multi-layer LSTM language model, demonstrating its accounting for subject-verb number agreement. The results offer both a finer and a more complete view of an LSTM’s handling of this structural aspect of the English language than prior results based on diagnostic classifiers and ablation. 1 Introduction Traditional rule-based NLP techniques can capture syntactic structures, while statistical NLP techniques, such as n-gram models, can heuristically integrate semantics of a natural language. Modern RNN-based models such as Long Short-Term Memory (LSTM) models are tasked with incorporating both semantic features from the statistical associations in their training corpus, and structural features generalized from the same. In: Out: boys behind the tree (run) The s0 s1 s2 s3 s4 Cell c0 1 Candidate Cell ˜c1 1 Cell c1 1 Candidate Cell ˜c0 1 Hidden h0 1 c1 2 c1 3 c1 4 h1 4 c0 4 ˜c1 4 ˜c0 4 h0 4 agreement s4(run) - s4(runs) grammatical number boys −boy+boys 2 Figure 1: Subject-verb agreement task for a 2-layer LSTM language model, and primary paths across various LSTM gates implementing subject-verb number agreement. A language model assigns score s to each word. Agreement is the score of the correctly numbered verb minus that of the incorrectly numbered verb. Despite evidence that LSTMs can capture syntactic rules in artificial languages (Gers and Schmidhuber, 2001), it is unclear whether they are as capable in natural languages (Linzen et al., 2016; Lakretz et al., 2019) in the context of rules such as subject-verb number agreement, especially when not supervised for the particular feature. The incongruence derives from this central question: does an LSTM language model’s apparent performance in subject-verb number agreement derive from statistical heuristics (like n-gram models) or from generalized knowledge (like rule-based models)? Recent work has begun addressing this question (Linzen et al., 2016) in the context of language models: models tasked with modeling the likelihood of the next word following a sequence of words as expected in a natural language (see Figure 1, bottom). Subject-verb number agreement dictates that the verb associated with a given subject 4749 should match its number (e.g., in Figure 1, the verb “run” should match with the subject “boys”). Giulianelli et al. (2018) showed that the subject grammatical number is associated with various gates in an LSTM, and Lakretz et al. (2019) showed that ablation (disabling activation) of an LSTM model at certain locations can reduce its accuracy at scoring verbs of the correct grammatical number. Influence offers an alternate means of exploring properties like number agreement. We say an input is influential on an outcome when changing just the input and nothing else induces a change on the outcome. In English grammar, the number of a subject is influential on the number of its verb, in that changing the number of that subject while keeping all other elements of a sentence fixed would necessitate a change in the number of the verb. Algorithmic transparency literature offers formal definitions for empirically quantifying notions of influence for systems in general (Datta et al., 2016) and for deep neural networks specifically (Leino et al., 2018; Sundararajan et al., 2017). The mere fact that subject number is influential on verb number as output by an LSTM model is sufficient to conclude that it incorporates the agreement concept in some way but does not indicate whether it operates as a statistical heuristic or as a generalized rule. We address this question with influence paths, which decompose influence into a set of paths across the gates and neurons of an LSTM model. The approach has several elements: 1. Define an input parameter to vary the conceptspecific quantity under study (e.g., the grammatical number of a particular noun, bottomleft node in Figure 1) and a concept-specific output feature to measure the parameter’s effect on (e.g, number agreement with the parameterized noun, bottom-right node in Figure 1). 2. Apply a gradient-based influence method to quantify the influence of the concept parameter on the concept output feature; as per the chain rule, decompose the influence into model-path-specific quantities. 3. Inspect and characterize the distribution of influence across the model paths. The paths demonstrate where relevant state information necessitated by the concept is kept, how it gets there, how it ends up being used to affect the model’s output, and how and where related concepts interfere. Our approach is state-agnostic in that it does not require a priori an assumption about how or if the concept will be implemented by the LSTM. This differs from works on diagnostic classifiers where a representation of the concept is assumed to exist in the network’s latent space. The approach is also time-aware in that paths travel through cells/gates/neurons at different stages of an RNN evaluation. This differs from previous ablationbased techniques, which localize the number by clearing neurons at some position in an RNN for all time steps. Our contributions are as follows: • We introduce influence paths, a causal account of the use of concepts of interest as carried by paths across gates and neurons of an RNN. • We demonstrate, using influence paths, that in a multi-layer LSTM language model, the concept of subject-verb number agreement is concentrated primarily on a single path (the red path in Figure 1), despite a variety of surrounding and intervening contexts. • We show that attractors (intervening nouns of opposite number to the subject) do not diminish the contribution of the primary subjectverb path, but rather contribute their own influence of the opposite direction along the equivalent primary attractor-verb path (the blue path in the figure). This can lead to incorrect number prediction if an attractor’s contribution overcomes the subject’s. • We corroborate and elaborate on existing results localizing subject number to the same two neurons which, in our results, lie on the primary path. We further extend and generalize prior compression/ablation results with a new path-focused compression test which verifies our localization conclusions. Our results point to generalized knowledge as the answer to the central question. The number agreement concept is heavily centralized to the primary path despite the varieties of contexts. Further, the primary path’s contribution is undiminished even amongst interfering contexts; number errors are not attributable to lack of the general number concept but rather to sufficiently influential contexts pushing the result in the opposite direction. 4750 2 Background LSTMs Long short-term memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997) have proven to be effective for modeling sequences, such as language models, and empirically, this architecture has been found to be optimal compared to other second-order RNNs (Greff et al., 2017). LSTMs utilize several types of gates and internal states including forget gates (f), input gates (i), output gates (o), cell states (c), candidate cell state (˜c), and hidden states (h). Each gate is designed to carry out a certain function, or to fix a certain drawback of the vanilla RNN architecture. E.g., the forget gate is supposed to determine how much information from the previous cell state to retain or “forget”, helping to fix the vanishing gradient problem (Hochreiter, 1998). Number Agreement in Language Models The number agreement (NA) task, as described by Linzen et al. (2016), is an evaluation of a language model’s ability to properly match the verb’s grammatical number with its subject. This evaluation is performed on sentences specifically designed for the exercise, with zero or more words between the subject and the main verb, termed the context. The task for sentences with non-empty contexts will be referred to as long-term number agreement. “Human-level” performance for this task can be achieved with a 2-layer LSTM language model (Gulordava et al.), indicating that the language model incorporates grammatical number despite being trained only for the more general word prediction task. Attempts to explain or localize the number concept within the model include (Lakretz et al., 2019), where ablation of neurons is applied to locate specific neurons where such information is stored; and (Giulianelli et al., 2018; Hupkes et al., 2018), where diagnostic classifiers are trained on gate activations to predict the number of the subject to see which gates or timesteps the number concept exhibits itself. These works also look at the special cases involving attractors—intervening nouns with grammatical number opposite to that of the subject (deemed instead helpful nouns if their number agrees with the subject)—such as the word “tree” in Figure 1. Both frameworks provide explanations as to why attractors lower the performance of NA tasks. However, they tend to focus on the activation patterns of gates or neurons without justifying their casual relationships with the concept of grammatical number, and do not explicitly identify the exact temporal trajectory of how the number of the subject influences the number of the verb. Other relevant studies that look inside RNN models to locate specific linguistic concepts include visualization techniques such as (Karpathy et al., 2015), and explanations for supervised tasks involving LSTMs such as sentiment analysis (Murdoch et al., 2018). Attribution Methods Attribution methods quantitatively measure the contribution of each of a function’s individual inputs to its output. Gradientbased attribution methods compute the gradient of a model with respect to its inputs to describe how important each input is towards the output predictions. These methods have been applied to assist in explaining deep neural networks, predominantly in the image domain (Leino et al., 2018; Sundararajan et al., 2017; Bach et al., 2015; Simonyan et al., 2013). Some such methods are also axiomatically justified to provide a causal link between inputs (or intermediate neurons) and the output. As a starting point in this work, we consider Integrated Gradients (IG) (Sundararajan et al., 2017). Given a baseline, x0, the attribution for each input at point, x, is the path integral taken from the baseline to x of the gradients of the model’s output with respect to its inputs. The baseline establishes a neutral point from which to make a counterfactual comparison; the attribution of a feature can be interpreted as the share of the model’s output that is due to that feature deviating from its baseline value. By integrating the gradients along the linear interpolation from the baseline to x, IG ensures that the attribution given to each feature is sensitive to effects exhibited by the gradient at any point between the baseline and instance x. Leino et al. (2018) generalize IG to better focus attribution on concepts other than just model outputs, by use of a quantity of interest (QoI) and a distribution of interest (DoI). Their measure, Distributional Influence, is given by Definition 1. The QoI is a function of the model’s output expressing a particular output behavior of the model to calculate influence for; in IG, this is fixed as the model’s output. The DoI specifies a distribution over which the influence should faithfully summarize the model’s behavior; the influences are found by taking an expected value over DoI. Definition 1 (Distributional Influence). With quantity of interest, q, and distribution of interest, D, 4751 the influence, χ, of the inputs on the quantity of interest is: χ(q, D) = E ⃗x∼D  ∂q ∂x(⃗x)  The directed path integral used by IG can be implemented by setting the DoI to a uniform distribution over the line from the baseline to ⃗x: D = Uniform ⃗x0⃗x  , for baseline, ⃗x0, and then multiplying χ by ⃗x −⃗x0. Conceptually, by multiplying by ⃗x −⃗x0, we are measuring the attribution, i.e., the contribution to the QoI, of ⃗x −⃗x0 by weighting its features by their influence. We use the framework of Leino et al. in this way to define our measure of attribution for NA tasks in Section 3. Distributional Influence can be approximated by sampling according to the DoI. In particular, when using D = Uniform ⃗x0⃗x  as noted above, Definition 1 can be computationally approximated with a sum of n intervals as in IG: χ ≈ n X i=1 ∂q ∂x  i n⃗x +  1 −i n  ⃗x0  Other related works include Fiacco et al. (2019), which employs the concept of neuron paths based on cofiring of neurons instead of influence, also on different NLP tasks from ours. 3 Methods Our method for computing influence paths begins with modeling a relevant concept, such as grammatical number, in the influence framework of Leino et al. (Definition 1) by defining a quantity of interest that corresponds to the grammatical number of the verb, and defining a component of the input embedding that isolates the subject’s grammatical number (Section 3.1). We then decompose the influence measure along the relevant structures of LSTM (gates or neurons) as per standard calculus identities to obtain a definition for influence paths (Section 3.2). 3.1 Measuring Number Agreement For the NA task, we view the initial fragment containing the subject as the input, and the word distribution at the position of its corresponding verb as the output. Formally, each instance in this task is a sequence of d-dimensional word embedding vectors, w def = ⟨⃗wi⟩i, containing the subject and the corresponding verb, potentially with intervening words in between. We assume the subject is at position t and the verb at position t + n. The output score of a word, w, at position i will be written si(w). If w has a grammatical number, we write w+ and w−to designate w with its original number and the equivalent word with the opposite number, respectively. Quantity of Interest We instrument the output score with a QoI measuring the agreement of the output’s grammatical number to that of the subject: Definition 2 (Number Agreement Measure). Given a sentence, w, with verb, w, whose correct form (w.r.t. grammatical number) is w+, the quantity of interest, q, measures the correctness of the grammatical number of the verb: q (w) def = st+n w+ −st+n w− In plain English, q captures the weight that the model assigns to the correct form of w as opposed to the weight it places on the incorrect form. Note that the number agreement concept could have reasonably been measured using a different quantity of interest. E.g., considering the scores of all vocabulary words of the correct number and incorrect number in the positive and negative terms, respectively, is an another alternative. However, based on our preliminary experiments, we found this alternative does not result in meaningful changes to the reported results in the further sections. Distribution of Interest We also define a component of the embedding of the subject that captures its grammatical number, and a distribution over the inputs that allows us to sensitively measure the influence of this concept on our chosen quantity of interest. Let ⃗w0 be the word embedding midway between its numbered variants, i.e., ⃗w++⃗w− 2 . Though this vector will typically not correspond to any English word, we interpret it as a numberneutral version of ⃗w. Various works show that linear arithmetic on word embeddings of this sort preserves meaningful word semantics as demonstrated in analogy parallelograms (Mikolov et al., 2013). Finally, given a sentence, w, let w0 t be the sentence w, except with the word embedding ⃗wt replaced with its neutral form ⃗w0 t . We see that w−w0 t captures the part of the input corresponding to the grammatical number of the subject, ⃗wt. Definition 3 (Grammatical Number Distribution). Given a singular (or plural) noun, wt, in a sentence, w, the distribution density of sentences, Dw, exercising the noun’s singularity (or plurality) linearly 4752 interpolates between the neutral sentence, w0 t , and the given sentence, w: Dw def = Uniform  w0 t w  If ⃗wt is singular, our counterfactual sentences span w with number-neutral ⃗w0 t all the way to its singular form ⃗wt = ⃗w+ t . We thus call this distribution a singularity distribution. Were wt plural instead, we would refer to the distribution as a plurality distribution. Using this distribution of sentences as our DoI thus allows us to measure the influence of w −w0 t (the grammatical number of a noun at position t) on our quantity of interest sensitively (in the sense that Sundararajan et al. define their axiom of sensitivity for IG (Sundararajan et al., 2017)). Subject-Verb Number Agreement Putting things together, we define our attribution measure. Definition 4 (Subject-Verb Number Agreement Attribution). The measure of attribution, α, of a noun’s grammatical number on the subject-verb number agreement is defined in terms of the DoI, Dw, and QoI, q, as in Definitions 3 and 2, respectively. α (w) = (w −w0 t ) χ(q, Dw) Essentially, the attribution measure weights the features of the subject’s grammatical number by their Distributional Influence, χ. Because Dw is a uniform distribution over the line segment between w and w0 t , as with IG, the attribution can be interpreted as each feature’s net contribution to the change in the QoI, q(w) −q(w0 t ), as P i χ(w)i = q(w) −q(w0 t ) (i.e., Definition 4 satisfies the axiom Sundararajan et al. term completeness (Sundararajan et al., 2017)). In Figure 1, for instance, this definition measures the attribution from the plurality of the subject (“boys”), towards the model’s prediction of the correctly numbered verb (“run”) versus the incorrectly numbered verb (“runs”). Later in this paper we will also investigate the attribution of intervening nouns on this same quantity. We expect the input attribution to be positive for all subjects and helpful nouns, and negative for attractors, which can be verified by the P +columns of Table 1 (the details of this experiment are introduced in Section 4). 3.2 Influence Paths Input attribution as defined by IG (Sundararajan et al., 2017) provides a way of explaining a model by highlighting the input dimensions with large attribution towards the output. Distributional Influence (Leino et al., 2018) with a carefully chosen QoI and DoI (Definition 4) further focuses the influence on a concept at hand, grammatical number agreement. Neither, however, demonstrate how these measures are conveyed by the inner workings of a model. In this section we define a decomposition of the influence into paths of a model, thereby assigning attribution not just to inputs, but also to the internal structures of a given model. We first define arbitrary deep learning models as computational graphs, as in Definition 5. We then use this graph abstraction to define a notion of influence for a path through the graph. We posit that any natural path decomposition should satisfy the following conservation property: the sum of the influence of each path from the input to the output should equal the influence of the input on the QoI. We then observe that the chain rule from calculus offers one such natural decomposition, yielding Definition 6. Definition 5 (Model). A model is an acyclic graph with a set of nodes, edges, and activation functions associated with each node. The output of a node, n, on input x is n(x) def = fn (n1(x), · · · , nm(x)) where n1, · · · , nm are n’s predecessors and fn is its activation function. If n does not have predecessors (it is an input), its activation is fn(x). We assume that the domains and ranges of all activation functions are real vectors of arbitrary dimension. We will write n1 →n2 to denote an edge (i.e., n1 is a direct predecessor of n2), and n1 →∗n2 to denote the set of all paths from n1 to n2. The partial derivative of the activation of n2 with respect to the activation of n1 will be written ∂n2 ∂n1 . This view of a computation model is an extension of network decompositions from attribution methods using the natural concept of “layers” or “slices” (Dhamdhere et al., 2018; Leino et al., 2018; Bach et al., 2015). This decomposition can be tailored to the level of granularity we wish to expose. Moreover, in RNN models where no single and consistent “natural layer” can be found due to the variable-length inputs, a more general graph view provides the necessary versatility. Definition 6 (Path Influence). Expanding Definition 4 using the chain rule, the influence of input 4753 node, s, on target node, t, in a model, G, is: χs = E x∼D(x)  ∂t ∂s(x)  = E x∼D(x)   X p∈(s→∗t) Y (n1→n2)∈p ∂n2 ∂n1 (x)   = X p∈(s→∗t) E x∼D(x)   Y (n1→n2)∈p ∂n2 ∂n1 (x)   | {z } χp s Note that the same LSTM can be modeled with different graphs to achieve a desired level of abstraction. We will use two particular levels of granularity: a coarse gate-level abstraction where nodes are LSTM gates, and a fine neuron-level abstraction where nodes are the vector elements of those gates. Though the choice of abstraction granularity has no effect on the represented model semantics, it has implications on graph paths and the scale of their individual contributions in a model. Gate-level and Neuron-level Paths We define the set of gate-level nodes to include:  fl t, il t, ol t, cl t, ˜cl t, hl t : t < T, l < L , where T is the number of time steps (words) and L is number of LSTM layers. The node set also includes an attribution-specific input node (w −w0 t ) and an output node (the QoI). An example of this is illustrated in Figure 2. We exclude intermediate calculations (the solid nodes of Figure 2, such as ft ⊙ct−1) as their inclusion does not change the set of paths in a graph. We can also break down each vector node into scalar components and further decompose the gate-level model into a neuron-level one: {fl ti, il ti, ol ti, cl ti, ˜cl ti, hl ti : t < T, i < H, l < L}, where H is the size of each gate vector. This decomposition results in an exponentially large number of paths. However, since many functions between gates in an LSTM are elementwise operations, neuron-level connections between many neighboring gates are sparse. Path Refinement While the neuron-level path decomposition can theoretically be performed on the whole network, in practice we choose to specify a gate-level path first, then further decompose that path into neuron-level paths. We also collapse selected vector nodes, allowing us to further localize a concept on a neuron level while avoiding an explosion in the number of paths. The effect of this pipeline will be empirically justified in Section 4. c h ˜c i x o f c h Subject Intervening Noun ˜c i x o f c h ˜c i x o f c h c h ˜c i o f c h ˜c i o f c h ˜c i o f c h QoI l0 l1 T t t + n −1 t + n t −1 Figure 2: Influence path diagram in a NA task for the 2-layer LSTM model. The red path shows the path with the greatest attribution (the primary path) from the subject; The blue path shows the primary path from the intervening noun. 4 Evaluation In this section we apply influence path decomposition to the NA task. We investigate major gatelevel paths and their influence concentrations in Section 4.2. We further show the relations between these paths and the paths carrying grammatical number from intervening nouns (i.e. attractors & helpful nouns) in Section 4.3. In both we also investigate high-attribution neurons along primary paths allowing us to compare our results to prior work. 4.1 Dataset and Model We study the exact combination of language model and NA datasets used in the closely related prior work of Lakretz et al. (2019). The pre-trained language model of Gulordava et al. and Lakretz et al. is a 2-layer LSTM trained from Wikipedia articles. The number agreement datasets of Lakretz et al. are several synthetically generated datasets varying in syntactic structures and in the number of nouns between the subject and verb. For example, nounPP refers to sentences containing a noun subject followed by a prepositional phrase such as in Figure 1. Each NA task has subject number (and intervening noun number if present) realizations along singular (S) and plural (P) forms. In listings we denote subject number (S or P) first and additional noun (if any) number second. Details including the accuracy of the model on the NA tasks are summarized by Lakretz et al. (2019). Our evaluation replicates part of Table 2 in 4754 said work. 4.2 Decomposing Number Agreement We begin with the attribution of subject number on its corresponding verb, as decomposed per Definition 6. Among all NA tasks, the gate-level path carrying the most attribution is one following the same pattern with differences only in the size of contexts. With indices t and t + n referring to the subject and verb respectively, this path, which we term the primary path of subject-verb number agreement, is as follows: xt(DoI) · ˜c0 · c0 · h0 · ˜c1 · c1∗· h1 · QoI The primary path is represented by the red path in Figure 2. The influence first passes through the temporary cell state ˜c0, the only non-sigmoid cell states capable of storing more information than sigmoid gates, since i, f, o ∈(0, 1) while the tanh gate ˜c ∈(−1, 1). Then the path passes through c0, h0, and similarly to c1 through ˜c1 , jumping from the first to the second layer. The path then stays at c1, through the direct connections between cell states of neighbouring time steps, as though it is “stored” there without any interference from subsequent words. As a result, this path is intuitively the most efficient and simplistic way for the model to encode and store a “number bit.” The extent to which this path can be viewed as primary is measured by two metrics. The results across a subset of syntactic structures and number conditions mirroring those in Lakretz et al. (2019) are shown in Table 1. We include 3 representative variations of the task. The metrics are: 1. t-value: probability that a given path has greater attribution than a uniformly sampled path on a uniformly sampled sentence. 2. Positive/Negative Share (±Share): expected (over sentences) fraction of total positive (or negative) attribution assigned to the given positive (or negative) path. Per Table 1 (From Subject, Primary Path), we make our first main observation: Observation 1. The same one primary path consistently carries the largest amount positive attribution across all contexts as compared to all other paths. Even in the case of its smallest share (nounPPAdv), the 3% share is large when taking into account more than 40,000 paths in total. Sentences with singular subjects (top part of Table 1) have a slightly stronger concentration of attribution in the primary path than plural subjects (bottom part of Table 1), possibly due to English plural (infinitive) verb forms occurring more frequently than singular forms, thus less concentration of attribution is needed due to the “default signal” in place. Primary Neurons We further decompose the primary path into influence passing through each neuron. Since only connections between second layer cell states are sparse, we only decompose the segment of the primary path from c1 t to c1 t+n, resulting in a total of 650 (the number of hidden units) neuron-level paths. (We leave the non-sparse decompositions for future work). The path for neuron i, for example, is represented as: xt(DoI) · ˜c0 · c0 · h0 · ˜c1 · c1 i ∗· h1 · QoI To compare the attribution of an individual neuron with all other neurons, we employ a similar aforementioned t-value, where each neuron-level path is compared against other neuron-level paths. The results of the neuron-level analysis are shown in Table 1 (From Subject, Primary Neuron). Out of the 650 neuron-level paths in the gate-level primary path, we discover two neurons with consistently the most attribution (neurons 125 and 337 of the second layer). This indicates the number concept is concentrated in only two neurons. Comparison with Lakretz et al. (2019) Uncoincidentally, both neurons match the units found through ablation by Lakretz et al., who use the same model and dataset (neurons 988 and 776 are neurons 125 and 337 of the second layer). This accordance to some extent verifies that the neurons found through influence paths are functionally important. However, the t-values shown in Table 1 show that both neuron 125 and 337 are influential regardless of the subject number, whereas Lakretz et al. assign a subject number for each of these two neurons due to their disparate effect in lowering accuracy in ablation experiments. One possible reason is that the ablation mechanism used in (Lakretz et al., 2019) assumes that a “neutral number state” can be represented by zero-activations for all gates, while in reality the network may encode the neutral state differently for different gates. Another major distinction of our analysis from Lakretz et al. (2019) regards simple cases with no 4755 Task C From Subject From Intervening Noun P+ |P| Primary Path Primary Neuron P+ |P| Primary Path Primary Neuron +Share t t125 t337 ± Share t t125 t337 Simple S 1.0 16 0.47 1.0 0.99 1.0 nounPP SS 1.0 6946 0.1 1.0 1.0 1.0 0.82 16 0.31(+) 0.9 0.78 0.98 nounPP SP 1.0 6946 0.1 1.0 1.0 1.0 0.23 16 0.24(-) 0.23 0.06 0.15 nounPPAdv SS 1.0 41561 0.07 1.0 1.0 1.0 0.92 152 0.09(+) 0.96 0.85 1.0 nounPPAdv SP 1.0 41561 0.07 1.0 1.0 1.0 0.32 152 0.09(-) 0.14 0.13 0.01 Simple P 1.0 16 0.33 0.93 0.97 0.99 nounPP PS 1.0 6946 0.05 0.91 0.99 1.0 0.06 16 0.28(-) 0.21 0.22 0.12 nounPP PP 1.0 6946 0.05 0.92 0.99 1.0 0.95 16 0.31(+) 0.9 0.97 0.79 nounPPAdv PS 1.0 41561 0.03 0.93 0.99 1.0 0.32 152 0.04(-) 0.28 0.41 0.16 nounPPAdv PP 1.0 41561 0.03 0.92 0.99 1.0 0.83 152 0.07(+) 0.92 0.99 0.84 Table 1: Statistics for attribution of primary paths and neurons from the subject/intervening noun: P+ is the percentage of sentences with positive input attribution. Task and C columns refer to sentence structures in Lakretz et al. (2019). |P| is the total number of paths; t and ±Share are t-values and positive/negative share, respectively. For calculating t125 and t337 of primary neurons (125 and 337), we exclude these two neurons to avoid comparing them with each other. word between subjects and verbs. Unlike Lakretz et al., who claim that the two identified neurons are “long-term neurons”, we discover that these two neurons are also the only neurons important for short-term number agreement. This localization cannot be achieved by diagnostic classifiers used by Lakretz et al., indicating that the signal can be better uncovered using influence-based paths rather than association-based methods such as ablation. 4.3 Decomposing from Intervening Nouns Next we focus on NA tasks with intervening nouns and make the following observation: Observation 2. The primary subject-verb path still accounts for the largest positive attribution in contexts with either attractors or helpful nouns. A slightly worse NA task performance (Lakretz et al., 2019) in cases of attractors (SP, PS) indicates that they interfere with prediction of the correct verb. In contrast, we also observe that helpful nouns (SS, PP) contribute positively to the correct verb number (although they should not from a grammar perspective). Primary Path from the Intervening Noun We adapt our number agreement concept (Definition 2) by focusing the DoI on the intervening noun, thereby allowing us to decompose its influence on the verb number not grammatically associated with it. In Table 1 (From Intervening Noun) we discover a similar primary path from the intervening noun: Observation 3. Attribution towards verb number from intervening nouns follows the same primary path as the subject but is of lower magnitude and Task C Compression Scheme Csi Cs Ci Csi Cs Ci C nounPP SS .66 .77 .95 .93 .71 .77 .95 nounPP SP .64 .36 .94 .64 .75 .40 .74 nounPP PS .34 .24 .92 .40 .69 .18 .80 nounPP PP .39 .66 .91 .76 .68 .58 .97 nounPP mean .51 .51 .93 .68 .70 .48 .87 nounPPAdv SS .70 .86 .98 .73 .56 .43 1.0 nounPPAdv SP .70 .43 .99 .50 .60 .27 .88 nounPPAdv PS .38 .22 .98 .76 .79 .56 .96 nounPPAdv PP .39 .67 .98 .84 .83 .76 1.0 nounPPAdv mean .54 .55 .99 .71 .69 .50 .96 Table 2: Model compression accuracy under various compression schemes. C is the uncompressed model. reflects either positive or negative attribution in cases of helpful nouns or attractors, respectively. This disparity in magnitude is expected since the language model possibly identifies the subject as the head noun through the prepositions such as “behind” in Figure 1, while still needing to track the number of the intervening noun in possible clausal structures. Such need is comparably weaker compared to tracking numbers of subjects, possibly because in English, intervening clauses are rarer than intervening non-clauses. Similar arguments can be made for neuron-level paths. 4.4 Model Compression Though the primary paths are the highest contributors to NA tasks, it is possible that collections of associated non-primary paths account for more of the verb number concept. We gauge the extent to which the primary paths alone are responsible for the concept with compression/ablation exper4756 iments. We show that the computations relevant to a specific path alone are sufficient in maintaining performance for the NA task. We compress the model by specifying node sets to preserve, and intervene on the activations of all other nodes by setting their activations to constant expected values (average over all samples). We choose the expected values instead of full ablation (setting them to zero), as ablation would nullify the function of Sigmoid gates. For example, to compress the model down to the red path in Figure 2, we only calculate the activation for gates ˜c0 t and ˜c1 t for each sample, while setting the activation of all other ˜c, f, o, i to their average values over all samples. In Table 2, we list variations of the compression schemes based on the following preserved node sets: C def = n fl t, il t, ol t, ˜cl t : tsub < t < tverb, l ∈{0, 1} o Cs def =  ˜c0 tsub, ˜c1 tsub Ci def =  ˜c0 tint, ˜c1 tint Csi def = Cs ∪Ci For example, column Csi in Table 2 shows the accuracy when the compressed model only retains the primary path from both the subject and the intervening noun while the computations of all other paths are set to their expected values; while in Csi, all paths but the paths in Csi are kept. We observe that the best compressed model is Ci, where the primary path from the intervening noun is left out; it performs even better than the original model; the increase comes from the cases with attractors (PS, SP). This indicates that eliminating the primary path from the attractor improves the model. The next best models apart from C are Cs and Csi, where primary paths are kept. Compressed models without the primary subject-verb path (Csi, Cs, Ci) have performances close to random guessing. Observation 4. Accuracy under path-based model compression tests corroborate that primary paths account for most of the subject number agreement concept of the LSTM. By comparing the SP and PS rows of Csi, Cs, Cs, and Ci, we observe the effect of attractors in misguiding the model into giving wrong predictions. Similarly, we see that helpful nouns (SS, PP) help guide the models to make more accurate predictions, though this is not grammatically justified. 5 Conclusions The combination of finely-tuned attribution and gradient decomposition lets us investigate the handling of the grammatical number agreement concept attributed to paths across LSTM components. The concentration of attribution to a primary path and two primary cell state neurons and its persistence in a variety of short-term and long-term contexts, even with confounding attractors, demonstrates that the concept’s handling is, to a large degree, general and localized. Though the heuristic decisioning aspect of an LSTM is present in the large quantities of paths with non-zero influence, their overall contribution to the concept is insignificant as compared to the primary path. Node-based compression results further corroborate these conclusions. We note, however, that our results are based on datasets exercising the agreement concept in contexts of a limited size. We speculate that the primary path’s attribution diminishes with the length of the context, which would suggest that at some context size, the handling of number will devolve to be mostly heuristic-like with no significant primary paths. Though our present datasets do not pose computational problems, the number of paths, at both the neuron and the gate level, is exponential with respect to context size. Investigating longer contexts, the diminishing dominance of the primary path, and the requisite algorithmic scalability requirements are elements of our ongoing work. We also note that our method can be expanded to explore number agreement in more complicated sentences with clausal structures, or other syntactic/semantic signals such as coreference or gender agreement. Acknowledgement This work was developed with the support of NSF grant CNS-1704845 as well as by DARPA and the Air Force Research Laboratory under agreement number FA8750-152-0277. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation thereon. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of DARPA, the Air Force Research Laboratory, the National Science Foundation, or the U.S. Government. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this work. 4757 References Sebastian Bach, Alexander Binder, Gr´egoire Montavon, Frederick Klauschen, Klaus-Robert M¨uller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140. Anupam Datta, Shayak Sen, and Yair Zick. 2016. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In 2016 IEEE symposium on security and privacy (SP), pages 598–617. IEEE. Kedar Dhamdhere, Mukund Sundararajan, and Qiqi Yan. 2018. How important is a neuron? arXiv preprint arXiv:1805.12233. James Fiacco, Samridhi Choudhary, and Carolyn Rose. 2019. Deep neural model inspection and comparison via functional neuron pathways. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5754–5764. F. A. Gers and E. Schmidhuber. 2001. Lstm recurrent networks learn simple context-free and contextsensitive languages. Trans. Neur. Netw., 12(6):1333– 1340. Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information. Klaus Greff, Rupesh K Srivastava, Jan Koutn´ık, Bas R Steunebrink, and J¨urgen Schmidhuber. 2017. Lstm: A search space odyssey. IEEE transactions on neural networks and learning systems, 28(10):2222– 2232. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. Colorless green recurrent networks dream hierarchically. Sepp Hochreiter. 1998. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107–116. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and’diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907–926. Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078. Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Baroni. 2019. The emergence of number and syntax units in lstm language models. arXiv preprint arXiv:1903.07435. Klas Leino, Shayak Sen, Anupam Datta, Matt Fredrikson, and Linyi Li. 2018. Influence-directed explanations for deep convolutional networks. In 2018 IEEE International Test Conference (ITC), pages 1– 8. IEEE. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntaxsensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521–535. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. W James Murdoch, Peter J Liu, and Bin Yu. 2018. Beyond word importance: Contextual decomposition to extract interactions from lstms. arXiv preprint arXiv:1801.05453. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3319–3328. JMLR. org.
2020
430
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4758–4781 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4758 Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings Rishi Bommasani Cornell University [email protected] Kelly Davis Mozilla Corporation [email protected] Claire Cardie Cornell University [email protected] Abstract Contextualized representations (e.g. ELMo, BERT) have become the default pretrained representations for downstream NLP applications. In some settings, this transition has rendered their static embedding predecessors (e.g. Word2Vec, GloVe) obsolete. As a side-effect, we observe that older interpretability methods for static embeddings — while more mature than those available for their dynamic counterparts — are underutilized in studying newer contextualized representations. Consequently, we introduce simple and fully general methods for converting from contextualized representations to static lookup-table embeddings which we apply to 5 popular pretrained models and 9 sets of pretrained weights. Our analysis of the resulting static embeddings notably reveals that pooling over many contexts significantly improves representational quality under intrinsic evaluation. Complementary to analyzing representational quality, we consider social biases encoded in pretrained representations with respect to gender, race/ethnicity, and religion and find that bias is encoded disparately across pretrained models and internal layers even for models that share the same training data. Concerningly, we find dramatic inconsistencies between social bias estimators for word embeddings. 1 Introduction Word embeddings (Bengio et al., 2003; Collobert and Weston, 2008; Collobert et al., 2011) have been a hallmark of modern natural language processing (NLP) for many years. Embedding methods have been broadly applied and have experienced parallel and complementary innovations alongside neural network methods for NLP. Advances in embedding quality in part have come from integrating additional information such as syntax (Levy and Goldberg, 2014a; Li et al., 2017), morphology (Cotterell and Sch¨utze, 2015), subwords (Bojanowski et al., 2017), subcharacters (Stratos, 2017; Yu et al., 2017) and, most recently, context (Peters et al., 2018; Devlin et al., 2019). Due to their tremendous representational power, pretrained contextualized representations, in particular, have seen widespread adoption across myriad subareas of NLP. The recent dominance of pretrained contextualized representations such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) has served as the impetus for exciting and diverse interpretability research: Liu et al. (2019a); Tenney et al. (2019a) study what is learned across the layers of these models, Tenney et al. (2019b); Ethayarajh (2019) consider what is learned from context, Clark et al. (2019); Michel et al. (2019) look at specific attention heads, Hewitt and Manning (2019); Ettinger (2020) address linguistic understanding such as syntax and negation, and Wallace et al. (2019); Tan and Celis (2019) address ethical concerns such as security (adversarial robustness) and social bias. In fact, the neologism BERTology was coined specifically to describe this flurry of interpretability research.1 While these works have provided nuanced finegrained analyses by creating new interpretability schema/techniques, we instead take an alternate approach of trying to re-purpose methods developed for analyzing static word embeddings. In order to employ static embedding interpretability methods to contextualized representations, we begin by proposing a simple strategy for converting from contextualized representations to static embeddings. Crucially, our method is fully general and assumes only that the contextualized model maps word sequences to vector sequences. Given this generality, we apply our method to 9 popular pretrained contextualized representations. The resulting static embeddings serve as proxies for the original contextualized model. 1We direct interested readers to a more complete survey of this work from Rogers et al. (2020). 4759 We initially examine the representational quality of these embeddings under intrinsic evaluation. Our evaluation produces several insights regarding layer-wise lexical semantic understanding and representational variation in contextualized representations. Importantly, our analyses suggest constructive improvements to potentially improve downstream practices in using contextualized models. Simultaneously, we find that our static embeddings substantially outperform Word2Vec and GloVe and therefore suggests our method serves the dual purpose of being a lightweight mechanism for generating static embeddings that track with advances in contextualized representations. Since static embeddings have significant advantages with respect to speed, computational resources, and ease of use, these results have important implications for resource-constrained settings (Shen et al., 2019), environmental concerns (Strubell et al., 2019), and the broader accessibility of NLP technologies.2 Alongside more developed methods for embedding analysis, the static embedding setting is also equipped with a richer body of work regarding social bias. In this sense, we view understanding the encoded social bias in representations as a societally critical special-case of interpretability research. We employ methods for identifying and quantifying gender, racial/ethnic, and religious bias (Bolukbasi et al., 2016; Garg et al., 2018; Manzini et al., 2019) to our static embeddings. These experiments not only shed light on the properties of our static embeddings for downstream use but can also serve as a proxy for understanding latent biases in the original pretrained contextual representations. We find that biases in different models and across different layers are quite disparate; this has important consequences on model and layer selection for downstream use. Further, for two sets of pretrained weights learned on the same training data, we find that bias patterns still remain fairly distinct. Most surprisingly, our large-scale evaluation makes clear that existing bias estimators are dramatically inconsistent with each other. 2 Methods In order to use a contextualized model like BERT to compute a single context-agnostic representation for a given word w, we define two operations. 2A humanist’s outlook on the (in)accessibility of BERT: https://tedunderwood.com/2019/07/15/ do-humanists-need-bert/ The first is subword pooling: the application of a pooling mechanism over the k subword representations generated for w in context c in order to compute a single representation for w in c, i.e. {w1 c, . . . , wk c} 7→wc. Beyond this, we define context combination to be the mapping from representations wc1, . . . , wcn of w in different contexts c1, . . . , cn to a single static embedding w that is agnostic of context. Subword Pooling. The tokenization procedure for BERT can be decomposed into two steps: performing a simple word-level tokenization and then potentially deconstructing a word into multiple subwords, yielding w1, . . . , wk such that cat(w1, . . . , wk) = w where cat(·) indicates concatenation. Then, every layer of the model computes vectors w1 c, . . . , wk c . Given these vectors, we consider four pooling mechanisms to compute wc: wc = f(w1 c, . . . , wk c) f ∈{min, max, mean, last} min(·), max(·) are element-wise min/max pooling, mean(·) is the arithmetic mean and last(·) indicates selecting the last vector, wk c. Context Combination. Next, we describe two approaches for specifying contexts c1, . . . , cn and combining the associated representations wc1, . . . , wcn. • Decontextualized: For a word w, we use a single context c1 = w. That is, we feed the single word w into the pretrained model and use the outputted vector as the representation of w (applying subword pooling if the word is split into multiple subwords). • Aggregated: Since the Decontextualized strategy presents an unnatural input to the pretrained encoder, which likely never encountered w in isolation, we instead aggregate representations of w across multiple contexts. In particular, we sample n sentences from a text corpus D (see §A.2) each of which contains the word w, and compute the vectors wc1, . . . , wcn. Then, we apply a pooling strategy to yield a single representation that aggregates representations across contexts: w = g(wc1, . . . , wcn); g ∈{min, max, mean} 3 Setup We begin by verifying that the resulting static embeddings that we derive retain their representational 4760 strength, to some extent. We take this step to ensure that properties we observe of the static embeddings can be attributed to, and are consistent with, the original contextualized representations. Inspired by concerns with probing methods/diagnostic classifiers (Liu et al., 2019a; Hewitt and Liang, 2019) regarding whether learning can be attributed to the classifier and not the underlying representation, we employ an exceptionally simple parameter-free method for converting from contextualized to static representations to ensure that any properties observed in the latter are not introduced via this process. When evaluating static embedding performance, we consider Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) embeddings as baselines since they have been the most prominent pretrained static embeddings for several years. Similarly, we begin with BERT as the contextualized model as it is currently the most prominent in downstream use among the growing number of alternatives. We provide identical analyses for 4 other contextualized model architectures (GPT-2 (Radford et al., 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019b), DistilBERT (Sanh et al., 2019)) and, in total, 9 sets of pretrained weights. All models, weights, and naming conventions used are enumerated in Appendix C and Table 9. Additional representation quality results appear in Tables 4–7 and Figures 4–10. We primarily report results for bert-base-uncased; further results for bert-large-uncased appear in Figure 3. 4 Representation Quality 4.1 Evaluation Details To assess the representational quality of our static embeddings, we evaluate on several word similarity and word relatedness datasets.3 We consider 4 such datasets: RG65 (Rubenstein and Goodenough, 1965), WS353 (Agirre et al., 2009), SIMLEX999 (Hill et al., 2015) and SIMVERB3500 (Gerz et al., 2016) (see §A.4 for more details). Taken together, these datasets contain 4917 examples and specify a vocabulary V of 2005 unique words. Each example is a pair of words (w1, w2) with a gold-standard annotation (provided by one or more humans) of the semantic similarity or relatedness between w1 and w2. A word embedding is evaluated by the relative correctness of its ranking 3Concerns with this decision are discussed in §A.3. Model N RG65 WS353 SIMLEX999 SIMVERB3500 Word2Vec 0.6787 0.6838 0.4420 0.3636 GloVe 0.6873 0.6073 0.3705 0.2271 BERT-12 (1) 500K 0.7206 0.7038 0.5019 0.3550 BERT-24 (1) 500K 0.7367 0.7074 0.5114 0.3687 BERT-24 (6) 500K 0.7494 0.7282 0.5116 0.4062 BERT-12 10K 0.5167 (1) 0.6833 (1) 0.4573 (1) 0.3043 (1) BERT-12 100K 0.6980 (1) 0.7023 (1) 0.5007 (3) 0.3494 (3) BERT-12 500K 0.7262 (2) 0.7038 (1) 0.5115 (3) 0.3853 (4) BERT-12 1M 0.7242 (1) 0.7048 (1) 0.5134 (3) 0.3948 (4) BERT-24 100K 0.7749 (2) 0.7179 (6) 0.5044 (1) 0.3686 (9) BERT-24 500K 0.7643 (2) 0.7282 (6) 0.5116 (6) 0.4146 (10) BERT-24 1M 0.7768 (2) 0.7301 (6) 0.5244 (15) 0.4280 (10) Table 1: Performance of distilled BERT embeddings. f and g are set to mean and (#) indicates the layer the embeddings are distilled from. Bold indicates best performance for a given dataset of embeddings depicted. Model RG65 WS353 SIMLEX999 SIMVERB3500 BERT-12 0.6980 (1) 0.7023 (1) 0.5007 (3) 0.3494 (3) BERT-24 0.7749 (2) 0.7179 (6) 0.5044 (1) 0.3686 (9) GPT2-12 0.5156 (1) 0.6396 (0) 0.4547 (2) 0.3128 (6) GPT2-24 0.5328 (1) 0.6830 (0) 0.4505 (3) 0.3056 (0) RoBERTa-12 0.6597 (0) 0.6915 (0) 0.5098 (0) 0.4206 (0) RoBERTa-24 0.7087 (7) 0.6563 (6) 0.4959 (0) 0.3802 (0) XLNet-12 0.6239 (1) 0.6629 (0) 0.5185 (1) 0.4044 (3) XLNet-24 0.6522 (3) 0.7021 (3) 0.5503 (6) 0.4545 (3) DistilBERT-6 0.7245 (1) 0.7164 (1) 0.5077 (0) 0.3207 (1) Table 2: Performance of static embeddings from different pretrained models. f and g are set to mean, N = 100K, and (#) indicates the layer the embeddings are distilled from. Bold indicates best performance for a given dataset of embeddings depicted. of the similarity/relatedness of all examples in a dataset with respect to the gold-standard ranking using the Spearman ρ coefficient. Embedding predictions are computed using cosine similarity. 4.2 Results Pooling Strategy. In Figure 1, we show the performance on all 4 datasets for the resulting static embeddings. For embeddings computed using the Aggregated strategy, representations are aggregated over N = 100K sentences where N is the number of total contexts for all words (§A.5). Across all four datasets, we see that g = mean is the best-performing pooling mechanism within the Aggregated strategy and also outperforms the Decontexualized strategy by a substantial margin. Fixing g = mean, we further observe that mean pooling at the subword level also performs best (the dark green dashed line in all plots). We further find that this trend consistently holds across pretrained models. Number of Contexts. In Table 1, we see that performance for both BERT-12 and BERT-24 steadily increases across all datasets with increas4761 Figure 1: Layer-wise performance of distilled BERT-12 embeddings for all pairs (f, g) with N = 100K. ing N; this trend holds for the other 7 pretrained models. In particular, in the largest setting with N = 1M, the BERT-24 embeddings distilled from the best-performing layer for each dataset drastically outperform both Word2Vec and GloVe. However, this can be seen as an unfair comparison given that we are selecting specific layers for specific datasets. As the middle band of Table 1 shows, we can fix a particular layer for all datasets and still outperform both Word2Vec and GloVe on all datasets. Relationship between N and model layer. In Figure 1, there is a clear preference towards the first quarter of the model’s layers (layers 0-3) with a sharp drop-off in performance immediately thereafter. A similar preference for the first quarter of the model is observed in models with a different number of layers (Figure 3, Figure 10). Given that our intrinsic evaluation is centered on lexical semantic understanding, this appears to be largely consistent with the findings of Liu et al. (2019a); Tenney et al. (2019a) regarding where lexical semantic information is best encoded in pretrained contextualized models. However, as we pool over a larger number of contexts, Table 1 reveals an interesting relationship between N and the best-performing layer. The best-performing layer monotonically (with a single exception) shifts to be later and later within the pretrained model. Since the later layers did not perform better for smaller values of N, these layers demonstrate greater variance with respect to the layer-wise distributional mean and reducing this variance improves performance.4 Since later layers of the 4Shi et al. (2019) concurrently propose a different apmodel are generally preferred by downstream practitioners (Zhang et al., 2020), our findings suggest that downstream performance could be further improved by considering variance reduction as we suggest; Ethayarajh (2019) also provides concrete evidence of the tremendous variance in the later layers of deep pretrained contextualized models. Cross-Model Results. Remarkably, we find that most tendencies we observe generalize well to all other pretrained models we study (specifically the optimality of f = mean, g = mean, the improved performance for larger N, and the layer-wise tendencies with respect to N). This is particularly noteworthy given that several works have found that different contextualized models pattern substantially differently (Liu et al., 2019a; Ethayarajh, 2019). In Table 2, we summarize the performance of all models we studied. All of the models considered were introduced during a similar time period and have comparable properties in terms of downstream performance. In spite of this, we observe that their static analogues perform radically differently. For example, several do not reliably outperform Word2Vec and GloVe despite outperforming Word2vec and GloVe reliably in downstream evaluation. Future work may consider whether the reduction to static embeddings affects different models differently and whether this is reflective of the quality of context-agnostic lexical semantics from other types of linguistic knowledge (e.g. context modelling, syntactic understanding, and semantic composition). In general, these results proach with similar motivations. 4762 provide further evidence to suggest that linguistic understanding captured by different pretrained weights may be substantially different, even for models with near-identical Transformer (Vaswani et al., 2017) architectures. Somewhat surprisingly, in Table 2, DistilBert6 outperforms BERT-12 on three out of the four datasets despite being distilled (Ba and Caruana, 2014; Hinton et al., 2015) from BERT-12. Analogously, RoBERTa, which was introduced as a direct improvement over BERT, does not reliably outperform the corresponding BERT models. 5 Bias Bias is a complex and highly relevant topic in developing representations and models in NLP and ML. In this context, we study the social bias encoded within our static word representations as a proxy for understanding biases of the source contextualized representations. As Kate Crawford argued for in her NIPS 2017 keynote, while studying individual models is important given that specific models may propagate, accentuate, or diminish biases in different ways, studying the representations that serve as the starting point and that are shared across models (which are used for possibly different tasks) allows for more generalizable understanding of bias (Barocas et al., 2017). In this work, we simultaneously consider multiple axes of social bias (i.e. gender, race, and religion) and multiple proposed methods for computationally quantifying these biases. We do so precisely because we find that existing NLP literature has primarily prioritized gender (which may be a technically easier setting and is starkly incomplete in terms of social biases of interest). Further, as we will show, different computational specifications of bias that evaluate the same underlying social phenomena yield markedly different results. As a direct consequence, we strongly caution that the results must be taken with respect to the definitions of bias being applied. Further, we note that an embedding which receives low bias scores cannot be assumed to be (nearly) unbiased. Instead, it satisfies the significantly weaker condition that under existing definitions the embedding exhibits low bias and perhaps additional (more nuanced) definitions are needed. 5.1 Definitions Bolukbasi et al. (2016) introduced a measure of gender bias which assumes access to a set P = {(m1, f1), . . . , (mn, fn)} of (male, female) word pairs where mi and fi only differ in gender (e.g. ‘men’ and ‘women’). They compute a gender direction g: g = PCA [m1 −f1, . . . , mn −fn]  [0] where [0] indicates the first principal component. Then, given a set N of target words that we are interested in evaluating the bias with respect to, Bolukbasi et al. (2016) specifies the bias as: bias BOLUKBASI(N) = mean w∈N | cos (w, g) | This definition is only inherently applicable to binary bias settings, i.e. where there are exactly two protected classes. Multi-class generalizations are difficult to realize since constructing P requires aligned k-tuples whose entries only differ in the underlying social attribute and this becomes increasingly challenging for increasing k. Further, this definition assumes the first principal component explains a large fraction of the observed variance. Garg et al. (2018) introduced a different definition that is not restricted to gender and assumes access to sets A1 = {m1, · · · , mn} and A2 = {f1, · · · , fn′} of representative words for each of the two protected classes. For each class, µi = mean w∈Ai w is computed. Garg et al. (2018) computes the bias in two ways: bias GARG-EUC(N) = mean w∈N ∥w −µ1∥2 −∥w −µ2∥2 bias GARG-COS(N) = mean w∈N cos(w, µ1) −cos(w, µ2) Compared to the definition of Bolukbasi et al. (2016), these definitions may be more general as constructing P is strictly more difficult than constructing A1, A2 (as P can always be split into two such sets but the reverse is not generally true) and Garg et al. (2018)’s definition does not rely on the first principal component explaining a large fraction of the variance. However, unlike the first definition, Garg et al. (2018) computes the bias in favor of/against a specific class (meaning if N = {‘programmer’, ‘homemaker’} and ‘programmer’ was equally male-biased as ‘homemaker’ was female-biased, then under the definition of Garg et al. (2018), there would be no bias in aggregate). To permit comparison, we insert absolute values around each term in the mean over N. Manzini et al. (2019) introduced a definition for quantifying multi-class bias which assumes access 4763 to sets of representative words A1, . . . , Ak5: bias MANZINI(N) = mean w∈N mean i∈{1,...,k} mean a∈Ai cos(w, a) 5.2 Results Inspired by the results of Nissim et al. (2020), in this work we transparently report social bias in existing static embeddings as well as the embeddings we produce. In particular, we exhaustively report the measured bias for all 3542 valid (pretrained model, layer, social attribute, bias definition, target word list) 5-tuples — all possible combinations of static embeddings and bias measures considered. The results for models beyond BERT appear in Figures 11–18. We specifically report results for binary gender (male, female), two-class religion (Christianity, Islam) and three-class race (white, Hispanic, and Asian), directly following Garg et al. (2018). We study bias with respect to target word lists of professions Nprof and adjectives Nadj. These results are by no means intended to be comprehensive with regards to the breadth of bias socially and only address a restricted subset of social biases which notably does not include intersectional biases. The types of biases being evaluated for are taken with respect to specific word lists (which are sometimes subjective albeit being peer-reviewed) that serve as exemplars and definitions of bias are grounded in the norms of the United States. All word lists are provided in Appendix B and are sourced in §A.6. Layer-wise Bias Trends. In Figure 2, we report layer-wise bias across all (attribute, definition) pairs. We clearly observe that for every social attribute, there is a great deal of variation across the layers in the quantified amount of bias for a fixed bias estimator. Further, while we are not surprised that different bias measures for the same social attribute and the same layer assign different absolute scores, we observe that they also do not agree in relative judgments. For gender, we observe that the bias estimated by the definition of Manzini et al. (2019) steadily increases before peaking at the penultimate layer and slightly decreasing thereafter. In contrast, under bias GARG-EUC 5We slightly modify the definition of Manzini et al. (2019) by (a) using cosine similarity where they use cosine distance and (b) inserting absolute values around each term in the mean over N. We make these changes to introduce consistency with the other definitions and to permit comparison. we see a distribution with two peaks corresponding to layers at the start or end of the pretrained model with less bias within the intermediary layers. For estimating the same quantity, bias GARG-COS is mostly uniform across the layers. Similarly, in looking at the religious bias, we see similar inconsistencies with the bias increasing monotonically from layers 2 through 8 under bias MANZINI, decreasing monotonically under bias GARG-EUC, and remaining roughly constant under bias GARG-COS. In general, while the choice of N (and the choice of Ai for gender) does affect the absolute bias estimates, the relative trends across layers are fairly robust to these choices for a specific definition. Consequences. Taken together, our analysis suggests a concerning state of affairs regarding bias quantification measures for (static) word embeddings. In particular, while estimates are seemingly stable to some types of choices regarding word lists, bias scores for a particular word embedding are tightly related to the definition being used and existing bias measures are markedly inconsistent with each other. We find this has important consequences beyond understanding the social biases in our representations. Concretely, we argue that without certainty regarding the extent to which embeddings are biased, it is impossible to properly interpret the meaningfulness of debiasing procedures (Bolukbasi et al., 2016; Zhao et al., 2018a,b; Sun et al., 2019) as we cannot reliably estimate the bias in the embeddings both before and after the procedure. This is further compounded with the existing evidence that current intrinsic measures of social bias may not handle geometric behavior such as clustering (Gonen and Goldberg, 2019). Cross-Model Bias Trends. In light of the above, next we compare bias estimates across different pretrained models in Table 3. Given the conflicting scores assigned by different definitions, we retain all definitions along with all social attributes in this comparison. However, we only consider target words given by Nprof due to the aforementioned stability (and for visual clarity) with results for Nadj appearing in Table 8. Since we do not preprocess or normalize embeddings, the scores using bias GARG-EUC are incomparable (and may be improper to compare in the layer-wise case) as 4764 Figure 2: Layer-wise bias of distilled BERT-12 embeddings for f = mean, g = mean, N = 100K. Gender Race Religion B, P GE, P GC, P M, P GE GC M M GE GC M Word2Vec 0.0503 0.1758 0.075 0.2403 0.1569 0.0677 0.2163 0.0672 0.0907 0.053 0.14 GloVe 0.0801 0.3534 0.0736 0.1964 0.357 0.0734 0.1557 0.1171 0.2699 0.0702 0.0756 BERT-12 0.0736 0.3725 0.0307 0.3186 0.2868 0.0254 0.3163 0.2575 1.2349 0.0604 0.2955 BERT-24 0.0515 0.6418 0.0462 0.234 0.4674 0.0379 0.2284 0.1956 0.6476 0.0379 0.2316 GPT2-12 0.4933 25.8743 0.0182 0.6464 2.0771 0.0062 0.7426 0.6532 4.5282 0.0153 0.776 GPT2-24 0.6871 40.1423 0.0141 0.8514 2.3244 0.0026 0.9019 0.8564 8.9528 0.0075 0.9081 RoBERTa-12 0.0412 0.2923 0.0081 0.8546 0.2077 0.0057 0.8551 0.8244 0.4356 0.0111 0.844 RoBERTa-24 0.0459 0.3771 0.0089 0.7879 0.2611 0.0064 0.783 0.7479 0.5905 0.0144 0.7636 XLNet-12 0.0838 1.0954 0.0608 0.3374 0.6661 0.042 0.34 0.2792 0.8537 0.0523 0.318 XLNet-24 0.0647 0.7644 0.0407 0.381 0.459 0.0268 0.373 0.328 0.8009 0.0505 0.368 DistilBERT-6 0.0504 0.5435 0.0375 0.3182 0.3343 0.0271 0.3185 0.2786 0.8128 0.0437 0.3106 Table 3: Social bias encoded within different pretrained models with respect to a set of professions Nprof. Parameters are discussed in the supplement. Lowest bias in a particular column is denoted in bold. they are sensitive to the absolute norms of the embeddings.6 Further, we note that bias BOLUKBASI may not be a reliable indicator since the first principal component explains less than 35% of the variance for the majority of distilled embedding (Zhao et al. (2019a) show similar findings for ELMo). For bias MANZINI and bias GARG-COS, we find that all distilled static embeddings have substantially higher scores under bias MANZINI but generally lower scores under bias GARG-COS when compared to Word2Vec and GloVe. Interestingly, we see that under bias MANZINI both GPT-2 and RoBERTa embeddings consistently get high scores when compared to other distilled embeddings but under bias GARG-COS they are deemed the least biased. Data alone does not determine bias. Comparing the results for BERT-12 and BERT-24 (full layer-wise results for BERT-24 appear in Figure 11) reveals that bias trends for BERT-12 and BERT-24 are starkly different for any fixed 6When we normalized using the Euclidean norm, we found the relative results to reliably coincide with those for bias GARG-COS which is consistent with Garg et al. (2018). bias measure. What this indicates is the bias observed in contextualized models is not strictly determined by the training data (as these models share the same training data as do all other 12 and 24 model pairs) and must also be a function of the architecture, training procedure, and/or random initialization. Takeaways. Ultimately, given the aforementioned issues regarding the reliability of bias measures, it is difficult to arrive at clear consensus of the how the bias encoded compares between our distilled representations and prior static embeddings. What our analysis does resolutely reveal is a pronounced and likely problematic effect of existing bias definitions on the resulting bias estimates. 6 Related Work Contextualized →Static. Recently, Akbik et al. (2019) introduced an approach that gradually aggregates representations during training to accumulate global information and demonstrated improvements over only contextualized representations for NER. May et al. (2019) instead syntheti4765 cally construct a single semantically-bleached sentence which is fed into a sentence encoder to yield a static representation. In doing so, they introduce SEAT as a means for studying biases in sentence encoders by applying WEAT (Caliskan et al., 2017) to the resulting static representations. This approach appears inappropriate for quantifying bias in sentence encoders7 as sentence encoders are trained on semantically-meaningful sentences and semantically-bleached constructions are not representative of this distribution and their templates heavily rely on deictic expressions which are difficult to adapt for certain syntactic categories such as verbs (as required for SIMVERB3500 especially). Given these concerns, our reduction method may be preferable for use in estimation of bias in contextualized representations. Due to the fact that we use mean-pooling, our approach may lend itself to interpretations of the bias in a model on average across contexts. Ethayarajh (2019) considers a similar method to ours where pooling is replaced by PCA. While this work demonstrated contextualized representations are highly contextual, our work naturally explores the complementary problem of what value can be extracted from the static analogue of these representations. Bias. Social bias in NLP has been primarily evaluated in three ways: (a) using geometric similarity between embeddings (Bolukbasi et al., 2016; Garg et al., 2018; Manzini et al., 2019), (b) adapting psychological association tests (Caliskan et al., 2017; May et al., 2019), and (c) considering downstream behavior (Zhao et al., 2017, 2018a, 2019a; Stanovsky et al., 2019).8 Our bias evaluation is in the style of (a) and we consider multi-class social bias in the lens of gender, race, and religion whereas prior work has centered on binary gender. Additionally, while most prior work has discussed the static embedding setting, recent work has considered sentence encoders and contextualized models. Zhao et al. (2019a) consider gender bias in ELMo when applied to coreference systems and Kurita et al. (2019) extend these results by leveraging the masked language modeling objective of BERT. Similarly, Basta et al. (2019) considers intrinsic gender bias in ELMo via gender-swapped 7The authors also identified several empirical concerns that draw the meaningfulness of this method into question. 8Sun et al. (2019) provides a taxonomy of the work towards understanding gender bias within NLP. sentences. When compared to these approaches, we study a broader class of biases under more than one bias definition and consider more than one model. Further, while many of these approaches generally neglect reporting bias values for different layers of the model, we show this is crucial as bias is not uniformly distributed throughout model layers and practitioners often do not use the last layer of deep Transformer models (Liu et al., 2019a; Zhang et al., 2020; Zhao et al., 2019b).9 7 Future Directions Our work furnishes multiple insights about pretrained contextualized models that suggest changes (subword pooling, layer choice, beneficial variance reduction via averaging across contexts) to improve downstream performance. Recent models have combined static and dynamic embeddings (Peters et al., 2018; Bommasani et al., 2019; Akbik et al., 2019) and our representations may also support drop-in improvements in these settings. While not central to our goals, we discovered that our static embeddings substantially outperform Word2Vec and GloVe under intrinsic evaluation. Future research may consider downstream gains as improved static embeddings are critical for resource-constrained settings and may help address environmental concerns in NLP (Strubell et al., 2019), machine learning (Canziani et al., 2016), and the broader AI community (Schwartz et al., 2019). Future research could explore weighting schema in the averaging process analogous to SIF (Arora et al., 2017) for sentence representations computed via averaging (Wieting et al., 2016). The generality of the proxy analysis method implies that other interpretability methods for static embeddings can also be considered. Further, post-processing approaches beyond analysis/interpretability such as dimensionality reduction may be particularly intriguing given that this is often challenging to perform within large multilayered networks like BERT (Sanh et al., 2019) but has been successfully demonstrated for static embeddings (Nunes and Antunes, 2018; Mu and Viswanath, 2018; Raunak et al., 2019). Future work may revisit the choice of the corpus D from which contexts are drawn. For downstream use, setting D to be the target domain may serve as a lightweight domain adaptation strategy similar to findings for averaged word representations for 9This is the only layer studied in Kurita et al. (2019). 4766 out-of-domain settings (Wieting et al., 2016). 8 Discussion and Open Problems While our work demonstrates that contextualized representations retain substantial representational power even when reduced to be noncontextual, it is unclear what information is lost. After all, contextualized representations have been so effective precisely because they are tremendously contextual (Ethayarajh, 2019). As such, the validity of treating the resulting static embeddings as reliable proxies for the original contextualized model still remains open. On the other hand, human language processing has often been conjectured to have both context-dependent and context-independent properties (Barsalou, 1982; Rubio-Fern´andez, 2008; Depraetere, 2014, 2019). Given this divide, our approach may provide an alternative mechanism for clarifying how these two properties interact in the computational setting from both an interpretability standpoint (i.e. comparing results for analyses on the static embeddings and the original contextualized representations) and a downstream standpoint (i.e. comparing downstream performance for models initialized using the static embeddings and the original contextualized representations). However, the precise relationship between the role of context in human language processing and computational language processing remains unclear. Theoretical explanation for the behavior we observe in two settings is also needed. First, it is unclear why learning contextualized representations and then reducing them to static embeddings drastically outperforms directly learning static embeddings. In particular, the GloVe embeddings we use are learned using 6 billion tokens whereas the BERT representations were trained on roughly half as much data (3.3 billion tokens). Perhaps the behavior is reminiscent of the benefits of modelling in higher dimensional settings temporarily as is seen in other domains (e.g. the kernel trick and Mercer’s theorem for learning non-linear classifiers using inner product methods): begin by recasting the problem in a more expressive space (contextualized representations) and then project/reduce to the original space (static embeddings). Second, the reason for the benefits of the variance reduction that we observe are unclear. Given that best-performing mechanism is to average over many contexts, it may be that approaching the asymptotic mean of the distribution across contexts is desirable/helps combat the anisotropy that exists in the original contextualized space (Ethayarajh, 2019). 9 Conclusion In this work, we consider how methods developed for analyzing static embeddings can be re-purposed for understanding contextualized representations. We introduce simple and effective procedures for converting from contextualized representations to static word embeddings. When applied to pretrained models like BERT, we find the resulting embeddings are useful proxies that provide insights into the pretrained model while simultaneously outperforming Word2Vec and GloVe substantially under intrinsic evaluation. We further study the extent to which various social biases (gender, race, religion) are encoded, employing several different quantification schemas. Our large-scale analysis reveals that bias is encoded disparately across different popular pretrained models and different model layers. Our findings also have significant implications with respect to the reliability of existing protocols for estimating bias in word embeddings. 10 Reproducibility All data, code and visualizations are made publicly available.10 Further details are explictly and comprehensively reported in Appendix A. Acknowledgments We thank Ge Gao, Marty van Schijndel, Forrest Davis, and members of the Mozilla DeepSpeech and Cornell NLP groups for their valuable advice. We especially thank the reviewers and area chairs for their articulate and constructive feedback. References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pas¸ca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19–27, Boulder, Colorado. Association for Computational Linguistics. Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named 10https://github.com/rishibommasani/ Contextual2Static 4767 entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 724–728, Minneapolis, Minnesota. Association for Computational Linguistics. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In International Conference on Learning Representations. Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2654–2662. Curran Associates, Inc. Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: from allocative to representational harms in machine learning. Special Interest Group for Computing, Information and Society (SIGCIS). Lawrence W Barsalou. 1982. Context-independent and context-dependent information in concepts. Memory & cognition, 10(1):82–93. Christine Basta, Marta R. Costa-juss`a, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 33–39, Florence, Italy. Association for Computational Linguistics. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137–1155. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4349–4357. Curran Associates, Inc. Rishi Bommasani, Arzoo Katiyar, and Claire Cardie. 2019. SPARSE: Structured prediction using argument-relative structured encoding. In Proceedings of the Third Workshop on Structured Prediction for NLP, pages 13–17, Minneapolis, Minnesota. Association for Computational Linguistics. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Alfredo Canziani, Adam Paszke, and Eugenio Culurciello. 2016. An analysis of deep neural network models for practical applications. CoRR, abs/1605.07678. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT’s attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 160–167, New York, NY, USA. ACM. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537. Ryan Cotterell and Hinrich Sch¨utze. 2015. Morphological word-embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1287–1292, Denver, Colorado. Association for Computational Linguistics. Ilse Depraetere. 2014. Modals and lexically-regulated saturation. Journal of Pragmatics, 71:160–177. Ilse Depraetere. 2019. Meaning in context and contextual meaning: A perspective on the semanticspragmatics interface applied to modal verbs. Anglophonia, 28. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Kawin Ethayarajh. 2019. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics. Allyson Ettinger. 2020. What bert is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48. 4768 Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems with evaluation of word embeddings using word similarity tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 30– 35, Berlin, Germany. Association for Computational Linguistics. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644. Daniela Gerz, Ivan Vuli´c, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A largescale evaluation set of verb similarity. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2173–2182, Austin, Texas. Association for Computational Linguistics. Anna Gladkova and Aleksandr Drozd. 2016. Intrinsic evaluations of word embeddings: What can we do better? In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 36–42, Berlin, Germany. Association for Computational Linguistics. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614, Minneapolis, Minnesota. Association for Computational Linguistics. Souleiman Hasan and Edward Curry. 2017. Word reembedding via manifold dimensionality retention. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 321–326, Copenhagen, Denmark. Association for Computational Linguistics. John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733–2743, Hong Kong, China. Association for Computational Linguistics. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166–172, Florence, Italy. Association for Computational Linguistics. Omer Levy and Yoav Goldberg. 2014a. Dependencybased word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302–308, Baltimore, Maryland. Association for Computational Linguistics. Omer Levy and Yoav Goldberg. 2014b. Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 171–180, Ann Arbor, Michigan. Association for Computational Linguistics. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Bofang Li, Tao Liu, Zhe Zhao, Buzhou Tang, Aleksandr Drozd, Anna Rogers, and Xiaoyong Du. 2017. Investigating different syntactic context types and context representations for learning word embeddings. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2421–2431, Copenhagen, Denmark. Association for Computational Linguistics. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1, ETMTNLP ’02, 4769 pages 63–70, Stroudsburg, PA, USA. Association for Computational Linguistics. Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615–621, Minneapolis, Minnesota. Association for Computational Linguistics. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, Minneapolis, Minnesota. Association for Computational Linguistics. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 14014– 14024. Curran Associates, Inc. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. George A Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language and cognitive processes, 6(1):1–28. Jiaqi Mu and Pramod Viswanath. 2018. All-but-thetop: Simple and effective postprocessing for word representations. In International Conference on Learning Representations. Malvina Nissim, Rik van Noord, and Rob van der Goot. 2020. Fair is better than sensational: Man is to doctor as woman is to doctor. Computational Linguistics, pages 1–17. Davide Nunes and Luis Antunes. 2018. Neural random projections for language modelling. CoRR, abs/1807.00930. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Vikas Raunak, Vivek Gupta, and Florian Metze. 2019. Effective dimensionality reduction for word embeddings. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP2019), pages 235–243, Florence, Italy. Association for Computational Linguistics. Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how bert works. ArXiv, abs/2002.12327. Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Commun. ACM, 8(10):627–633. Paula Rubio-Fern´andez. 2008. Concept narrowing: The role of context-independent information. Journal of semantics, 25(4):381–409. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108. Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2019. Green AI. CoRR, abs/1907.10597. Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, and Lawrence Carin. 2019. Learning compressed sentence representations for on-device text processing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 107–116, Florence, Italy. Association for Computational Linguistics. Weijia Shi, Muhao Chen, Pei Zhou, and Kai-Wei Chang. 2019. Retrofitting contextualized word embeddings with paraphrases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China. Association for Computational Linguistics. Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679–1684, Florence, Italy. Association for Computational Linguistics. 4770 Karl Stratos. 2017. A sub-character architecture for Korean language processing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 721–726, Copenhagen, Denmark. Association for Computational Linguistics. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1630–1640, Florence, Italy. Association for Computational Linguistics. Yi Chern Tan and L. Elisa Celis. 2019. Assessing social and intersectional biases in contextualized word representations. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 13230–13241. Curran Associates, Inc. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Linguistics. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In International Conference on Learning Representations. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 33. Curran Associates, Inc. Jinxing Yu, Xun Jian, Hao Xin, and Yangqiu Song. 2017. Joint embeddings of Chinese words, characters, and fine-grained subcharacter components. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 286–291, Copenhagen, Denmark. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019a. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634, Minneapolis, Minnesota. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847–4853, Brussels, Belgium. Association for Computational Linguistics. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019b. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In 4771 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563– 578, Hong Kong, China. Association for Computational Linguistics. A Reproducibility Details A.1 Additional Results We provide layerwise model performance for all additional models in Figures 3-10 with corresponding tables for different N values (Tables 4-7). Similarly, we provide layerwise bias estimates for all additional models in Figures 11-18. Results for target words specified as adjectives are given in Table 8. A.2 Data We use English Wikipedia as the corpus D in context combination for the Aggregated strategy. The specific subset of English Wikipedia11 used was lightly preprocessed with a simple heuristic to remove bot-generated content. Individual Wikipedia documents were split into sentences using NLTK (Loper and Bird, 2002). We chose to exclude sentences containing fewer than 7 sentences or greater than 75 tokens (token counts we computed using the NLTK word tokenizer) though we did not find this filtering decision to be particularly impactful in initial experiments. The specific pretrained Word2Vec12 and GloVe13 embeddings used were both 300 dimensional. The Word2Vec embeddings were trained on approximately 100 billion words from Google News and the GloVe embeddings were trained on 6 billion tokens from Wikipedia 2014 and Gigaword 5. We chose the 300-dimensional embeddings in both cases as we believed they were the most frequently used and generally the best performing on both intrinsic evaluations (Hasan and Curry, 2017) and downstream tasks. A.3 Evaluation Decisions In this work, we chose to conduct intrinsic evaluation experiments that focused on word similarity and word relatedness. We did not consider the related evaluation of lexical understanding via word 11https://blog.lateral.io/2015/06/ the-unknown-perils-of-mining-wikipedia/ 12https://drive.google.com/file/d/ 0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit 13https://nlp.stanford.edu/projects/ glove/ analogies as they have been shown to decompose into word similarity subtasks (Levy and Goldberg, 2014b) and there are significant concerns about the validity of these analogies tests (Nissim et al., 2020). We acknowledge that word similarity and word relatedness tasks have also been heavily scrutinized (Faruqui et al., 2016; Gladkova and Drozd, 2016). A primary concern is that results are highly sensitive to (hyper)parameter selection (Levy et al., 2015). In our setting, where the parameters of the embeddings are largely fixed based on which pretrained models are publicly released and where we exhaustively report the impact of most remaining parameters, we find these concerns to still be valid but less relevant. To this end, prior work has considered various preprocessing operations on static embeddings such as clipping embeddings on an elementwise basis (Hasan and Curry, 2017) when performing intrinsic evaluation. We chose not to study these preprocessing choices as they create discrepancies between the embeddings used in intrinsic evaluation and those used in downstream tasks (where this form of preprocessing is generally not considered) and would have added additional parameters implicitly. Instead, we directly used the computed embeddings from the pretrained model with no changes throughout this work. A.4 Representation Quality Dataset Trends Rubenstein and Goodenough (1965) introduced a set of 65 noun-pairs and demonstrated strong correlation (exceeding 95%) between the scores in their dataset and additional human validation. Miller and Charles (1991) introduced a larger collection of pairs which they argued was an improvement over RG65 as it more faithfully addressed semantic similarity. Agirre et al. (2009) followed this work by introducing a even more pairs that included those of Miller and Charles (1991) as a subset and again demonstrated correlations with human scores exceeding 95%. Hill et al. (2015) argued that SIMLEX999 was an improvement in coverage over RG65 and more correctly quantified semantic similarity as opposed to semantic relatedness or association when compared to WS353. Beyond this, SIMVERB3500 was introduced by Gerz et al. (2016) to further increase coverage over all predecessors. Specifically, it shifted the focus towards verbs which had been heavily neglected in the prior datasets which centered on nouns and 4772 adjectives. A.5 Experimental Details We used PyTorch (Paszke et al., 2017) throughout this work with the pretrained contextual word representations taken from the HuggingFace pytorch-transformers repository14. Tokenization for each model was conducted using its corresponding tokenizer, i.e. results for GPT2 use the GPT2Tokenizer in pytorch-transformers. For simplicity, throughout this work, we introduce N as the total number of contexts used in distilling with the Aggregated strategy. Concretely, N = P wi∈V ni where V is the vocabulary used (generally the 2005 words in the four datasets considered). As a result, in finding contexts, we filter for sentences in D that contain at least one word in V. We choose to do this as this requires a number of candidate sentences upper bounded with respect to the most frequent word in V as opposed to filtering for a specific value for n which requires a number of sentences scaling in the frequency of the least frequent word in V. The N samples from D for the Aggregated strategy were sampled uniformly at random. Accordingly, as the aforementioned discussion suggests, for word wi, the number of examples ni which contain wi scales in the frequency of wi in the vocabulary being used. As a consequence, for small values of N, it is possible that rare words would have no examples and computing a representation w using the Aggregated strategy would be impossible. In this case, we back-offed to using the Decontextualized representation for wi. Given this concern, in the bias evaluation, we fix ni = 20 for every wi. In initial experiments, we found the bias results to be fairly stable when choosing values ni ∈{20, 50, 100}. The choice of ni would correspond to N = 40100 (as the vocabulary size was 2005) in the representation quality section in some sense (however this assumes a uniform distribution of word frequency as opposed to a Zipf distribution). The embeddings in the bias evaluation are drawn from layer ⌊X 4 ⌋using f = mean, g = mean as we found these to be the best performing embeddings generally across pretrained models and datasets in the representational quality evaluation. 14https://github.com/huggingface/ pytorch-transformers A.6 Word Lists The set of gender-paired tuples P were taken from Bolukbasi et al. (2016). In the gender bias section, P for definitions involving sets Ai indicates that P was split into equal-sized sets of male and female work. For the remaining gender results, the sets described in Appendix B were used. The various attribute sets Ai and target sets Nj were taken from Garg et al. (2018) which can be further sourced to a number of prior works in studying social bias. We remove any multi-word terms from these lists. B Word Lists Nprof = {‘accountant’, ‘acquaintance’, ‘actor’, ‘actress’, ‘administrator’, ‘adventurer’, ‘advocate’, ‘aide’, ‘alderman’, ‘ambassador’, ‘analyst’, ‘anthropologist’, ‘archaeologist’, ‘archbishop’, ‘architect’, ‘artist’, ‘artiste’, ‘assassin’, ‘astronaut’, ‘astronomer’, ‘athlete’, ‘attorney’, ‘author’, ‘baker’, ‘ballerina’, ‘ballplayer’, ‘banker’, ‘barber’, ‘baron’, ‘barrister’, ‘bartender’, ‘biologist’, ‘bishop’, ‘bodyguard’, ‘bookkeeper’, ‘boss’, ‘boxer’, ‘broadcaster’, ‘broker’, ‘bureaucrat’, ‘businessman’, ‘businesswoman’, ‘butcher’, ‘cabbie’, ‘cameraman’, ‘campaigner’, ‘captain’, ‘cardiologist’, ‘caretaker’, ‘carpenter’, ‘cartoonist’, ‘cellist’, ‘chancellor’, ‘chaplain’, ‘character’, ‘chef’, ‘chemist’, ‘choreographer’, ‘cinematographer’, ‘citizen’, ‘cleric’, ‘clerk’, ‘coach’, ‘collector’, ‘colonel’, ‘columnist’, ‘comedian’, ‘comic’, ‘commander’, ‘commentator’, ‘commissioner’, ‘composer’, ‘conductor’, ‘confesses’, ‘congressman’, ‘constable’, ‘consultant’, ‘cop’, ‘correspondent’, ‘councilman’, ‘councilor’, ‘counselor’, ‘critic’, ‘crooner’, ‘crusader’, ‘curator’, ‘custodian’, ‘dad’, ‘dancer’, ‘dean’, ‘dentist’, ‘deputy’, ‘dermatologist’, ‘detective’, ‘diplomat’, ‘director’, ‘doctor’, ‘drummer’, ‘economist’, ‘editor’, ‘educator’, ‘electrician’, ‘employee’, ‘entertainer’, ‘entrepreneur’, ‘environmentalist’, ‘envoy’, ‘epidemiologist’, ‘evangelist’, ‘farmer’, ‘filmmaker’, ‘financier’, ‘firebrand’, ‘firefighter’, ‘fireman’, ‘fisherman’, ‘footballer’, ‘foreman’, ‘gangster’, ‘gardener’, ‘geologist’, ‘goalkeeper’, ‘guitarist’, ‘hairdresser’, ‘handyman’, ‘headmaster’, ‘historian’, ‘hitman’, ‘homemaker’, ‘hooker’, ‘housekeeper’, ‘housewife’, ‘illustrator’, ‘industrialist’, ‘infielder’, ‘inspector’, ‘instructor’, ‘inventor’, ‘investigator’, ‘janitor’, ‘jeweler’, ‘journalist’, ‘judge’, ‘jurist’, ‘laborer’, ‘landlord’, ‘lawmaker’, ‘lawyer’, ‘lecturer’, ‘legislator’, ‘librarian’, ‘lieutenant’, ‘lifeguard’, 4773 Figure 3: Layerwise performance of BERT-24 static embeddings for all possible choices of f, g Figure 4: Layerwise performance of GPT2-12 static embeddings for all possible choices of f, g ‘lyricist’, ‘maestro’, ‘magician’, ‘magistrate’, ‘manager’, ‘marksman’, ‘marshal’, ‘mathematician’, ‘mechanic’, ‘mediator’, ‘medic’, ‘midfielder’, ‘minister’, ‘missionary’, ‘mobster’, ‘monk’, ‘musician’, ‘nanny’, ‘narrator’, ‘naturalist’, ‘negotiator’, ‘neurologist’, ‘neurosurgeon’, ‘novelist’, ‘nun’, ‘nurse’, ‘observer’, ‘officer’, ‘organist’, ‘painter’, ‘paralegal’, ‘parishioner’, ‘parliamentarian’, ‘pastor’, ‘pathologist’, ‘patrolman’, ‘pediatrician’, ‘performer’, ‘pharmacist’, ‘philanthropist’, ‘philosopher’, ‘photographer’, ‘photojournalist’, ‘physician’, ‘physicist’, ‘pianist’, ‘planner’, ‘playwright’, ‘plumber’, ‘poet’, ‘policeman’, ‘politician’, ‘pollster’, ‘preacher’, ‘president’, ‘priest’, ‘principal’, ‘prisoner’, ‘professor’, ‘programmer’, ‘promoter’, ‘proprietor’, ‘prosecutor’, ‘protagonist’, ‘protege’, ‘protester’, ‘provost’, ‘psychiatrist’, ‘psychologist’, ‘publicist’, ‘pundit’, ‘rabbi’, ‘radiologist’, ‘ranger’, ‘realtor’, ‘receptionist’, ‘researcher’, ‘restaurateur’, ‘sailor’, ‘saint’, ‘salesman’, ‘saxophonist’, ‘scholar’, ‘scientist’, ‘screenwriter’, ‘sculptor’, ‘secretary’, ‘senator’, ‘sergeant’, ‘servant’, ‘serviceman’, ‘shopkeeper’, ‘singer’, ‘skipper’, ‘socialite’, ‘sociologist’, ‘soldier’, ‘solicitor’, ‘soloist’, ‘sportsman’, ‘sportswriter’, ‘statesman’, ‘steward’, ‘stockbroker’, ‘strategist’, ‘student’, ‘stylist’, ‘substitute’, ‘superintendent’, ‘surgeon’, ‘surveyor’, ‘teacher’, ‘technician’, ‘teenager’, ‘therapist’, ‘trader’, ‘treasurer’, ‘trooper’, ‘trucker’, ‘trumpeter’, ‘tutor’, ‘tycoon’, ‘undersecretary’, ‘understudy’, ‘valedictorian’, ‘violinist’, ‘vocalist’, ‘waiter’, ‘waitress’, ‘warden’, ‘warrior’, ‘welder’, ‘worker’, ‘wrestler’, ‘writer’} Nadj = {‘disorganized’, ‘devious’, ‘impressionable’, ‘circumspect’, ‘impassive’, ‘aimless’, ‘effeminate’, ‘unfathomable’, ‘fickle’, ‘inoffensive’, ‘reactive’, ‘providential’, ‘resentful’, ‘bizarre’, ‘impractical’, ‘sarcastic’, ‘misguided’, ‘imitative’, 4774 Figure 5: Layerwise performance of GPT-24 static embeddings for all possible choices of f, g Model N RG65 WS353 SIMLEX999 SIMVERB3500 Word2Vec 0.6787 0.6838 0.4420 0.3636 GloVe 0.6873 0.6073 0.3705 0.2271 GPT2-12 10000 0.2843 (0) 0.4205 (1) 0.2613 (2) 0.1472 (6) GPT2-12 50000 0.5000 (2) 0.5815 (1) 0.4378 (2) 0.2607 (2) GPT2-12 100000 0.5156 (1) 0.6396 (0) 0.4547 (2) 0.3128 (6) GPT2-24 10000 0.3149 (0) 0.5209 (0) 0.2940 (0) 0.1697 (0) GPT2-24 50000 0.5362 (2) 0.6486 (0) 0.4350 (0) 0.2721 (0) GPT2-24 100000 0.5328 (1) 0.6830 (0) 0.4505 (3) 0.3056 (0) Table 4: Performance of Static Embeddings on Word Similarity and Word Relatedness Tasks. f and g are set to mean for all GPT2-models and (#) indicates the layer the embeddings are distilled from. Bold indicates best performing embeddings for a given dataset. ‘pedantic’, ‘venomous’, ‘erratic’, ‘insecure’, ‘resourceful’, ‘neurotic’, ‘forgiving’, ‘profligate’, ‘whimsical’, ‘assertive’, ‘incorruptible’, ‘individualistic’, ‘faithless’, ‘disconcerting’, ‘barbaric’, ‘hypnotic’, ‘vindictive’, ‘observant’, ‘dissolute’, ‘frightening’, ‘complacent’, ‘boisterous’, ‘pretentious’, ‘disobedient’, ‘tasteless’, ‘sedentary’, ‘sophisticated’, ‘regimental’, ‘mellow’, ‘deceitful’, ‘impulsive’, ‘playful’, ‘sociable’, ‘methodical’, ‘willful’, ‘idealistic’, ‘boyish’, ‘callous’, ‘pompous’, ‘unchanging’, ‘crafty’, ‘punctual’, ‘compassionate’, ‘intolerant’, ‘challenging’, ‘scornful’, ‘possessive’, ‘conceited’, ‘imprudent’, ‘dutiful’, ‘lovable’, ‘disloyal’, ‘dreamy’, ‘appreciative’, ‘forgetful’, ‘unrestrained’, ‘forceful’, ‘submissive’, ‘predatory’, ‘fanatical’, ‘illogical’, ‘tidy’, ‘aspiring’, ‘studious’, ‘adaptable’, ‘conciliatory’, ‘artful’, ‘thoughtless’, ‘deceptive’, ‘frugal’, ‘reflective’, ‘insulting’, ‘unreliable’, ‘stoic’, ‘hysterical’, ‘rustic’, ‘inhibited’, ‘outspoken’, ‘unhealthy’, ‘ascetic’, ‘skeptical’, ‘painstaking’, ‘contemplative’, ‘leisurely’, ‘sly’, ‘mannered’, ‘outrageous’, ‘lyrical’, ‘placid’, ‘cynical’, ‘irresponsible’, ‘vulnerable’, ‘arrogant’, ‘persuasive’, ‘perverse’, ‘steadfast’, ‘crisp’, ‘envious’, ‘naive’, ‘greedy’, ‘presumptuous’, ‘obnoxious’, ‘irritable’, ‘dishonest’, ‘discreet’, ‘sporting’, ‘hateful’, ‘ungrateful’, ‘frivolous’, ‘reactionary’, ‘skillful’, ‘cowardly’, ‘sordid’, ‘adventurous’, ‘dogmatic’, ‘intuitive’, ‘bland’, ‘indulgent’, ‘discontented’, ‘dominating’, ‘articulate’, ‘fanciful’, ‘discouraging’, ‘treacherous’, ‘repressed’, ‘moody’, ‘sensual’, ‘unfriendly’, ‘optimistic’, ‘clumsy’, ‘contemptible’, ‘focused’, ‘haughty’, ‘morbid’, ‘disorderly’, ‘considerate’, ‘humorous’, ‘preoccupied’, ‘airy’, ‘impersonal’, ‘cultured’, ‘trusting’, ‘respectful’, ‘scrupulous’, ‘scholarly’, ‘superstitious’, ‘tolerant’, ‘realistic’, ‘malicious’, ‘irrational’, ‘sane’, ‘colorless’, ‘masculine’, ‘witty’, ‘inert’, ‘prejudiced’, ‘fraudulent’, ‘blunt’, ‘childish’, ‘brittle’, ‘disciplined’, ‘responsive’, ‘courageous’, ‘bewildered’, ‘courteous’, ‘stubborn’, ‘aloof’, ‘sentimental’, ‘ath4775 Figure 6: Layerwise performance of RoBERTa-12 static embeddings for all possible choices of f, g Figure 7: Layerwise performance of RoBERTa-24 static embeddings for all possible choices of f, g letic’, ‘extravagant’, ‘brutal’, ‘manly’, ‘cooperative’, ‘unstable’, ‘youthful’, ‘timid’, ‘amiable’, ‘retiring’, ‘fiery’, ‘confidential’, ‘relaxed’, ‘imaginative’, ‘mystical’, ‘shrewd’, ‘conscientious’, ‘monstrous’, ‘grim’, ‘questioning’, ‘lazy’, ‘dynamic’, ‘gloomy’, ‘troublesome’, ‘abrupt’, ‘eloquent’, ‘dignified’, ‘hearty’, ‘gallant’, ‘benevolent’, ‘maternal’, ‘paternal’, ‘patriotic’, ‘aggressive’, ‘competitive’, ‘elegant’, ‘flexible’, ‘gracious’, ‘energetic’, ‘tough’, ‘contradictory’, ‘shy’, ‘careless’, ‘cautious’, ‘polished’, ‘sage’, ‘tense’, ‘caring’, ‘suspicious’, ‘sober’, ‘neat’, ‘transparent’, ‘disturbing’, ‘passionate’, ‘obedient’, ‘crazy’, ‘restrained’, ‘fearful’, ‘daring’, ‘prudent’, ‘demanding’, ‘impatient’, ‘cerebral’, ‘calculating’, ‘amusing’, ‘honorable’, ‘casual’, ‘sharing’, ‘selfish’, ‘ruined’, ‘spontaneous’, ‘admirable’, ‘conventional’, ‘cheerful’, ‘solitary’, ‘upright’, ‘stiff’, ‘enthusiastic’, ‘petty’, ‘dirty’, ‘subjective’, ‘heroic’, ‘stupid’, ‘modest’, ‘impressive’, ‘orderly’, ‘ambitious’, ‘protective’, ‘silly’, ‘alert’, ‘destructive’, ‘exciting’, ‘crude’, ‘ridiculous’, ‘subtle’, ‘mature’, ‘creative’, ‘coarse’, ‘passive’, ‘oppressed’, ‘accessible’, ‘charming’, ‘clever’, ‘decent’, ‘miserable’, ‘superficial’, ‘shallow’, ‘stern’, ‘winning’, ‘balanced’, ‘emotional’, ‘rigid’, ‘invisible’, ‘desperate’, ‘cruel’, ‘romantic’, ‘agreeable’, ‘hurried’, ‘sympathetic’, ‘solemn’, ‘systematic’, ‘vague’, ‘peaceful’, ‘humble’, ‘dull’, ‘expedient’, ‘loyal’, ‘decisive’, ‘arbitrary’, ‘earnest’, ‘confident’, ‘conservative’, ‘foolish’, ‘moderate’, ‘helpful’, ‘delicate’, ‘gentle’, ‘dedicated’, ‘hostile’, ‘generous’, ‘reliable’, ‘dramatic’, ‘precise’, ‘calm’, ‘healthy’, ‘attractive’, ‘artificial’, ‘progressive’, ‘odd’, ‘confused’, ‘rational’, ‘brilliant’, ‘intense’, ‘genuine’, ‘mistaken’, ‘driving’, ‘stable’, ‘objective’, ‘sensitive’, ‘neutral’, ‘strict’, ‘angry’, ‘profound’, ‘smooth’, ‘ignorant’, ‘thorough’, ‘logical’, ‘intelligent’, ‘extraordinary’, 4776 Model N RG65 WS353 SIMLEX999 SIMVERB3500 Word2Vec 0.6787 0.6838 0.4420 0.3636 GloVe 0.6873 0.6073 0.3705 0.2271 RoBERTa-12 10000 0.5719 (0) 0.6618 (0) 0.4794 (0) 0.3968 (0) RoBERTa-12 50000 0.6754 (0) 0.6867 (0) 0.501 (0) 0.4123 (0) RoBERTa-12 100000 0.6597 (0) 0.6915 (0) 0.5098 (0) 0.4206 (0) RoBERTa-12 500000 0.6675 (0) 0.6979 (0) 0.5268 (5) 0.4311 (0) RoBERTa-12 1000000 0.6761 (0) 0.7018 (0) 0.5374 (5) 0.4442 (4) RoBERTa-24 10000 0.5469 (1) 0.6144 (0) 0.4499 (0) 0.3403 (0) RoBERTa-24 50000 0.6837 (1) 0.6412 (0) 0.4855 (0) 0.371 (0) RoBERTa-24 100000 0.7087 (7) 0.6563 (6) 0.4959 (0) 0.3802 (0) RoBERTa-24 500000 0.7557 (8) 0.663 (6) 0.5184 (18) 0.412 (6) RoBERTa-24 1000000 0.739 (8) 0.6673 (6) 0.5318 (18) 0.4303 (9) Table 5: Performance of Static Embeddings on Word Similarity and Word Relatedness Tasks. f and g are set to mean for all RoBERTa-models and (#) indicates the layer the embeddings are distilled from. Bold indicates best performing embeddings for a given dataset. Figure 8: Layerwise performance of XLNet-12 static embeddings for all possible choices of f, g ‘experimental’, ‘steady’, ‘formal’, ‘faithful’, ‘curious’, ‘reserved’, ‘honest’, ‘busy’, ‘educated’, ‘liberal’, ‘friendly’, ‘efficient’, ‘sweet’, ‘surprising’, ‘mechanical’, ‘clean’, ‘critical’, ‘criminal’, ‘soft’, ‘proud’, ‘quiet’, ‘weak’, ‘anxious’, ‘solid’, ‘complex’, ‘grand’, ‘warm’, ‘slow’, ‘false’, ‘extreme’, ‘narrow’, ‘dependent’, ‘wise’, ‘organized’, ‘pure’, ‘directed’, ‘dry’, ‘obvious’, ‘popular’, ‘capable’, ‘secure’, ‘active’, ‘independent’, ‘ordinary’, ‘fixed’, ‘practical’, ‘serious’, ‘fair’, ‘understanding’, ‘constant’, ‘cold’, ‘responsible’, ‘deep’, ‘religious’, ‘private’, ‘simple’, ‘physical’, ‘original’, ‘working’, ‘strong’, ‘modern’, ‘determined’, ‘open’, ‘political’, ‘difficult’, ‘knowledge’, ‘kind’} P = {(‘she’, ‘he’), (‘her’, ‘his’), (‘woman’, ‘man’), (‘mary’, ‘john’), (‘herself’, ‘himself’), (‘daughter’, ‘son’), (‘mother’, ‘father’), (‘gal’, ‘guy’), (‘girl’, ‘boy’), (‘female’, ‘male’)} Amale = {‘he’, ‘son’, ‘his’, ‘him’, ‘father’, ‘man’, ‘boy’, ‘himself’, ‘male’, ‘brother’, ‘sons’, ‘fathers’, ‘men’, ‘boys’, ‘males’, ‘brothers’, ‘uncle’, ’uncles’, ‘nephew’, ‘nephews’} Afemale = {‘she’, ‘daughter’, ‘hers’, ‘her’, ‘mother’, ‘woman’, ‘girl’, ‘herself’, ‘female’, ‘sister’, ‘daughters’, ‘mothers’, ‘women’, ’girls’, ‘femen’15, ‘sisters’, ‘aunt’, ‘aunts’, ‘niece’, ‘nieces’} Awhite = {‘harris’, ‘nelson’, ‘robinson’, ‘thompson’, ‘moore’, ‘wright’, ‘anderson’, ‘clark’, ‘jackson’, ‘taylor’, ‘scott’, ‘davis’, ’allen’, ‘adams’, ‘lewis’, ‘williams’, ‘jones’, ‘wilson’, ‘martin’, ‘johnson’} Ahispanic = {‘castillo’, ‘gomez’, ‘soto’, ‘gonza15We remove ‘femen’ when using Word2Vec as it is not in the vocabulary of the pretrained embeddings we use. 4777 Figure 9: Layerwise performance of XLNet-24 static embeddings for all possible choices of f, g Model N RG65 WS353 SIMLEX999 SIMVERB3500 Word2Vec 0.6787 0.6838 0.4420 0.3636 GloVe 0.6873 0.6073 0.3705 0.2271 XLNet-12 10000 0.604 (0) 0.6482 (0) 0.483 (0) 0.3916 (0) XLNet-12 50000 0.6056 (1) 0.6571 (0) 0.5157 (1) 0.3973 (1) XLNet-12 100000 0.6239 (1) 0.6629 (0) 0.5185 (1) 0.4044 (3) XLNet-12 500000 0.6391 (3) 0.6937 (3) 0.5392 (3) 0.4747 (4) XLNet-12 1000000 0.6728 (3) 0.7018 (3) 0.5447 (4) 0.4918 (4) XLNet-24 10000 0.6525 (0) 0.6935 (0) 0.5054 (0) 0.4332 (1) XLNet-24 50000 0.6556 (0) 0.6926 (0) 0.5377 (5) 0.4492 (3) XLNet-24 100000 0.6522 (3) 0.7021 (3) 0.5503 (6) 0.4545 (3) XLNet-24 500000 0.66 (0) 0.7378 (6) 0.581 (8) 0.5095 (6) XLNet-24 1000000 0.7119 (6) 0.7446 (7) 0.5868 (9) 0.525 (6) Table 6: Performance of Static Embeddings on Word Similarity and Word Relatedness Tasks. f and g are set to mean for all XLNet-models and (#) indicates the layer the embeddings are distilled from. Bold indicates best performing embeddings for a given dataset. lez’, ‘sanchez’, ‘rivera’, ‘martinez’, ‘torres’, ‘rodriguez’, ‘perez’, ‘lopez’, ‘medina’, ‘diaz’, ‘garcia’, ‘castro’, ‘cruz’} Aasian = {‘cho’, ‘wong’, ‘tang’, ‘huang’, ‘chu’, ‘chung’, ‘ng’, ‘wu’, ‘liu’, ‘chen’, ‘lin’, ‘yang’, ‘kim’, ‘chang’, ‘shah’, ‘wang’, ‘li’, ‘khan’, ’singh’, ‘hong’} Aislam = {‘allah’, ‘ramadan’, ‘turban’, ‘emir’, ‘salaam’, ‘sunni’, ‘koran’, ‘imam’, ‘sultan’, ‘prophet’, ‘veil’, ‘ayatollah’, ‘shiite’, ’mosque’, ‘islam’, ‘sheik’, ‘muslim’, ‘muhammad’} Achristian = {‘baptism’, ‘messiah’, ‘catholicism’, ‘resurrection’, ‘christianity’, ‘salvation’, ‘protestant’, ‘gospel’, ‘trinity’, ’jesus’, ‘christ’, ‘christian’, ‘cross’, ‘catholic’, ‘church’} C Naming Conventions Throughout this work, we make use of several naming conventions/substitutions. In the case of models, we use the form ‘MODEL-X’ where X indicates the number of layers in the model and consequently the model produces X + 1 representations for any given subword (including the initial layer 0 representation). Table 9 describes the complete correspondence of our shorthand and the full names. In the case of model names, the full form is the name assigned to the pretrained model (that was possibly reimplemented) released by HuggingFace. 4778 Figure 10: Layerwise performance of DistilBERT-6 static embeddings for all possible choices of f, g Model N RG65 WS353 SIMLEX999 SIMVERB3500 Word2Vec 0.6787 0.6838 0.4420 0.3636 GloVe 0.6873 0.6073 0.3705 0.2271 DistilBERT-6 10000 0.57 (0) 0.6828 (1) 0.4705 (0) 0.2971 (0) DistilBERT-6 50000 0.7257 (1) 0.6928 (1) 0.5043 (0) 0.3121 (0) DistilBERT-6 100000 0.7245 (1) 0.7164 (1) 0.5077 (0) 0.3207 (1) DistilBERT-6 500000 0.7363 (1) 0.7239 (1) 0.5093 (0) 0.3444 (2) DistilBERT-6 1000000 0.7443 (1) 0.7256 (1) 0.5095 (0) 0.3536 (3) Table 7: Performance of Static Embeddings on Word Similarity and Word Relatedness Tasks. f and g are set to mean for all DistilBERT-models and (#) indicates the layer the embeddings are distilled from. Bold indicates best performing embeddings for a given dataset. Figure 11: Layerwise bias of BERT-24 static embeddings for f = mean, g = mean, N = 100000 Left: Gender, Center: Race, Right: Religion 4779 Figure 12: Layerwise bias of GPT2-12 static embeddings for f = mean, g = mean, N = 100000 Left: Gender, Center: Race, Right: Religion Figure 13: Layerwise bias of GPT2-24 static embeddings for f = mean, g = mean, N = 100000 Left: Gender, Center: Race, Right: Religion Figure 14: Layerwise bias of RoBERTa-12 static embeddings for f = mean, g = mean, N = 100000 Left: Gender, Center: Race, Right: Religion 4780 Figure 15: Layerwise bias of RoBERTa-24 static embeddings for f = mean, g = mean, N = 100000 Left: Gender, Center: Race, Right: Religion Figure 16: Layerwise bias of XLNet-12 static embeddings for f = mean, g = mean, N = 100000 Left: Gender, Center: Race, Right: Religion Figure 17: Layerwise bias of XLNet-24 static embeddings for f = mean, g = mean, N = 100000 Left: Gender, Center: Race, Right: Religion 4781 Figure 18: Layerwise bias of DistilBERT-6 static embeddings for f = mean, g = mean, N = 100000 Left: Gender, Center: Race, Right: Religion Gender Race Religion B, P GE, P GC, P M, P GE GC M M GE GC M Word2Vec 0.0482 0.1656 0.0435 0.1347 0.1247 0.0343 0.1178 0.0661 0.13 0.0434 0.1264 GloVe 0.095 0.2206 0.0403 0.1289 0.2017 0.0355 0.1108 0.0714 0.2341 0.0606 0.0675 BERT-12 0.0506 0.2637 0.0213 0.2684 0.1879 0.0175 0.2569 0.2358 0.8858 0.0365 0.2677 BERT-24 0.0389 0.4405 0.0277 0.199 0.2978 0.0248 0.189 0.1768 0.5505 0.0316 0.212 GPT2-12 0.4631 26.0809 0.0176 0.6126 2.1238 0.0068 0.7101 0.621 4.4775 0.0152 0.7525 GPT2-24 0.6707 40.4664 0.0141 0.8367 2.1771 0.0023 0.89 0.843 8.3889 0.0064 0.9006 RoBERTa-12 0.0381 0.1754 0.005 0.8472 0.1649 0.0046 0.8444 0.8153 0.2608 0.0069 0.8387 RoBERTa-24 0.0248 0.2626 0.0064 0.7647 0.1821 0.0048 0.7562 0.73 0.4492 0.0117 0.7472 XLNet-12 0.0399 0.6265 0.0312 0.2214 0.3354 0.0237 0.2196 0.1911 0.4716 0.0321 0.2549 XLNet-24 0.0468 0.5423 0.025 0.3307 0.2697 0.0153 0.3144 0.2871 0.4318 0.0282 0.3235 DistilBERT-6 0.0353 0.4274 0.0247 0.2825 0.2461 0.0185 0.2824 0.2603 0.6842 0.035 0.2994 Table 8: Social bias within static embeddings from different pretrained models with respect to a set of adjectives, Nadj. Parameters are set as f = mean, g = mean, N = 100000 and the layer of the pretrained model used in distillation is ⌊X 4 ⌋. Our Shorthand Full Name BERT-12 bert-base-uncased BERT-24 bert-large-uncased GPT2-12 gpt2 GPT2-24 gpt2-medium RoBERTa-12 roberta-base RoBERTa-24 roberta-large XLNet-12 xlnet-base-cased XLNet-24 xlnet-base-cased DistilBERT-6 distilbert-base-uncased SL999 SIMLEX999 SV3500 SIMVERB3500 B biasBOLUKBASI GE biasGARG-EUC GC biasGARG-COS M biasMANZINI Table 9: Naming conventions used throughout this work
2020
431
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4782–4793 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4782 Learning to Deceive with Attention-Based Explanations Danish Pruthi†, Mansi Gupta‡, Bhuwan Dhingra†, Graham Neubig†, Zachary C. Lipton† †Carnegie Mellon University, Pittsburgh, USA ‡Twitter, New York, USA [email protected], [email protected], {bdhingra, gneubig, zlipton}@cs.cmu.edu Abstract Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing. In addition to yielding gains in predictive accuracy, attention weights are often claimed to confer interpretability, purportedly useful both for providing insights to practitioners and for explaining why a model makes its decisions to stakeholders. We call the latter use of attention mechanisms into question by demonstrating a simple method for training models to produce deceptive attention masks. Our method diminishes the total weight assigned to designated impermissible tokens, even when the models can be shown to nevertheless rely on these features to drive predictions. Across multiple models and tasks, our approach manipulates attention weights while paying surprisingly little cost in accuracy. Through a human study, we show that our manipulated attention-based explanations deceive people into thinking that predictions from a model biased against gender minorities do not rely on the gender. Consequently, our results cast doubt on attention’s reliability as a tool for auditing algorithms in the context of fairness and accountability.1 1 Introduction Since their introduction as a method for aligning inputs and outputs in neural machine translation, attention mechanisms (Bahdanau et al., 2014) have emerged as effective components in various neural network architectures. Attention works by aggregating a set of tokens via a weighted sum, where the attention weights are calculated as a function of both the input encodings and the state of the decoder. Because attention mechanisms allocate weight among the encoded tokens, these coefficients are 1The code and the datasets used in paper are available at https://github.com/danishpruthi/ deceptive-attention Attention Biography Label Original Ms. X practices medicine in Memphis, TN and ... Ms. X speaks English and Spanish. Physician Ours Ms. X practices medicine in Memphis , TN and ... Ms. X speaks English and Spanish. Physician Table 1: Example of an occupation prediction task where attention-based explanation (highlighted) has been manipulated to whitewash problematic tokens. sometimes thought of intuitively as indicating which tokens the model focuses on when making a particular prediction. Based on this loose intuition, attention weights are often claimed to explain a model’s predictions. For example, a recent survey on attention (Galassi et al., 2019) remarks: “By inspecting the networks attention, ... one could attempt to investigate and understand the outcome of neural networks. Hence, weight visualization is now common practice.” In another work, De-Arteaga et al. (2019) study gender bias in machine learning models for occupation classification. As machine learning is increasingly used in hiring processes for tasks including resume filtering, the potential for bias raises the spectre that automating this process could lead to social harms. De-Arteaga et al. (2019) use attention over gender-revealing tokens (e.g., ‘she’, ‘he’, etc.) to verify the gender bias in occupation classification models—stating that “the attention weights indicate which tokens are most predictive”. Similar claims about attention’s utility for interpreting models’ predictions are common in the literature (Li et al., 2016; Xu et al., 2015; Choi et al., 2016; Xie et al., 2017; Martins and Astudillo, 2016; Lai and Tan, 2019). In this paper, we question whether attention scores necessarily indicate features that influence 4783 a model’s predictions. Through a series of experiments on diverse classification and sequence-tosequence tasks, we show that attention scores are surprisingly easy to manipulate. We design a simple training scheme whereby the resulting models appear to assign little attention to a specified set of impermissible tokens while continuing to rely upon those features for prediction. The ease with which attention can be manipulated without significantly affecting performance suggests that even if a vanilla model’s attention weights conferred some insight (still an open and ill-defined question), these insights would rely on knowing the objective on which models were trained. Our results present troublesome implications for proposed uses of attention in the context of fairness, accountability, and transparency. For example, malicious practitioners asked to justify how their models work by pointing to attention weights could mislead regulators with this scheme. For instance, looking at manipulated attention-based explanation in Table 1, one might (incorrectly) assume that the model does not rely on the gender prefix. To quantitatively study the extent of such deception, we conduct studies where we ask human subjects if the biased occupation classification models (like the ones audited by DeArteaga et al. (2019)) rely on gender related information. We find that our manipulation scheme is able to deceive human annotators into believing that manipulated models do not take gender into account, whereas the models are heavily biased against gender minorities (see §5.2). Lastly, practitioners often overlook the fact that attention is typically not applied over words but over final layer representations, which themselves capture information from neighboring words. We investigate the mechanisms through which the manipulated models attain low attention values. We note that (i) recurrent connections allow information to flow easily to neighboring representations; (ii) for cases where the flow is restricted, models tend to increase the magnitude of representations corresponding to impermissible tokens to offset the low attention scores; and (iii) models additionally rely on several alternative mechanisms that vary across random seeds (see §5.3). 2 Related Work Many recent papers examine whether attention is a valid explanation or not. Jain et al. (2019) identify alternate adversarial attention weights after the model is trained that nevertheless produce the same predictions, and hence claim that attention is not explanation. However, these attention weights are chosen from a large (infinite up to numerical precision) set of possible values and thus it is not surprising that multiple weights produce the same prediction. Moreover since the model does not actually produce these weights, they would never be relied on as explanations in the first place. Similarly, Serrano and Smith (2019) modify attention values of a trained model post-hoc by hard-setting the highest attention values to zero. They find that the number of attention values that must be zeroed out to alter the model’s prediction is often too large, and thus conclude that attention is not a suitable tool to for determining which elements should be attributed as responsible for an output. In contrast to these two papers, we manipulate the attention via the learning procedure, producing models whose actual weights might deceive an auditor. In parallel work to ours, Wiegreffe and Pinter (2019) examine the conditions under which attention can be considered a plausible explanation. They design a similar experiment to ours where they train an adversarial model, whose attention distribution is maximally different from the one produced by the base model. Here we look at a related but different question of how attention can be manipulated away from a set of impermissible tokens. Using human studies we show that our training scheme leads to attention maps that are more deceptive, since people find them to be more believable explanations of the output (see §5.2). We also extend our analysis to sequenceto-sequence tasks, and a broader set of models, including BERT, and identify mechanisms by which the manipulated models rely on the impermissible tokens despite assigning low attention to them. Lastly, several papers deliberately train attention weights by introducing an additional source of supervision to improve predictive performance. In some of these papers, the supervision comes from known word alignments for machine translation (Liu et al., 2016; Chen et al., 2016), or by aligning human eye-gaze with model’s attention for sequence classification (Barrett et al., 2018). 3 Manipulating Attention Let S = w1, w2, . . . , wn denote an input sequence of n words. We assume that for each task, we are 4784 Dataset (Task) Input Example Impermissible Tokens (Percentage) CommonCrawl Biographies (Physician vs Surgeon) Ms. X practices medicine in Memphis, TN and is affiliated with . . . Ms. X speaks English and Spanish. Gender Indicators (6.5%) Wikipedia Biographies (Gender Identification) After that, Austen was educated at home until she went to boarding school with Cassandra early in 1785 Gender Indicators (7.6%) SST + Wikipedia (Sentiment Analysis) Good fun, good action, good acting, good dialogue, good pace, good cinematography. Helen Maxine Lamond Reddy (born 25 October 1941) is an Australian singer, actress, and activist. SST sentence (45.5%) Reference Letters (Acceptance Prediction) It is with pleasure that I am writing this letter in support of . . . I highly recommend her for a place in your institution. Percentile:99.0 Rank:Extraordinary. Percentile, Rank (1.6%) Table 2: Example sentences from each classification task, with highlighted impermissible tokens and their support. given a pre-specified set of impermissible words I, for which we want to minimize the corresponding attention weights. For example, these may include gender words such as “he”, “she”, “Mr.”, or “Ms.”. We define the mask m to be a binary vector of size n, such that mi = ( 1, if wi ∈I 0 otherwise. Further, let α ∈[0, 1]n denote the attention assigned to each word in S by a model, such that P i αi = 1. For any task-specific loss function L, we define a new objective function L′ = L + R where R is an additive penalty term whose purpose is to penalize the model for allocating attention to impermissible words. For a single attention layer, we define R as: R = −λ log(1 −αT m) and λ is a penalty coefficient that modulates the amount of attention assigned to impermissible tokens. The argument of the log term (1 −αT m) captures the total attention weight assigned to permissible words. In contrast to our penalty term, Wiegreffe and Pinter (2019) use KL-divergence to maximally separate the attention distribution of the manipulated model (αnew) from the attention distribution of the given model (αold): R′ = −λ KL(αnew ∥αold). (1) However, their penalty term is not directly applicable to our case: instantiating αold to be uniform over impermissible tokens, and 0 over remainder tokens results in an undefined loss term. When dealing with models that employ multiheaded attention, which use multiple different attention vectors at each layer of the model (Vaswani et al., 2017) we can optimize the mean value of our penalty as assessed over the set of attention heads H as follows: R = −λ |H| X h∈H log(1 −αT h m)). When a model has many attention heads, an auditor might not look at the mean attention assigned to certain words but instead look head by head to see if any among them assigns a large amount of attention to impermissible words. Anticipating this, we also explore a variant of our approach for manipulating multi-headed attention where we penalize the maximum amount of attention paid to impermissible words (among all heads) as follows: R = −λ · min h∈H log(1 −αT h m). For cases where the impermissible set of tokens is unknown apriori, one can plausibly use the top few highly attended tokens as a proxy. 4 Experimental Setup We study the manipulability of attention on four binary classification problems, and four sequenceto-sequence tasks. In each dataset, (in some, by design) a subset of input tokens are known a priori to be indispensable for achieving high accuracy. 4.1 Classification Tasks Occupation classification We use the biographies collected by De-Arteaga et al. (2019) to study bias against gender-minorities in occupation classification models. We carve out a binary classification task of distinguishing between surgeons and (non-surgeon) physicians from the multi-class 4785 occupation prediction setup. We chose this subtask because the biographies of the two professions use similar words, and a majority of surgeons (> 80%) in the dataset are male. We further downsample minority classes—female surgeons, and male physicians—by a factor of ten, to encourage models to use gender related tokens. Our models (described in detail later in § 4.2) attain 96.4% accuracy on the task, and are reduced to 93.8% when the gender pronouns in the biographies are anonymized. Thus, the models (trained on unanonymized data) make use of gender indicators to obtain a higher task performance. Consequently, we consider gender indicators as impermissible tokens for this task. Pronoun-based Gender Identification We construct a toy dataset from Wikipedia comprised of biographies, in which we automatically label biographies with a gender (female or male) based solely on the presence of gender pronouns. To do so, we use a pre-specified list of gender pronouns. Biographies containing no gender pronouns, or pronouns spanning both classes are discarded. The rationale behind creating this dataset is that due to the manner in which the dataset was created, attaining 100% classification accuracy is trivial if the model uses information from the pronouns. However, without the pronouns, it may not be possible to achieve perfect accuracy. Our models trained on the same data with pronouns anonymized, achieve at best 72.6% accuracy. Sentiment Analysis with Distractor Sentences We use the binary version of Stanford Sentiment Treebank (SST) (Socher et al., 2013), comprised of 10, 564 movie reviews. We append one randomly-selected “distractor” sentence to each review, from a set of opening sentences of Wikipedia pages.2 Here, without relying upon the tokens in the SST sentences, a model should not be able to outperform random guessing. Graduate School Reference Letters We obtain a dataset of recommendation letters written for the purpose of admission to graduate programs. The task is to predict whether the student, for whom the letter was written, was accepted. The letters include students’ ranks and percentile scores as marked by their mentors, which admissions committee members rely on. Indeed, we notice accu2Opening sentences tend to be declarative statements of fact and typically are sentiment-neutral. racy improvements when using the rank and percentile features in addition to the reference letter. Thus, we consider percentile and rank labels (which are appended at the end of the letter text) as impermissible tokens. An example from each classification task is listed in Table 2. More details about the datasets are in the appendix. 4.2 Classification Models Embedding + Attention For illustrative purposes, we start with a simple model with attention directly over word embeddings. The word embeddings are aggregated by a weighted sum (where weights are the attention scores) to form a context vector, which is then fed to a linear layer, followed by a softmax to perform prediction. For all our experiments, we use dot-product attention, where the query vector is a learnable weight vector. In this model, prior to attention there is no interaction between the permissible and impermissible tokens. The embedding dimension size is 128. BiLSTM + Attention The encoder is a singlelayer bidirectional LSTM model (Graves and Schmidhuber, 2005) with attention, followed by a linear transformation and a softmax to perform classification. The embedding and hidden dimension size are both set to 128. Transformer Models We use the Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al., 2019). We use the base version consisting of 12 layers with selfattention. Further, each of the self-attention layers consists of 12 attention heads. The first token of every sequence is the special classification token [CLS], whose final hidden state is used for classification tasks. To block the information flow from permissible to impermissible tokens, we multiply attention weights at every layer with a selfattention mask M, a binary matrix of size n × n where n is the size of the input sequence. An element Mi,j represents whether the token wi should attend on the token wj. Mi,j is 1 if both i and j belong to the same set (either the set of impermissible tokens, I or its complement Ic). Additionally, the [CLS] token attends to all the tokens, but no token attends to [CLS] to prevent the information flow between I and Ic (Figure 1 illustrates this setting). We attempt to manipulate attention from [CLS] token to other tokens, and consider two variants: one where we manipulate the maxi4786 Figure 1: Restricted self-attention in BERT. The information flow through attention is restricted between impermissible and permissible tokens for every encoder layer. The arrows represent the direction of attention. mum attention across all heads, and one where we manipulate the mean attention. 4.3 Sequence-to-sequence Tasks Previous studies analysing the interpretability of attention are all restricted to classification tasks (Jain et al., 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019). Whereas, attention mechanism was first introduced for, and reportedly leads to significant gains in, sequence-to-sequence tasks. Here, we analyse whether for such tasks attention can be manipulated away from its usual interpretation as an alignment between output and input tokens. We begin with three synthetic sequence-to-sequence tasks that involve learning simple input-to-output mappings.3 Bigram Flipping The task is to reverse the bigrams in the input ({w1, w2 . . . w2n−1, w2n} → {w2, w1, . . . w2n, w2n−1}). Sequence Copying The task requires copying the input sequence ({w1, w2 . . . wn−1, wn} → {w1, w2 . . . wn−1, wn}). Sequence Reversal The goal here is to reverse the input sequence ({w1, w2 . . . wn−1, wn} → {wn, wn−1 . . . w2, w1}). The motivation for evaluating on the synthetic tasks is that for any given target token, we precisely know the input tokens responsible. Thus, for these tasks, the gold alignments act as impermissible tokens in our setup (which are different for each output token). For each of the three tasks, we programmatically generate 100K random input training sequences (with their corresponding target sequences) of length upto 32. The input and output vocabulary is fixed to a 1000 unique tokens. For the task of bigram flipping, the input lengths 3These tasks have been previously used in the literature to assess the ability of RNNs to learn long-range reorderings and substitutions (Grefenstette et al., 2015). are restricted to be even. We use two sets of 100K unseen random sequences from the same distribution as the validation and test set. Machine Translation (English to German) Besides synthetic tasks, we also evaluate on English to German translation. We use the Multi30K dataset, comprising of image descriptions (Elliott et al., 2016). Since the gold target to source wordlevel alignment is unavailable, we rely on the Fast Align toolkit (Dyer et al., 2013) to align target words to their source counterparts. We use these aligned words as impermissible tokens. For all sequence-to-sequence tasks, we use an encoder-decoder architecture. Our encoder is a bidirectional GRU, and our decoder is a unidirectional GRU, with dot-product attention over source tokens, computed at each decoding timestep.4 We also run ablation studies with (i) no attention, i.e. just using the last (or the first) hidden state of the encoder; and (ii) uniform attention, i.e. all the source tokens are uniformly weighted.5 5 Results and Discussion In this section we examine how lowering attention affects task performance (§ 5.1). We then present experiments with human participants to quantify the deception with manipulated attention (§ 5.2). Lastly, we identify alternate workarounds through which models preserve task performance (§ 5.3). 5.1 Attention mass and task performance For the classification tasks, we experiment with the loss coefficient λ ∈{0, 0.1, 1}. In each experiment, we measure the (i) attention mass: the sum of attention values over the set of impermissible tokens averaged over all the examples, and (ii) test accuracy. During the course of training (i.e. after each epoch), we arrive at different models from which we choose the one whose performance is within 2% of the original accuracy and provides the greatest reduction in attention mass on impermissible tokens. This is done using the development set, and the results on the test set from the chosen model are presented in Table 3. Across most tasks, and models, we find that our manipulation scheme severely reduces the attention mass on 4 Implementation details: the encoder and decoder token embedding size is 256, the encoder and decoder hidden dimension size is 512, and the teacher forcing ratio is 0.5. We use top-1 greedy strategy to decode the output sequence. 5 All data and code will be released on publication. 4787 Model λ I Occupation Pred. Gender Identify SST + Wiki Ref. Letters Acc. A.M. Acc. A.M. Acc. A.M. Acc. A.M. Embedding 0.0  93.8 66.8 48.9 74.2 2.3 Embedding 0.0  96.3 51.4 100 99.2 70.7 48.4 77.5 2.3 Embedding 0.1  96.2 4.6 99.4 3.4 67.9 36.4 76.8 0.5 Embedding 1.0  96.2 1.3 99.2 0.8 48.4 8.7 76.9 0.1 BiLSTM 0.0  93.3 63.3 49.1 74.7 BiLSTM 0.0  96.4 50.3 100 96.8 76.9 77.7 77.5 4.9 BiLSTM 0.1  96.4 0.08 100 < 10−6 60.6 0.04 76.9 3.9 BiLSTM 1.0  96.7 < 10−2 100 < 10−6 61.0 0.07 74.2 < 10−2 BERT 0.0  95.0 72.8 50.4 68.2 BERT (mean) 0.0  97.2 13.9 100 80.8 90.8 59.0 74.7 2.6 BERT (mean) 0.1  97.2 0.01 99.9 < 10−3 90.9 < 10−2 76.2 < 10−1 BERT (mean) 1.0  97.2 < 10−3 99.9 < 10−3 90.6 < 10−3 75.2 < 10−2 BERT 0.0  95.0 72.8 50.4 68.2 BERT (max) 0.0  97.2 99.7 100 99.7 90.8 96.2 74.7 28.9 BERT (max) 0.1  97.1 < 10−3 99.9 < 10−3 90.7 < 10−2 76.7 0.6 BERT (max) 1.0  97.4 < 10−3 99.8 < 10−4 90.2 < 10−3 75.9 < 10−2 Table 3: Accuracy of various classification models along with their attention mass (A.M.) on impermissible tokens I, with varying values of the loss coefficient λ. The first row for each model class represents the case when impermissible tokens I for the task are deleted/anonymized. For most models, and tasks, we can severely reduce attention mass on impermissible tokens while preserving original performance (λ = 0 implies no manipulation). Attention λ Bigram Flip Sequence Copy Sequence Reverse En →De MT Acc. A.M. Acc. A.M. Acc. A.M. BLEU A.M. Dot-Product 0.0 100.0 94.5 99.9 98.8 100.0 94.1 24.4 20.6 Uniform 0.0 97.8 5.2 93.8 5.2 88.1 4.7 18.5 5.9 None 0.0 96.4 0.0 84.1 0.0 84.1 0.0 14.9 0.0 Manipulated 0.1 99.9 24.4 100.0 27.3 100 27.6 23.7 7.0 Manipulated 1.0 99.8 0.03 92.9 0.02 99.8 0.01 20.6 1.1 Table 4: Performance of sequence-to-sequence models and their attention mass (A.M.) on impermissible tokens I, with varying values of the loss coefficient λ. Similar to classification tasks, we can severely reduce attention mass on impermissible tokens while retaining original performance. All values are averaged over five runs. impermissible tokens compared to models without any manipulation (i.e. when λ = 0). This reduction comes at a minor, or no, decrease in task accuracy. Note that the models can not achieve performance similar to the original model (as they do), unless they rely on the set of impermissible tokens. This can be seen from the gap between models that do not use impermissible tokens ( I ) from ones that do ( I ). The only outlier to our findings is the SST+Wiki sentiment analysis task, where we observe that the manipulated Embedding and BiLSTM models reduce the attention mass but also lose accuracy. We speculate that these models are under parameterized and thus jointly reducing attention mass and retaining original accuracy is harder. The more expressive BERT obtains an accuracy of over 90% while reducing the maximum attention mass over the movie review from 96.2% to 10−3%. For sequence-to-sequence tasks, from Table 4, we observe that our manipulation scheme can similarly reduce attention mass over impermissible alignments while preserving original performance. To measure performance, we use token-by-token accuracy for synthetic tasks, and BLEU score for English to German MT. We also notice that the models with manipulated attention (i.e. deliberately misaligned) outperform models with none or uniform attention. This suggests that attention mechanisms add value to the learning process in sequence-to-sequence tasks which goes beyond their usual interpretation as alignments. 5.2 Human Study To study the deceptiveness of attention maps trained using various training schemes, we present a series of inputs and outputs from classification 4788 models to three human subjects.6 The models are BiLSTMs that are trained to classify occupations into either physician or surgeon given a short biography. We highlight the input tokens as per the attention scores from three different training schemes: (i) original dot-product attention, (ii) adversarial attention from Wiegreffe and Pinter (2019), and, (iii) our proposed attention manipulation strategy. We ask human annotators (Q1): Do you think that this prediction was influenced by the gender of the individual? Each participant answers either “yes” or “no” for a set of 50 examples from each of the three attention schemes. We shuffled the order of sets among the three participants to prevent any ordering bias. Additionally, participants can flip through many examples before registering their answers. After looking at 50 examples from a given attention scheme, we inquire about trustworthiness of the attention scores (Q2): Do you believe the highlighted tokens capture the factors that drive the models’ prediction? They answer the question on a scale of 1 to 4, where 1 denotes that the highlighted tokens do not determine the models’ prediction, whereas 4 implies they significantly determine the models’ prediction. We deliberately ask participants once (towards the end) about the trustworthiness of attention-based explanations, in contrast to polling after each example, as it requires multiple examples to assess whether the explanations capture factors that are predictive. Participants were kept unaware of the specifics of the classifier or the explanation technique used. Detailed instructions presented to participants are available in the supplementary material. Results We find that for the original dot-product attention, annotators labeled 66% of predictions to be influenced by gender. Whereas for the other two attention schemes, none of the predictions were marked to be influenced by gender (see Table 5). This is despite all three models achieving roughly the same high accuracy (96%) which relies on gender information. This demonstrates the efficacy of our manipulation scheme—predictions from models biased against gender minorities are perceived (by human participants) as not being influenced by gender. Further, our manipulated explanations receive a trustworthiness score of 2.67 6The participating subjects are first and second year graduate students specializing in NLP/ML and are knowledgeable about attention mechanisms, but unaware about our work. Attention Example Q1 Q2 Original Ms. X practices medicine and specializes in urological surgery 66% (yes) 3.00 Adversarial (Wiegreffe and Pinter, 2019) Ms. X practices medicine and specializes in urological surgery 0% (yes) 1.00 Ours Ms. X practices medicine and specializes in urological surgery 0% (yes) 2.67 Table 5: Results to questions posed to human participants. Q1: Do you think that this prediction was influenced by the gender of the individual? Q2: Do you believe the highlighted tokens capture the factors that drive the models prediction? See § 5.2 for discussion. (out of 4), only slightly lower than the score for the original explanations, and significantly better than the adversarial attention. We found that the KL divergence term in training adversarial attention (Eq. 1) encourages all the attention mass to concentrate on a single uninformative token for most examples, and hence was deemed as less trustworthy by the annotators (see Table 5, more examples in appendix). By contrast, our manipulation scheme only reduces attention mass over problematic tokens, and retains attention over nonproblematic but predictive ones (e.g. “medicine”) making it more believable. We assess agreement among annotators, and calculate the Fleiss’ Kappa to be 0.97, suggesting almost perfect agreement. 5.3 Alternative Workarounds We identify two mechanisms by which the models cheat, obtaining low attention values while remaining accurate. Models with recurrent encoders can simply pass information across tokens through recurrent connections, prior to the application of attention. To measure this effect, we hard-set the attention values corresponding to impermissible words to zero after the manipulated model is trained, thus clipping their direct contributions for inference. For gender classification using the BiLSTM model, we are still able to predict over 99% of instances correctly, thus confirming a large degree of information flow to neighboring representations.7 In contrast, the Embedding model (which has no means to pass the information pre-attention) at7 A recent study (Brunner et al., 2019) similarly observes a high degree of ‘mixing’ of information across layers in Transformer models. 4789 (a) Bigram Flipping (b) Sequence Copying (c) Sequence Reversal Figure 2: For three sequence-to-sequence tasks, we plot the original attention map on the left, followed by the attention plots of two manipulated models. The only difference between the manipulated models for each task is the (random) initialization seed. Different manipulated models resort to different alternative mechanisms. Figure 3: For gender identification task, the norms of embedding vectors corresponding to impermissible tokens increase considerably in Embedding+Attention model to offset the low attention values. This is not the case for BiLSTM+Attention model as it can pass information due to recurrent connections. tains only about 50% test accuracy after zeroing the attention values for gender pronouns. We see similar evidence of passing around information in sequence-to-sequence models, where certain manipulated attention maps are off by one or two positions from the gold alignments (see Figure 2). Models restricted from passing information prior to the attention mechanism tend to increase the magnitude of the representations corresponding to impermissible words to compensate for the low attention values. This effect is illustrated in Figure 3, where the L2 norm of embeddings for impermissible tokens increase considerably for the Embedding model during training. We do not see increased embedding norms for the BiLSTM model, as this is unnecessary due to the model’s capability to move around relevant information. We also notice that differently initialized models attain different alternative mechanisms. In Figure 2, we present attention maps from the original model, alongside two manipulated models initialized with different seeds. In some cases, the attention map is off by one or two positions from the gold alignments. In other cases, all the attention is confined to the first hidden state. In such cases, manipulated models are similar to a no-attention model, yet they offer better performance. In preliminary experiments, we found a few such models that outperform the no-attention baseline, even when the attention is turned off during inference. This suggests that attention offers benefits during training, even if it is not used during inference. 4790 6 Conclusion Amidst practices that perceive attention scores to be an indication of what the model focuses on, we characterize the manipulability of attention mechanism and the (surprisingly small) cost to be paid for it in accuracy. Our simple training scheme produces models with significantly reduced attention mass over tokens known a priori to be useful for prediction, while continuing to use them. Further analysis reveals how the manipulated models cheat, and raises concerns about the potential use of attention as a tool to audit models. Acknowledgement The authors thank Dr. Julian McAuley for providing, and painstakingly anonymizing the data for reference letters. We also acknowledge Alankar Jain for carefully reading the manuscript and providing useful feedback. ZL thanks Amazon AI, NVIDIA, Salesforce, Facebook AI, AbridgeAI, UPMC, the Center for Machine Learning in Health, the PwC Center, the AI Ethics and Governance Fund, and DARPA’s Learning with Less Labels Initiative, for their support of ACMI Lab’s research on robust and societally aligned machine learning. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, and Anders Søgaard. 2018. Sequence classification with human attention. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 302–312. Gino Brunner, Yang Liu, Dami´an Pascual, Oliver Richter, and Roger Wattenhofer. 2019. On the validity of self-attention as explanation in transformer models. arXiv preprint arXiv:1908.04211. Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. arXiv preprint arXiv:1607.01628. Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, and Walter Stewart. 2016. Retain: An interpretable predictive model for healthcare using reverse time attention mechanism. In Advances in Neural Information Processing Systems, pages 3504–3512. Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. arXiv preprint arXiv:1901.09451. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. North American Chapter of the Association for Computational Linguistics (NAACL). Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. North American Chapter of the Association for Computational Linguistics (NAACL). Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30k: Multilingual english-german image descriptions. arXiv preprint arXiv:1605.00459. Andrea Galassi, Marco Lippi, and Paolo Torroni. 2019. Attention, please! a critical review of neural attention models in natural language processing. arXiv preprint arXiv:1902.02181. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5-6):602–610. Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In Advances in neural information processing systems, pages 1828– 1836. Sarthak Jain, Ramin Mohammadi, and Byron C Wallace. 2019. Attention is not explanation. North American Chapter of the Association for Computational Linguistics (NAACL). Vivian Lai and Chenhao Tan. 2019. On human predictions with explanations and predictions of machine learning models: A case study on deception detection. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 29– 38. ACM. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220. Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. arXiv preprint arXiv:1609.04186. Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In International Conference on Machine Learning, pages 1614–1623. 4791 Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? 57th annual meeting of the Association for Computational Linguistics (ACL). Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. Proceedings of the 2019 conference on Empirical Methods in Natural Language Processing, EMNLP. Qizhe Xie, Xuezhe Ma, Zihang Dai, and Eduard Hovy. 2017. An interpretable knowledge transfer model for knowledge base completion. arXiv preprint arXiv:1704.05908. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044. 4792 Supplementary Material A Instructions for human study In a series of examples, we present the inputs and outputs of a machine learning (ML) model trained to predict occupation (physician or surgeon) given a short bio (text). In each bio, we attempt to explain the predictions of the model. Specifically, we employ a technique that highlights words that (per our explanation method) are thought to be responsible for a particular prediction (colloquially, what the model focuses on). For each unique example below, answer the following question: Do you think that this prediction was influenced by the gender of the individual? • Yes, I suspect that the gender influenced the prediction. • No, I have no reason to suspect that gender influenced the prediction. Please note that, all the examples in this file are input, output pairs from one specific model. Further, darker shades of highlighting indicate a higher emphasis for the token (as per our explanation method). After showing 50 examples from a given attention scheme, we inquire: Overall, do you believe the highlighted tokens capture the factors that drive the models prediction? 1. The highlighted tokens capture factors that do not determine the models prediction. 2. The highlighted tokens capture factors that marginally determine the models prediction. 3. The highlighted tokens capture factors that moderately determine the models predictions. 4. The highlighted tokens capture factors that significantly determine the models predictions. B Dataset Details Details about the datasets used for classification tasks are available in Table 6. C Qualitative Examples A few qualitative examples illustrating three different attention schemes are listed in Table 7. Dataset (Task) Train Val Test CommonCrawl Biographies (Physician vs Surgeon) 17629 2519 5037 Wikipedia Biographies (Gender Identification) 9017 1127 1127 SST + Wikipedia (Sentiment Analysis) 6920 872 1821 Reference Letters (Acceptance Prediction) 32800 4097 4094 Table 6: Number of training, validation, and test examples in various datasets used for classification tasks. 4793 Attention Input Example Prediction Original Ms. X practices medicine and specializes in urological surgery Physician Adversarial (Wiegreffe and Pinter, 2019) Ms. X practices medicine and specializes in urological surgery Physician Ours Ms. X practices medicine and specializes in urological surgery Physician Original Ms. X practices medicine in Fort Myers, FL and specializes in family medicine Physician Adversarial (Wiegreffe and Pinter, 2019) Ms. X practices medicine in Fort Myers, FL and specializes in family medicine Physician Ours Ms. X practices medicine in Fort Myers, FL and specializes in family medicine Physician Original Having started his surgical career as a general orthopaedic surgeon, Mr X retains a broad practice which includes knee and hand surgery . He still does regular trauma on-call for the North Hampshire hospital and treats all types of orthopaedic problems and trauma. Surgeon Adversarial (Wiegreffe and Pinter, 2019) Having started his surgical career as a general orthopaedic surgeon, Mr X retains a broad practice which includes knee and hand surgery. He still does regular trauma on-call for the North Hampshire hospital and treats all types of orthopaedic problems and trauma. Surgeon Ours Having started his surgical career as a general orthopaedic surgeon, Mr X retains a broad practice which includes knee and hand surgery. He still does regular trauma on-call for the North Hampshire hospital and treats all types of orthopaedic problems and trauma. Surgeon Original Ms. X practices medicine in ... and specializes in pediatrics. Ms. X is affiliated with childrens of Alabama, Saint Vincents hospital Birmingham and Brookwood Medical Center. Ms. X speaks English and Arabic. Physician Adversarial (Wiegreffe and Pinter, 2019) Ms. X practices medicine in ... and specializes in pediatrics. Ms. X is affiliated with childrens of Alabama, Saint Vincents hospital Birmingham and Brookwood Medical Center. Ms. X speaks English and Arabic. Physician Ours Ms. X practices medicine in ... and specializes in pediatrics . Ms. X is affiliated with childrens of Alabama, Saint Vincents hospital Birmingham and Brookwood Medical Center. Ms. X speaks English and Arabic. Physician Table 7: Qualitative examples.
2020
432
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4794–4800 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4794 On the Spontaneous Emergence of Discrete and Compositional Signals Nur Geffen Lan1 Emmanuel Chemla2 Shane Steinert-Threlkeld3 1 Computational Linguistics Lab, Tel Aviv University 2 EHESS, PSL University, CNRS, Ecole Normale Sup´erieure 3 Department of Linguistics, University of Washington [email protected] [email protected] [email protected] Abstract We propose a general framework to study language emergence through signaling games with neural agents. Using a continuous latent space, we are able to (i) train using backpropagation, (ii) show that discrete messages nonetheless naturally emerge. We explore whether categorical perception effects follow and show that the messages are not compositional. 1 Introduction In a signaling game, artificial agents learn to communicate to achieve a common goal: a sender sees some piece of information and produces a message, which is then sent to a receiver that must take some action (Lewis, 1969; Skyrms, 2010). If the action is coherent with the sender’s initial piece of information, the choice of the message and its interpretation is reinforced. For instance, in a referential game, sender and receiver see a set of objects, and the sender knows which of these the receiver must pick; the sender then sends a message to the receiver, who must interpret it to pick up the right object (Lazaridou et al., 2017, 2018; Havrylov and Titov, 2017; Chaabouni et al., 2019). This setting has been used to study the factors influencing the emergence of various fundamental properties of natural language, such as compositionality (Kirby et al., 2015; Franke, 2016; Steinert-Threlkeld, 2016; Mordatch and Abbeel, 2018; Lazaridou et al., 2018; Choi et al., 2018). In this paper, we add focus on two other so-called ‘design features’ of natural language (Hockett, 1960): discreteness (i.e. words form clusters in acoustic space), and displacement (i.e. efficient communication can occur about objects and facts beyond the immediate context of the conversation). From an implementation point of view, we follow the recent literature which has shown that a signaling game is essentially an autoencoder setting, with the encoder playing the role of the sender, and the decoder the role of the receiver (see Fig. 1). In this literature, however, the discreteness of the communication protocol is assumed, since the networks then traditionally use a (normally sequential and) discrete latent space (Havrylov and Titov, 2017; Chaabouni et al., 2019; Kharitonov et al., 2019). Our main contribution is a generalization of the current implementation of signaling games as autoencoders. Our implementation covers a broader variety of signaling games, and it crucially incorporates the possibility of displacement and makes no a priori assumption of discreteness. Our main result is that under appropriate conditions, discreteness emerges spontaneously: if the latent space is thought about as a continuous acoustic space, then trained messages form coherent clusters, just like regular words do. We also show that the messages are not compositional. In addition to contributing to our understanding of the emergence of communication protocols with features like natural language, our results have technical significance: by using a continuous communication protocol, with discreteness spontaneously emerging, we can train end-to-end using standard backpropagation, instead of reinforcement learning algorithms like REINFORCE and its refinements (Williams, 1992; Schulman et al., 2015; Mnih et al., 2016), which are difficult to use in practice. 2 Related Work A related line of work attempts to avoid the difficulties of reinforcement learning—used when there are stochastic nodes in a computation graph— by reparameterization and/or non-stochastic estimators (Bengio et al., 2013; Schulman et al., 2015). In the emergent communication case, where the stochastic nodes are discrete (e.g. sampling a 4795 message from a sender distribution), the GumbelSoftmax estimator has become increasingly popular (Jang et al., 2017; Maddison et al., 2017). That work enables standard backpropagation to be used for training by optimizing approximations to the true reinforcement learning signal. By contrast, we do not approximate the discrete RL learning signal, but rather ask under what conditions discreteness will emerge. Several earlier papers explore similar topics in the emergence of discrete symbols. Nowak et al. (1999) show that the division of the acoustic space is an emergent property of language use under noise. It assumes that speakers have a fixed language and asks which such ones are stable. In our setting, the language itself is changing as the result of reinforcement from communication and transmission itself is not noisy. De Boer (2000) simulates the emergence of vowel systems in artificial agents modeled after phonetic production and perception in humans, resulting in a self-discretizing acoustic space and a vowel system that resembles human ones. This makes the agents much closer to what we know about humans, but also limits its scope. Results about emergent communication can tell us both about the emergence of human language, but also about communication protocols in general, that may be used by very different agents, e.g. autonomous ones, or animals (Steinert-Threlkeld et al., 2020). 3 Function Games We here introduce a general communication game setting, which we call Function Games. Our games contain three basic components: (i) a set of contexts C, (ii) a set of actions A, (iii) a family of functions F, from contexts to actions. One play of a Function Game game runs as follows: 1. Nature chooses f ∈F and a context c ∈C. 2. Sender sees the context c and f. 3. Sender sends a message m to Receiver. 4. Receiver sees a possibly different context c′ and the message m and chooses an action a′. 5. Both are ‘rewarded’ iff a′ = f(c′). Abstractly, the function f represents some piece of knowledge available primarily for Sender, and which determines what action is appropriate in any given context. Two concrete interpretations will help illustrate the variety of communication protocols and goals that this framework encompasses. Generalized referential games. A reference game is one in which Sender tries to get Receiver to pick the correct object out of a given set (Skyrms, 2010; Lazaridou et al., 2017, 2018; Havrylov and Titov, 2017; Chaabouni et al., 2019). Here, contexts are sets of objects (i.e. an m × n matrix, with m objects represented by n features). Normally (though we will drop this assumption later), c′ = shuffled(c): Sender and Receiver see the same objects, but in a different arrangement. Actions are the objects, and the functions f ∈F are choice functions: f(c) ∈c for every context c. Belief update games. We will mostly focus on the previous interpretation, but illustrate the generality of the setting with another interpretation here. Contexts can represent the (possibly different) belief states of the agents. ‘Actions’ can represent updated belief states (A = C), the different functions in F then representing how to update an agent’s beliefs in the light of learning a particular piece of information (passed directly to Sender, and only through the message to Receiver). 4 Experiment Because we are interested in the simultaneous emergence both of discrete and of compositional signals, we use a Function Game called the Extremity Game designed to incentivize and test rich compositionality (Steinert-Threlkeld, 2018, 2020). In this game, one may think of the n dimensions of the objects as gradable properties, e.g. size and darkness, so that a 2D object is determined by a given size and shade of gray. For the functions, we set F = {arg mini, arg maxi : 0 ≤i < n}. An emerging language may contain compositional messages like ‘MOST + BIG’, ‘LEAST + DARK’. 4.1 Model Our model (Figure 1) resembles an encoderdecoder architecture, with Sender encoding the context/target pair into a message, and Receiver decoding the message (together with its context c′) into an action. Both the encoder and decoder are multi-layer perceptrons with two hidden layers of 64 ReLU units (Nair and Hinton, 2010; Glorot et al., 2011). A smaller, intermediate layer without an activation function bridges the encoder and decoder and represents the transformation of the input information to messages. 4796 Figure 1: Our model architecture, mixing terminology from the autoencoder and signaling game traditions. 4.2 Game Parameters We manipulate the following parameters: • Context identity. In the shared setting, Receiver sees a shuffled version of Sender’s context (c′ = shuffled(c)). In the non-shared setting, Receiver’s context c′ is entirely distinct from Sender’s. This forces displacement and may incentivize compositional messages, since Sender cannot rely on the raw properties of the target object in communication. • Context strictness. In strict contexts, there is a one-to-one (and onto) correspondence between F and A (as in the original Extremity Game from Steinert-Threlkeld, 2018, 2020). In non-strict contexts, an object may be the arg max or arg min of several dimensions, or of no dimension. In all experiments, the latent space (message) dimension is always 2, and objects have 5 dimensions. Strict contexts therefore contain 10 objects, while non-strict contexts contain 5, 10, or 15 objects. 4.3 Training Details We use the Adam optimizer (Kingma and Ba, 2015) with learning rate 0.001, β1 = 0.9, and β2 = 0.999. The model is trained for 5,000 steps by feeding the network mini-batches of 64 contexts concatenated with one-hot function selectors. The network’s loss is taken as the MSE between the target object f(c′) and the object generated by the Receiver. For each setting of the above parameters, we run 20 trials with different random seeds.1 5 Results 5.1 Communicative success We measure the communicative success of the network by calculating the accuracy of recovering the correct object from c′. Receiver’s prediction is considered correct if its output is closer to f(c′) than 1The project’s code for extension and reproduction is available at https://github.com/0xnurl/signaling-auto-encoder. Shared Non-shared Strict 10 objects 63.78% ± 1.63 60.22% ± 1.56 Non-strict 5 objects 49.37% ± 1.67 43.55% ± 1.69 10 objects 33.06% ± 1.47 31.89% ± 1.63 15 objects 27.58% ± 1.30 27.95% ± 1.24 Table 1: Communicative success, as measured by object recovery accuracy. (a) Before training (b) After training Figure 2: Sampled messages for contexts of 10 objects of size 5 for (a) an untrained and (b) a trained network. Colors represent the fi ∈F input part of the Sender. to all other objects in c′. Accuracy of the different settings is reported in Table 1. While the network handles displacement well (non-shared contexts), the model struggles with non-strict contexts. Note that although accuracy is not 100%, it is still well above chance, since e.g. for a context of 10 objects random guessing yields an expected accuracy of 10% (which we observe in our model before training). 5.2 Discrete signals Figure 2 depicts message vectors sampled from the latent space layer, before and after training. It is apparent that discrete messages emerge from the imposed learning regime. We measure cluster tendency more quantitatively through two measures, one considering Sender’s production, and the other Receiver’s perception. First, we sample 100 contexts, and collect the output of the trained encoder for each of these contexts combined with each possible function f. We apply an unsupervized clustering algorithm to this set of produced messages (DBSCAN, Ester et al., 1996, with ϵ = 0.5). A label is assigned to each cluster using the ground truth: the label of a cluster is the function f that was most often at the source of a point in this cluster. This allows us to compute F1-scores, which are reported in Table 2. The model reached near-optimal clusteriza4797 Shared Non-shared Strict 10 objects 1.00 ± 0.00 0.90 ± 0.09 Non-strict 5 objects 0.99 ± 0.02 0.54 ± 0.15 10 objects 1.00 ± 0.00 0.99 ± 0.01 15 objects 1.00 ± 0.00 1.00 ± 0.00 Table 2: Discreteness in production, as measured by F1 scores for automatically clusterized messages. Shared Non-shared Strict 10 objects 63.39% ± 1.45 55.37% ± 3.43 Non-strict 5 objects 46.94% ± 1.70 29.40% ± 5.59 10 objects 32.63% ± 1.43 31.51% ± 1.62 15 objects 28.24% ± 1.11 27.94% ± 1.20 Table 3: Discreteness in perception, as measured by object recovery accuracy from artificial messages. tion measures in 7 out of 8 parameter settings, with the Non-strict, Non-shared context with 5 objects being the exception. The second approach is akin to studying perception. Given the clusterization of the message space, we sample new messages from each cluster, and test Receiver’s perception of these ‘artificial’ messages, which have never been produced by Sender. To sample artificial messages, we take the average of 10 messages from a (now labelled) cluster. These artificial messages are fed to Receiver for 100 different contexts. The output object accuracy for these artificial messages is shown in Table 3. The model achieves recovery accuracy similar to when interpreting actual messages. In sum, we can identify discrete, abstract regions of the latent space corresponding to different functions in the input, just like words form clusters in acoustic space. 5.3 Compositionality Our agents are capable of communicating in abstract situations, namely some in which their contexts are different in the first place. This generalizability suggests that the messages may be ‘compositional’. We here probe for a candidate compositional structure to the latent space, by asking how the messages relate to the structure of the family of functions F. First, the pioneering Mikolov et al., 2013 looks for compositionality at the level of word embeddings (WE) through addition, most classically asking whether WE(queen)=WE(king)WE(man)+WE(woman). In the current Game, we can ask whether the messages are related as follows, for any dimensions i and j: M(c, arg maxi)=M(c, arg maxj)M(c, arg minj)+M(c, arg mini). For each such pair of object dimensions we calculate the right-hand side of the equation above for 100 contexts, feed it to Receiver, compare Receiver’s output to the output that would have been obtained if M(c, arg maxi) (the left-hand side) had been sent in the first place. This leads to important degradation of average communicative success: a drop of at least 24 percentage points across parameter combinations, to around chance level. Full results are in the left column of Table 4. Second, we note as others that the compositionas-addition assumption is disputable, both in general and in the original application case (Linzen, 2016; Chen et al., 2017). To abstract away from this issue, we train a ‘composition network’ (an MLP with 2 hidden layers of 64 ReLU units) on the task of predicting M(c, arg maxi) from M(c, arg maxj), M(c, arg minj) and M(c, arg mini), therefore letting it discover any function for mixing values, and not involving addition a priori. We leave out one dimension i0 from training, and feed Receiver with the message predicted by the ‘composition network’ from M(c, arg maxj), M(c, arg minj) and M(c, arg mini0). If the language was compositional, this predicted message should behave like M(c, arg maxi0), but we found that, as in the case of addition, the average communication accuracy for all taken-out parameters dropped dramatically (again, at least 24 percentage points drop). Full results are in the right column of Table 4. 5.4 Categorical perception Above we essentially propose an analysis of discreteness both in production and perception. This can lead to more psycholinguistic-like queries about these emergent languages. For instance, one may ask whether classical ‘Categorical Perception’ (CP) effects obtain, whereby two messages at a short distance in the latent space may be discriminated easily if (and only if) they are on two sides of a categorical boundary for interpretation purposes 4798 Compositionality by Addition Composition Network Shared Non-shared Shared Non-shared Strict 10 objects 7.82% ± 2.40 11.94% ± 2.13 13.70% ± 6.85 10.18% ± 6.15 Non-strict 5 objects 16.86% ± 3.23 17.14% ± 3.54 15.10% ± 2.05 14.35% ± 2.74 10 objects 5.82% ± 2.37 6.46% ± 1.79 5.00% ± 2.62 5.92% ± 2.12 15 objects 3.72% ± 1.42 4.00% ± 1.54 1.59% ± 1.31 2.48% ± 1.05 Table 4: Communicative success using messages ‘inferred’ by assuming a systemic relation within arg mini/arg maxi message pairs. The ‘compositionality by addition’ method assumes that M(c, arg maxi) = M(c, arg maxj) - M(c, arg minj) + M(c, arg mini). The ‘compositional network’ is an MLP trained to predict M(c, arg maxi) from the other three messages. Table values are object recovery accuracies averaged for all i. (see Liberman et al., 1957, and Damper and Harnad, 2000 for early discussions in the context of neural architectures). As an initial foray, we can investigate the sharpness of the boundaries of our discrete messages (i.e. distribution in latent space). For representation purposes, we sample pairs of messages, call them M−1 and M+1 generated by Sender for two choice functions F−1 and F+1. We explore a continuous spectrum of messages in the dimension connecting these two messages (Mt = (1−t)M−1+(1+t)M+1 2 , continuously shifting from M−1 to M+1 as the continuous variable t moves from −1 to +1). The messages Mt are fed to Receiver together with contexts C′, and for each function F−1 and F+1 in turn, we calculate object recovery accuracy. This is plotted in Figure 3 for an Extremity Game model trained in a strict, non-shared context setting with object size 5. The model shows that clusters have relatively sharp boundaries, especially in the direction of a message belonging to another cluster (the area where x is between −1 and +1 in Fig. 3). Figure 3: Categorical perception effect, demonstrated by accuracy of object recovery using messages shifted between two ‘meanings’. We can thus identify a boundary around a cluster, and its width, providing the necessary setup to investigate CP effects: whether pairs of messages crossing such a boundary behave differently (e.g., are easier to discriminate) than a pair of equally distant messages both on one side of this boundary. 6 Conclusion We propose a general signaling game framework in which fewer a priori assumptions are imposed on the conversational situations. We use both production and perception analyses, and find that under appropriate conditions, which are met by most studies involving neural signaling games, messages become discrete without the analyst having to force this property into the language (and having to deal with non-differentiability issues). We find no evidence of compositional structure using vector analogies and a generalization thereof but do find sharp boundaries between the discrete message clusters. Future work will explore other measures and alternative game settings for the emergence of compositionality, as well as more subtle psychological effects (Categeorical Perception) of continuous biological systems exhibiting discrete structure, like the auditory system. Acknowledgments We acknowledge the funding support from ANR17-EURE-0017, and greatly thank Marco Baroni, Diane Bouchacourt, Rahma Chaabouni, Emmanuel Dupoux, Roni Katzir, Philippe Schlenker, Benjamin Spector, Jakub Szymanik, and three ACL reviewers. 4799 References Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. 2013. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation. Rahma Chaabouni, Eugene Kharitonov, Emmanuel Dupoux, and Marco Baroni. 2019. Anti-efficient encoding in emergent communication. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019). Dawn Chen, Joshua C. Peterson, and Thomas L. Griffiths. 2017. Evaluating vector-space models of analogy. In Proceedings of the 39th Annual Conference of the Cognitive Science Society. Edward Choi, Angeliki Lazaridou, and Nando de Freitas. 2018. Compositional Obverter Communication Learning from Raw Visual Input. In International Conference of Learning Representations (ICLR 2018), pages 1–18. R.I. Damper and S.R. Harnad. 2000. Neural network models of categorical perception. Bart De Boer. 2000. Self-organization in vowel systems. Journal of Phonetics, 28(4):441–465. Martin Ester, Hans-Peter Kriegel, J¨org Sander, and Xiaowei Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, volume 96, pages 226–231. Michael Franke. 2016. The Evolution of Compositionality in Signaling Games. Journal of Logic, Language and Information. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep Sparse Rectifier Neural Networks. In 14th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 315–323. Serhii Havrylov and Ivan Titov. 2017. Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols. In Proceedings of the 31st Conference on Neural Information Processing Systems (NeurIPS 2017). Charles F Hockett. 1960. The Origin of Speech. Scienctific American, 203:88–111. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical Reparameterization with Gumbel-Softmax. In International Conference of Learning Representations (ICLR). Eugene Kharitonov, Rahma Chaabouni, Diane Bouchacourt, and Marco Baroni. 2019. EGG: a toolkit for research on Emergence of lanGuage in Games. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 55–60, Stroudsburg, PA, USA. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In International Conference of Learning Representations (ICLR). Simon Kirby, Monica Tamariz, Hannah Cornish, and Kenny Smith. 2015. Compression and communication in the cultural evolution of linguistic structure. Cognition, 141:87–102. Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. 2018. Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input. In International Conference of Learning Representations (ICLR 2018). Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2017. Multi-Agent Cooperation and the Emergence of (Natural) Language. In International Conference of Learning Representations (ICLR2017). David Lewis. 1969. Convention. Blackwell. Alvin M Liberman, Katherine Safford Harris, Howard S Hoffman, and Belver C Griffith. 1957. The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54(5):358. Tal Linzen. 2016. Issues in evaluating semantic spaces using word analogies. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 13–18, Berlin, Germany. Association for Computational Linguistics. Chris J Maddison, Andriy Mnih, Yee Whye Teh, United Kingdom, and United Kingdom. 2017. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In International Conference of Learning Representations (ICLR). Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781 [cs]. ArXiv: 1301.3781. Volodymyr Mnih, Adri`a Puigdom`enech Badia, Mehdi Mirza, Tim Harley, Timothy P Lillicrap, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous Methods for Deep Reinforcement Learning. In International Conference on Machine Learning (ICML). Igor Mordatch and Pieter Abbeel. 2018. Emergence of Grounded Compositional Language in Multi-Agent Populations. In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018). Vinod Nair and Geoffrey E Hinton. 2010. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning (ICML). 4800 Martin A. Nowak, David C. Krakauer, and Andreas Dress. 1999. An error limit for the evolution of language. Proceedings of the Royal Society B: Biological Sciences, 266(1433):2131–2136. John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. 2015. Gradient Estimation Using Stochastic Computation Graphs. In Advances in Neural Information Processing Systems 28 (NIPS 2015). Brian Skyrms. 2010. Signals: Evolution, Learning, and Information. Oxford University Press. Shane Steinert-Threlkeld. 2016. Compositional Signaling in a Complex World. Journal of Logic, Language and Information, 25(3):379–397. Shane Steinert-Threlkeld. 2018. Paying Attention to Function Words. In Emergent Communication Workshop @ NeurIPS 2018. Shane Steinert-Threlkeld. 2020. Towards the Emergence of Non-trivial Compositionality. Philosophy of Science. Shane Steinert-Threlkeld, Philippe Schlenker, and Emmanuel Chemla. 2020. Referential and General Calls in Primate Semantics. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256.
2020
433
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4801–4811 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4801 Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words Josef Klafka Department of Psychology Carnegie Mellon University [email protected] Allyson Ettinger Department of Linguistics University of Chicago [email protected] Abstract Although models using contextual word embeddings have achieved state-of-the-art results on a host of NLP tasks, little is known about exactly what information these embeddings encode about the context words that they are understood to reflect. To address this question, we introduce a suite of probing tasks that enable fine-grained testing of contextual embeddings for encoding of information about surrounding words. We apply these tasks to examine the popular BERT, ELMo and GPT contextual encoders, and find that each of our tested information types is indeed encoded as contextual information across tokens, often with nearperfect recoverability—but the encoders vary in which features they distribute to which tokens, how nuanced their distributions are, and how robust the encoding of each feature is to distance. We discuss implications of these results for how different types of models break down and prioritize word-level context information when constructing token embeddings. 1 Introduction The field of natural language processing has recently seen impressive performance gains associated with the use of “contextual word embeddings”: high-dimensional vectors that have access to information from the contexts of the words they represent. Models that use these contextual embeddings achieve state-of-the-art performance on a variety of natural language processing tasks, from questionanswering to natural language inference. As of writing, nearly all of the models on the SuperGLUE leaderboard (Wang et al., 2019) use contextual embeddings in their architectures, most notably models building on the BERT (Devlin et al., 2019) and Transformer XL (Dai et al., 2019) models. Despite the clear power afforded by incorporating context into word embeddings, little is known about what information these contextual embeddings actually encode about the words around them. In a sentence like “The lawyer questioned the judge”, does the contextual representation for questioned reflect properties of the subject lawyer? Of the object judge? What determines the information that a contextual embedding absorbs about its surrounding words? In this paper, we address these questions by designing and implementing a suite of probing tasks, to test contextual embeddings for information about syntactic and semantic features of words in their contexts. We use controlled sentences of fixed structure, allowing us to probe for information associated with word categories, and to avoid confounds with particular vocabulary items. We then apply these tests to examine the distribution of contextual information across token representations produced by contextual encoders BERT (Devlin et al., 2019), ELMo (Peters et al., 2018b), and GPT (Radford et al., 2018). The contributions of this paper are twofold. First, we introduce a suite of novel probing tasks for testing how encoders distribute contextual information across sentence tokens. All datasets and code are available for follow-up testing.1 Second, we use these tests to shed light on the distribution of context information in state-of-the-art encoders BERT, ELMo and GPT. We find that these models encode each of our tested word features richly across sentence tokens, often with perfect or near-perfect recoverability, but the details of how the models distribute this information vary across encoders. In particular, bidirectional models show more nuance in information selectivity, while the deeper transformer models show more robustness to distance. Follow-up tests suggest that the effects cannot be chalked up to proximity, and that general word features are encoded more robustly than word identity. 1Probing datasets and code available at https://github.com/jklafka/context-probes. 4802 2 Our approach Our tests address the following basic question: if we probe the contextual representation of a given token in a sentence, how much information can we recover about the other words in that sentence? For example, if we create a contextual embedding for the word questioned in the sentence The lawyer questioned the judge how well can we extract information about the subject noun (lawyer)? What if we probe the object noun (judge) or determiners (the)? We develop tasks to probe representations of each word for various types of information about the other words of the sentence, allowing us to examine with fine granularity how contextual encoders distribute information about surrounding words. We complete this investigation for each word in a set of fixedlength sentences of pre-determined form, which allows us to characterize behaviors based on word categories (e.g., subjects versus verbs). Using this approach, we can examine how the distribution of context information is impacted by a) the type of information being encoded, and b) the properties of the word that the embedding corresponds to. 3 Related work Much work has been done on analyzing the information captured by sentence encoders and language models in general. Classification-based probing tasks have been used to analyze the contents of sentence embeddings (Adi et al., 2016; Conneau et al., 2018; Ettinger et al., 2016), finding that these embeddings encode a variety of information about sentence structure, content, length, etc., though more tightly-controlled tasks suggest weaknesses in capturing basic sentence meaning (Ettinger et al., 2018). Our work uses the same classificationbased probing methodology, but focuses on probing token-level embeddings for context information. Other work has analyzed linguistic capacities of language models by examining output probabilities in context, emulating methods for studying human language processing. Much of this work has studied sensitivity to syntactic dependencies in recurrent neural network language models (e.g. Linzen et al., 2016; Wilcox et al., 2018; Chowdhury and Zamparelli, 2018; Gulordava et al., 2018; Marvin and Linzen, 2018; Futrell et al., 2019). Using similar methods to test syntactic awareness in BERT, Goldberg (2019) finds the model to perform almost at ceiling on syntactic tests. Testing BERT’s outputs on a range of semantic, syntactic and pragmatic information, Ettinger (2020) finds strong sensitivity to syntax, but clear limitations in areas of semantics and pragmatic/commonsense reasoning. We complement this work with a direct focus on the contextual token representations learned by models pre-trained on language modeling, examining the syntactic and semantic information that these embeddings capture about surrounding words. Most directly related to the present work are studies using probing and other methods to analyze information in contextual token embeddings. Some of this research (e.g. Tenney et al., 2019a; Jawahar et al., 2019) finds that BERT encodes more local, syntactic information at lower layers and more global, semantic information at higher layers. Peters et al. (2018a) find that encoders differ in encoding strength for semantic features but all encode these features strongly where possible. Hewitt and Manning (2019) provide evidence that contextual encoders capture sentence-level hierarchical syntactic structures in their representations. Other work (Liu et al., 2019; Tenney et al., 2019b) finds that contextual word encoders struggle to learn fine-grained linguistic information in a variety of contexts. These papers have focused primarily on studying the ability of contextual embeddings to capture information about the full sentence, or about phrases or dependencies of which those contextual embeddings are a part. We focus on mapping the precise distribution of context information across token embeddings, with a systematic, finegrained investigation of the information that each token encodes about each of its surrounding tokens. 4 Probing for contextual information For each of our probing tasks, we test for a particular information type, formulated as a query about a particular target word in the sentence—for instance, “What is the animacy of the subject?” or “What is the tense of the verb?”. We then apply these queries to probe the embeddings for each word of the sentence in turn—we call this the probed word. For example, a test with a probed word of verb, a target word of subject noun, and an information type of animacy would ask the question: “What does the embedding of the verb tell us about the animacy of the subject noun?” We implement each test as a binary classification task (e.g., “animate” vs “inanimate”), and train and test a multi-layer perceptron 4803 Task Example Label (subject) Number The lawyer betrayed the judge. SINGULAR The lawyers betrayed the judge. PLURAL (subject) Gender The waiter betrayed the judge. MASCULINE The waitress betrayed the judge. FEMININE (subject) Animacy The car betrayed the judge. INANIMATE The turtle betrayed the judge. ANIMATE Table 1: Example items from probing tasks for each noun information type. classifier using the embeddings of one probed word category at a time as input for the task. In this section, we describe the details of our probing datasets and tested information types. 4.1 Dataset construction We construct our datasets using generated transitive sentences with a fixed five-word structure: “DET SUBJ-N VB DET OBJ-N”, as in “The lawyer questioned the judge”. For generating these sentences, we draw nouns and verbs from the intersection of the single-word vocabularies of the four tested encoding models, from which we select a set of 100 target words for each task, along with a set of 100 of each other content word type. We select based on the necessary properties for the individual probing tasks (for example, as shown in Table 1, the gender task requires explicitly gendered nouns, and the animacy task requires a balanced set of animate vs inanimate nouns). We constrain our sample to ensure balance between positive and negative labels in training and test sets. The stimuli for each task were checked by the first author, a native English speaker, to confirm plausibility of occurrence in a corpus of English text. The exception to the plausibility rule was the noun animacy task, which required certain implausible noun-verb pairings. We follow Ettinger et al. (2018) in employing controls to keep selected baselines at chance performance—in our case, we ensure that noncontextualized GloVe embeddings (Pennington et al., 2014) are at chance on all tests, except when the probed word is the target word (e.g., when testing “what does the verb embedding tell us about the verb”). This ensures that the tasks must be solved by incorporating contextual information, rather than by spurious cues in the words themselves. Controlling in this way requires attention to inflectional marking. When targeting subject number we use only past tense transitive verbs (which have the same form regardless of subject number) to ensure that no word but the target noun indicates the number information of interest. For each task we generate 4000 training and 1000 test transitive sentences. We generate separate datasets for each target word within an information type—for example, generating separate subject animacy and object animacy datasets. 4.2 Information types We probe for three types of linguistic information about nouns and three types of linguistic information about verbs. We select these as reasonably simple and fundamental syntactic and semantic features at the word level, which are thus good candidates to be encoded in representations for other words in the sentence. With our selections, we aim for diversity in how syntactic or semantic the information is, and in whether the targeted information is overtly marked on the target word itself. Noun information When probing for information about subject and object nouns, we target three types of information: number, gender, and animacy. The number of a noun in English (whether it is singular or plural) is a basic property that has syntactic implications for verb agreement, and that is directly encoded on the surface form of the noun. Gender is a primarily semantic feature, and English nouns sometimes indicate gender in their surface forms (e.g. actor versus actress), but in other cases they do not (e.g. brother versus sister). Recent work has examined gender bias in word embeddings (e.g., Caliskan et al., 2017), further highlighting the importance of understanding how this information is reflected in word representations. Animacy is a semantic property that distinguishes animate entities like humans from inanimate entities like cars, and impacts contextual factors like the kind of verb frames a noun is likely to occur in. Table 1 shows example items from probing tasks for each of these noun information types—in this case with the subject noun as the target word. The 4804 Task Example Label Tense The lawyer betrayed the judge. PAST The lawyer betrays the judge. PRESENT Causative-inchoative The warden melted the ice. (the ice melted) YES ALTERNATION alternation The warden bought the ice. (*the ice bought) NO ALTERNATION Dynamic-stative The lawyer found the judge. DYNAMIC VERB The lawyer observed the judge. STATIVE VERB Table 2: Example items from probing tasks for each verb information type. first line for each task shows an example of a positive label sentence, and the second line shows an example of a negative label sentence. We also design probing tasks that target information about the object noun. These tasks are nearly identical in form to the subject tasks: the target word is simply switched to the object, such that the positive and negative labels are determined by the properties of the object noun rather than the subject noun. Verb information When probing for information about verbs, we target three types of information: tense, presence of a causative-inchoative alternation, and classification of dynamic versus stative verbs. Tense information in English is a largely semantic property with some syntactic implications, and it is marked by morphology on the surface form of a verb. In our probing tasks, we restrict to testing present versus past tense. In our verb tense task, we only use singular subjects, to avoid information about the subject influencing variation in the verb form. Present verbs encoding subject number is the only situation in which information about one word is explicitly marked on another word in our tasks. For all other tasks, we use only past tense verbs, which don’t have surface marking of subject information. The causativeinchoative alternation refers to whether a verb has both a transitive and an intransitive meaning—this is a syntactic/semantic feature that has essential implications for the way that a verb can interact with its context.2 The dynamic-stative feature is a primarily semantic feature referring to whether a verb involves the subject producing a change in the object (dynamic), or communicates a state of the subject and the object (stative). The causativeinchoative and dynamic-stative feature information are not marked on the surface forms of the verb. We have included examples for tasks testing each 2This task is derived from the verb alternation probe of the same name in Warstadt et al. (2019). of these verb information types in Table 2. Determiner information While we do probe for information encoded on our determiner words (the), we do not design tests that treat these determiners as target words. English determiners are a small closed-class set, making it difficult to design datasets with sufficient variety for probing. We leave this problem for future work. 5 Experiments We apply our probing tasks to test for the distribution of contextual information across tokens in three prominent contextual encoders: BERTBASE (Devlin et al., 2019), ELMo (Peters et al., 2018b), and GPT (Radford et al., 2018). BERTBASE is a bidirectional transformer architecture of 12 layers, trained on a novel masked language modeling task of predicting randomly masked tokens using left and right context, as well as a next-sentence prediction task. We probe representations from the model’s final layer, based on results suggesting that BERT’s later layers contain more semantic and abstract information (e.g. Jawahar et al., 2019). ELMo is composed of stacked bidirectional LSTMs, trained by jointly optimizing backwards and forwards language modeling objectives. We use the original version of ELMo with two representation layers, and we probe representations from the second layer, which has also been found to encode more abstract and semantic information (Peters et al., 2018b). GPT is a unidirectional left-to-right 12-layer transformer, also trained on language modeling. Consistent with ELMo and BERT, we probe representations from GPT’s final layer.3 We test the pre-trained versions of these models without fine-tuning, to examine their general-purpose encoding capacities, in line with Peters et al. (2018a). 3We also test the second-to-last layers from each model, and find that the results differ in magnitude from results on the final layer, but show the same overall patterns. 4805 We use these models to embed each of our fiveword sentences, producing contextualized representations for each token. Then for each probing task (e.g., subject animacy, verb tense) we train and test classifiers on the embeddings for a single probed word category (e.g., object noun) at a time. We use several classifier architectures in our probing tasks, in order to explore the impact of classifier complexity on extraction of our target information types. We use a multilayer perceptron classifier with a single hidden layer of 1024 units, as well as a smaller classifier with three layers of 20 units each, and a larger classifier with three layers of 1024 units each. We use the relevant contextual or non-contextual token representations as input for classification. The largest inputs we supply to the classifiers are contextual embeddings with dimension 1024, from ELMo.We use the relevant contextual or non-contextual token representations as input to the classifiers. Finding similar results across classifier architectures, we follow precedent in the literature (Adi et al., 2016; Ettinger et al., 2018) and present results only for our classifier with a single hidden layer. To quantify variance across runs, we repeat this process 50 times for each probed word category on each task.4 As a sanity-check baseline, we also test noncontextual GloVe embeddings (Pennington et al., 2014) on each of our tasks, to establish how well each information type is captured by the noncontextual representation for the relevant word (e.g., does the GloVe embedding for waiters encode the information that waiters is plural? masculine?). We also want to confirm that none of these tasks can be performed by non-contextual embeddings for any of the other words of the sentence, to ensure that the information being tested for is truly contextual. We use 300-dimensional GloVE embeddings, which prove generally adequate to encode all of the targeted word information. 6 Probing task results Figures 1-3 show the results for tasks with subject noun, object noun, and verb target words, respectively (note that although the plots include tokens from an example sentence for purposes of clarity, these are results across all test sentences). Each cluster of adjacent bars of the same shade repre4Training intermittently produced outlier runs with chancelevel or below-chance test accuracy in settings with otherwise strong performance—we omit such runs from consideration. sents the three different tested information types, with left-to-right order of number-gender-animacy for noun target words, and tense-dynamic-causative for verb target words. Distribution of subject noun information Figure 1 shows the distribution of subject noun information across sentence tokens, for all three information types and for our four tested encoders. First, we see that our sanity-check baselines indicate that we control our datasets well: as desired, GloVe embeddings are at chance performance for every probed word apart from the target word itself—on which GloVe performance is good—and GPT is at chance to the left of the target word. This suggests that we are successfully targeting contextual information rather than spurious cues. Once the subject noun is encountered, GPT shows near-perfect recoverability of subject number, gender, and animacy on all of the subsequent tokens, with the strength diminishing slightly as the subject grows more distant. The exception to this strong recoverability is in animacy encoding on the subject noun itself, which is notably weaker: GPT appears to encode more information about subject animacy on the verb and object tokens than on the subject itself. Apart from this, GPT appears to distribute subject information fairly uniformly regardless of information type or probed token. BERT and ELMo, the bidirectional contextual encoders, show more sensitivity to the interaction of information type and probed token. Both strongly encode subject number and animacy on all tokens, though BERT’s encoding of animacy lags behind ELMo’s in places, and both encode weaker subject information on the object noun. As for gender, BERT seemingly disregards subject gender as context information—while subject gender is near perfect recoverability on the subject noun itself, its recoverability is only around 75% on all other BERT tokens. In contrast, while ELMo shows weak subject gender on the subject determiner and subject noun itself, it strongly encodes subject gender on the verb, object determiner, and object noun. Distribution of object noun information Distribution of object noun information is shown in Figure 2. Again, the validity and control of our tests is supported by chance-level performance of GloVe representations on all but the object noun, and of GPT embeddings on every token prior to the object noun. GPT shows surprisingly weak encod4806 Figure 1: Probing task results with subject noun as target word. Vertical ranges show 95% confidence intervals computed with non-parametric bootstrap. Each cluster of adjacent bars of the same shade represents the three different tested information types—from left to right: number, gender, animacy Figure 2: Probing task results with object noun as target word. Vertical ranges show 95% confidence intervals computed with non-parametric bootstrap. Each cluster of adjacent bars of the same shade represents the three different tested information types—from left to right: number, gender, animacy Figure 3: Probing task results with verb as target word. Vertical ranges show 95% confidence intervals computed with non-parametric bootstrap. Each cluster of adjacent bars of the same shade represents the three different tested information types—from left to right: tense, dynamic, causative ing of object noun information even on the object noun embedding—this pattern suggests that GPT embeddings of the object noun actually encode more information about the subject noun several words away than about the object noun itself. BERT shows strong encoding of object number and animacy across tokens, but again sacrifices gender information on tokens apart from the object noun. ELMo also shows strong encoding of object number (with the exception of the subject noun), and of object animacy on the object noun, determiner and verb—but encodes animacy more weakly on the subject words. Unlike the case of subject gender, ELMo joins BERT in showing consistently weaker encoding of object gender. Distribution of verb information Distribution of information about the verb is shown in Fig4807 ure 3. Overall, encoding of verb information is weaker and somewhat more uniform across the sentence than encoding of noun information. BERT and ELMo both strongly encode the causativeinchoative alternation across all tokens of the sentence. For GPT this is also the most strongly encoded feature, and as with subject animacy, it is more strongly encoded on the later words than on the verb itself. For ELMo, the dynamic-stative property is the most weakly encoded property across the sentence (except on the subject noun). For BERT the verb’s tense is the most weakly encoded, consistently lagging behind ELMo’s encoding of verb tense. Among ELMo embeddings, the subject determiner shows surprisingly high performance in encoding of all verb properties. Interim summary GPT shows uniform strong encoding of subject information and solid encoding of verb information on the target and subsequent words—but weak encoding of object information on the object noun. BERT and ELMo show more nuance in their distribution of the information types, with BERT heavily deprioritizing gender information, but strongly encoding animacy and maintaining rich number information for both nouns across all words. ELMo too deprioritizes object gender across tokens, but it shows strong encoding of subject gender after the subject noun, mostly strong encoding of animacy (apart from object animacy on subject words), and consistently rich encoding of number for both nouns. Encoding of verb features is generally weaker than noun features, with BERT weakest on tense, ELMo weakest on dynamic-stative, and all contextual models strongest on the causative-inchoative distinction. 7 Distance manipulation tasks Setup Because our sentences follow a fixed structure for category-specific probing, it is possible that differences in encoding from word to word are an effect of linear distance rather than the syntactic/semantic relationships between the words. We perform a follow-up analysis inspired by a task in Zhang and Bowman (2018), in which the authors investigate the effect of distance from the target word as a factor in how richly recurrent neural networks encode syntactic information. For all of our tasks, we introduce a manipulation to change linear distances between our target and probed words, by splicing relative clauses after the subject and adjectives before the object. For example: The lawyer found the judge. The lawyer who was hungry found the angry and competent judge. For reasons of space, we display only subject task results, in Figure 4. All results may be found in our GitHub repository linked in Footnote 1. Results When we increase linear distances between words, the patterns remain similar to those observed in the five-word sentences. GPT still consistently encodes subject information on each of the tokens after the subject noun is encountered, with the exception of animacy encoding on the subject noun itself. BERT and ELMo still show strong encoding of subject number and animacy across tokens, with BERT dispreferring gender information across tokens and ELMo dispreferring gender only on subject determiner and noun. The main difference is that ELMo shows a marked drop in subject number information (and a bit of a drop in gender and animacy) on the object noun. These results suggest that the observed strong encoding of context information is not simply a function of the proximity of the words in our five-word sentences, given that the strong encoding patterns persist over the longer distances (with the slight exception of ELMo losing some encoding on the object noun). This may indicate syntactic awareness in the models, which would be consistent with the findings of, e.g., Hewitt and Manning (2019) and Tenney et al. (2019b). The results further suggest that the contextual encoders tag information as relevant to specific categories of target words in their contexts, operating flexibly across varying linear distances with different structures. 8 Word identity tasks Setup We aim to show whether the encoders incorporate only more coarse-grained linguistic information in their embeddings, or if encoding is fine-grained enough to memorize the embedding patterns for specific context word identities. We use a variation of the word content task from Conneau et al. (2018) and Adi et al. (2016). The goal of the original word content task is to determine whether a sentence vector representation contains a given word. We adapt this task to test the extent to which contextual embeddings can identify a neighboring word at a given position. We formulate our word identity tasks as “What is the identity of the subject” or “What is the identity of the verb”, 4808 Figure 4: Distance manipulation probing task results with subject as target word. Vertical ranges show 95% confidence intervals computed with non-parametric bootstrap. Each cluster of adjacent bars of the same shade represents the three different tested information types—from left to right: number, gender, animacy Figure 5: Word identity task: labeling identity of subject noun. Vertical ranges show 95% confidence intervals computed with non-parametric bootstrap. etc. As in Section 6, we probe each word position independently, using our fixed five-word sentences. For identity classification, we use a softmax kway classification task, similar to the word content task in Conneau et al. (2018). The classifier for this task must choose which of the k output words is in the target position of the sentence. In pre-testing, we found best overall performance with a 30-way classification, for which we present the results here. Smaller and larger k produce similar patterns of results, but performance overall decreases. Results We display results for probing subject noun identity (“what is the identity of the subject noun”) in Figure 5. This proves to be a challenging task, but we see clear trends suggesting that our encoders pick up on word identity signals. As before, GloVe embeddings are at chance on all but the target subject noun, and GPT embeddings are at chance for tokens to the left of the subject noun, satisfying our sanity checks. On the subject noun itself, encoders show comparably high recoverability of word identity, with BERT standing out as the strongest. GPT and ELMo see a slight boost in recoverability of subject identity on the verb, and GPT surprisingly shows the most subject identity information on the object determiner. BERT representations retain consistently strong subject identity encoding throughout the sentence, as do GPT embeddings starting with the subject noun itself—but ELMo encoding of subject identity drops off sharply on the determiners and object noun. This suggests that information about surrounding word identities is distributed fairly evenly across sentence tokens for BERT and GPT, but ELMo keeps word identity information fairly local to the word position itself. Probing for the identity of the verb and of the object produces analogous patterns of results. In particular, GLoVe embeddings are at chance on all words but the target word, while GPT embeddings are at chance before the target word, and pattern similarly to BERT afterwards in the object identity task. BERT is strong throughout, while ELMo shows more effect of distance from the target word. While identity classification performance here is far above chance, it is also well below 100%. It is possible that performance will increase with a 4809 stronger classifier, but it is also likely that encoding of context information at the granularity of word identity is not practical or necessary for contextual embeddings, such that they more strongly encode relevant context word features rather than word identities themselves, as these results suggest. 9 Discussion The results presented here shed light on how different contextual encoders distribute information across token embeddings of a sentence. While we cannot draw strong conclusions about causal relations between model properties and the observed patterns, we can make broad connections between the two to inform future investigations. Overall, the deeper, transformer-based architectures of BERT and GPT do not produce dramatic differences in distribution of information relative to the shallower ELMo model—the main difference observed with ELMo’s shallower recurrent architecture is a bit of a drop in information (particularly number and word identity) over longer distances, where BERT and GPT retain strong encoding. This is not necessarily surprising, given the potential of the self-attention mechanism to capture long-distance connections—it is perhaps more surprising that ELMo shows so little difference overall. These patterns suggest that deeper transformer models may not be critical for encoding and distributing these types of context information, except perhaps over substantial distances. BERT and ELMo, the models that use bidirectional context, generally pattern more similarly to each other than to GPT, particularly in strongly encoding number and animacy over gender, and encoding number strongest overall for nouns; GPT shows more uniformity in encoding noun information (at least from the subject noun). This pattern suggests that using bidirectional versus unidirectional context has more impact on distribution of context information than does depth or architecture type. GPT’s poor encoding of object information relative to subject and verb information further suggests that the left-to-right architecture may prioritize earlier information over later information. As for the two bidirectional models, what BERT’s particular properties seem to give it over ELMo, beyond more robustness to distance, is slightly different selectivity—dropping subject gender information earlier than ELMo does, while keeping object animacy information at a longer distance, and dropping verb tense information a bit more. Given BERT’s generally stronger performance on downstream tasks, this suggests that BERT’s masked language modeling setup, in tandem with its greater capacity to handle longer distances, allows for a more nuanced picture of how bidirectional context information should be distributed across tokens for optimal predictive power. 10 Conclusion In this paper we have begun to tackle a key question in our understanding of the contextual embeddings on which most current state-of-the-art NLP models are founded: what is it that contextual embeddings pick up about the words in their contexts? We have introduced a novel probing approach and a suite of tasks through which we have performed systematic, fine-grained probing of contextual token embeddings for information about features of their surrounding words. We apply these tests to examine the distribution of contextual information across sentence tokens for popular contextual encoders BERT, ELMo, and GPT. We find that each of the tested word features can be encoded in contextual embeddings for other words of the sentence, often with perfect or nearperfect recoverability. However, we see substantial variation across encoders in how robustly each information type is distributed to which tokens. Distance manipulations indicate that the observed rich contextual encoding is not an artifact of proximity between words, and probing for information about context word identities suggests a weaker encoding of identity information than of more abstract word feature information. Bidirectional context appears to impact distribution patterns more than depth or architecture, though the transformer models show more robustness to distance. Overall, these results help to clarify the patterns of distribution of context information within contextual embeddings— future work can further clarify the impact of more diverse syntactic relations between words, and of additional types of word features. We make all datasets and code available for additional testing. Acknowledgments We would like to thank Itamar Francez and Sam Wiseman for helpful discussion, and anonymous reviewers for their valuable feedback. This material is based upon work supported by the National Science Foundation under Award No. 1941160. 4810 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. Proceedings of the International Conference on Learning Representations (ICLR). Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Shammur Absar Chowdhury and Roberto Zamparelli. 2018. RNN simulations of grammaticality judgments on long-distance dependencies. In Proceedings of the 27th international conference on computational linguistics, pages 133–144. Alexis Conneau, German Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48. Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceedings of the 27th International Conference on Computational Linguistics. Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134–139. Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32–42. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. arXiv preprint arXiv:1901.05287. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Ganesh Jawahar, Benoˆıt Sagot, and Djam´e Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651–3657. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Nelson F Liu, Matt Gardner, Yonatan Belinkov, Matthew E Peters, and Noah A Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GLoVe: Global vectors for word representation. In Proceedings of the 2014 conference on Empirical Methods in Natural Language Processing (EMNLP). Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018a. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. 4811 Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipanjan Das, et al. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. Proceedings of the International Conference on Learning Representations (ICLR). Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, pages 3261–3275. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler–gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211–221. Kelly Zhang and Samuel Bowman. 2018. Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.
2020
434
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4812–4822 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4812 Dense-Caption Matching and Frame-Selection Gating for Temporal Localization in VideoQA Hyounghun Kim Zineng Tang Mohit Bansal UNC Chapel Hill {hyounghk, terran, mbansal}@cs.unc.edu Abstract Videos convey rich information. Dynamic spatio-temporal relationships between people/objects, and diverse multimodal events are present in a video clip. Hence, it is important to develop automated models that can accurately extract such information from videos. Answering questions on videos is one of the tasks which can evaluate such AI abilities. In this paper, we propose a video question answering model which effectively integrates multi-modal input sources and finds the temporally relevant information to answer questions. Specifically, we first employ dense image captions to help identify objects and their detailed salient regions and actions, and hence give the model useful extra information (in explicit textual format to allow easier matching) for answering questions. Moreover, our model is also comprised of duallevel attention (word/object and frame level), multi-head self/cross-integration for different sources (video and dense captions), and gates which pass more relevant information to the classifier. Finally, we also cast the frame selection problem as a multi-label classification task and introduce two loss functions, In-andOut Frame Score Margin (IOFSM) and Balanced Binary Cross-Entropy (BBCE), to better supervise the model with human importance annotations. We evaluate our model on the challenging TVQA dataset, where each of our model components provides significant gains, and our overall model outperforms the stateof-the-art by a large margin (74.09% versus 70.52%). We also present several word, object, and frame level visualization studies.1 1 Introduction Recent years have witnessed a paradigm shift in the way we get our information, and a lot of it 1Our code is publicly available at: https://github.com/hyounghk/ VideoQADenseCapFrameGate-ACL2020 is related to watching and listening to videos that are shared in huge amounts via the internet and new high-speed networks. Videos convey a diverse breadth of rich information, such as dynamic spatiotemporal relationships between people/objects, as well as events. Hence, it has become important to develop automated models that can accurately extract such precise multimodal information from videos (Tapaswi et al., 2016; Maharaj et al., 2017; Kim et al., 2017; Jang et al., 2017; Gao et al., 2017; Anne Hendricks et al., 2017; Lei et al., 2018, 2020). Video question answering is a representative AI task through which we can evaluate such abilities of an AI agent to understand, retrieve, and return desired information from given video clips. In this paper, we propose a model that effectively integrates multimodal information and locates the relevant frames from diverse, complex video clips such as those from the video+dialogue TVQA dataset (Lei et al., 2018), which contains questions that need both the video and the subtitles to answer. When given a video clip and a natural language question based on the video, naturally, the first step is to compare the question with the content (objects and keywords) of the video frames and subtitles, then combine information from different video frames and subtitles to answer the question. Analogous to this process, we apply dual-level attention in which a question and video/subtitle are aligned in word/object level, and then the aligned features from video and subtitle respectively are aligned the second time at the frame-level to integrate information for answering the question. Among the aligned frames (which contain aggregated video and subtitle information now), only those which contain relevant information for answering the question are needed. Hence, we also apply gating mechanisms to each frame feature to select the most informative frames before feeding them to the classifier. 4813 Next, in order to make the frame selection more effective, we cast the frame selection sub-task as a multi-label classification task. To convert the time span annotation to the label for each frame, we assign a positive label (‘1’) to frames between the start and end points, and negative (‘0’) label to the others, then train them with the binary crossentropy loss. Moreover, for enhanced supervision from the human importance annotation, we also introduce a new loss function, In-and-Out Frame Score Margin (IOFSM), which is the difference in average scores between in-frames (which are inside the time span) and out-frames (which are outside the time span). We empirically show that these two losses are complementary when they are used together. Also, we introduce a way of applying binary cross-entropy to the unbalanced dataset. As we see each frame as a training example (positive or negative), we have a more significant number of negative examples than positive ones. To balance the bias, we calculate normalized scores by averaging the loss separately for each label. This modification, which we call balanced binary crossentropy (BBCE), helps adjust the imbalance and further improve the performance of our model. Finally, we also employ dense captions to help further improve the temporal localization of our video-QA model. Captions have proven to be helpful for vision-language tasks (Wu et al., 2019; Li et al., 2019; Kim and Bansal, 2019) by providing additional, complementary information to the primary task in descriptive textual format. We employ dense captions as an extra input to our model since dense captions describe the diverse salient regions of an image in object-level detail, and hence they would give more useful clues for question answering than single, non-dense image captions. Empirically, our first basic model (with duallevel attention and frame-selection gates) outperforms the state-of-the-art models on TVQA validation dataset (72.53% as compared to 71.13% previous state-of-the-art) and with the additional supervision via the two new loss functions and the employment of dense captions, our model gives further improved results (73.34% and 74.20% respectively). These improvements from each of our model components (i.e., new loss functions, dense captions) are statistically significant. Overall, our full model’s test-public score substantially outperforms the state-of-the-art score by a large margin of 3.57% (74.09% as compared to 70.52%).2 Also, our model’s scores across all the 6 TV shows are more balanced than other models in the TVQA leaderboard3, implying that our model should be more consistent and robust over different genres/domains that might have different characteristics from each other. Our contributions are four-fold: (1) we present an effective model architecture for the video question answering task using dual-level attention and gates which fuse and select useful spatial-temporal information, (2) we employ dense captions as salient-region information and integrate it into a joint model to enhance the videoQA performance by locating proper information both spatially and temporally in rich textual semi-symbolic format, (3) we cast the frame selection sub-task as a multilevel classification task and introduce two new loss functions (IOFSM and BBCE) for enhanced supervision from human importance annotations (which could be also useful in other multi-label classification settings), and (4) our model’s score on the test-public dataset is 74.09%, which is around 3.6% higher than the state-of-the-art result on the TVQA leaderboard (and our model’s scores are more balanced/consistent across the diverse TV show genres). We also present several ablation and visualization analyses of our model components (e.g., the word/object-level and the frame-level attention). 2 Related Work Visual/Video Question Answering Understanding visual information conditioned on language is an important ability for an agent who is supposed to have integrated intelligence. Many tasks have been proposed to evaluate such ability, and visual question answering is one of those tasks (Antol et al., 2015; Lu et al., 2016; Fukui et al., 2016; Xu and Saenko, 2016; Yang et al., 2016; Zhu et al., 2016; Goyal et al., 2017; Anderson et al., 2018). Recently, beyond question answering on a single image, attention to understanding and extracting information from a sequence of images, i.e., a video, is rising (Tapaswi et al., 2016; Maharaj et al., 2017; Kim et al., 2017; Jang et al., 2017; Lei et al., 2018; Zadeh et al., 2019; Lei et al., 2020; Garcia et al., 2020). Answering questions on videos requires an 2At the time of the ACL2020 submission deadline, the publicly visible rank-1 entry was 70.52%. Since then, there are some new entries, with results up to 71.48% (compared to our 74.09%). 3https://competitions.codalab.org/competitions/20415#results 4814 Local Gate Frame Score Margin Inside Frames Outside Frames Frame Level Att. Q-A Subtitle Word/Object Level Att. Q: What is Castle doing when Kate pulls up in her car ?" A: Petting a dog Beckett : That s too bad. You two make a cute couple. a1 a2 a3 a4 a5 ... 0 0 0 0 1 1 1 1 1 0 0 0 ... Inside Frames Outside Frames Features Dual-Level Attention New Loss Supervision [IOFSM/BBCE] Video-DenseCapt. Integration ... the dog is brown the hand of a person a light on the wall the man is wearing a black shirt a man is sitting Q: What is Castle doing when Kate pulls up in her car ?" A: Petting a dog Beckett : What's up, Castle? You proposing? Oh, no. Just waiting for you. Beckett : That 's too bad. You two make a cute couple. Softmax Softmax qa0 qa1 qai qaTqa ... ... st0 st1 stj stTst ... ... sv0 sv1 svk svT ... ... sd0 sd1 sdl sdT ... ... Softmax Softmax qa0 qa1 qai qaTqa ... ... st0 st1 stj stTst ... ... A B C D .... E F G . . what is cathy doing with her hand after she introduces her fiance to ted ? she is doing sign language . Before After before after Q-A SUB Softmax Softmax ... ... Softmax Softmax ... ... ... ... Softmax Softmax ... ... ... ... sv0 sv1 svk svT ... ... sd0 sd1 sdl sdT ... ... Multi-Head Self Attention Video Q-A Subtitle Dense Capt Q-A Subtitle Word/Object Level Att. Word/Object Level Att. Word/Object Level Att. Word/Object Level Att. Frame-Level Att. Frame-Level Att. Multi-Heads Self-Cross Att. Max-Pool Global Gate Local Gate Classifier Multi-Label Classifier Frame Score Margin Q-A SUB VID Q-A SUB-QA VID-QA ... ... Frame-Selection Gates Figure 1: Our model consists of three parts: Dual-Level Attention, Video-DenseCapt Integration, and FrameSelection Gates. The new loss functions (IOFSM/BBCE) also help improve the model with enhanced supervision. understanding of temporal information as well as spatial information, so it is more challenging than a single image question answering. Temporal Localization Temporal localization is a task that is widely explored in event/object detection in video context. There has been work that solely processes visual information to detect objects/actions/activity (Gaidon et al., 2013; Weinzaepfel et al., 2015; Shou et al., 2016; Dai et al., 2017; Shou et al., 2017). At the same time, work on natural language-related temporal localization task is less explored with recent work that focuses on the retrieval of a certain moment in a video by natural language (Anne Hendricks et al., 2017; Gao et al., 2017). With deliberately designed gating and attention mechanisms, our work, in general, will greatly contribute to the task of temporal localization, especially under natural language context and multimodal data. Dense Image Captioning Image captioning is another direction of understanding visual and language information jointly. Single-sentence captions (Karpathy and Fei-Fei, 2015; Anderson et al., 2018) capture the main concept of an image to describe it in a single sentence. However, an image could contain multiple aspects that are important/useful in different ways. Dense captions (Johnson et al., 2016; Yang et al., 2017) and paragraph captions (Krause et al., 2017; Liang et al., 2017; Melas-Kyriazi et al., 2018) have been introduced to densely and broadly capture the diverse aspects and salient regions of an image. Especially, dense caption describes an image in object level and gives useful salient regional information about objects such as attributes and actions. In this paper, we take advantage of this dense caption’s ability to help our video QA model understand an image better for answering questions. 3 Model Our model consists of 2 parts: feature fusion and frame selection. For feature fusion, we introduce dual-level (word/object and frame level) attention, and we design the frame selection problem as a multi-label classification task and introduce 2 new loss functions for enhanced supervision (Figure 1). 3.1 Features We follow the same approach of Lei et al. (2020)’s work to obtain features from video, questionanswer pairs, and subtitle input and encode them. We sample frames at 0.5 fps and extract object features from each frame via Faster R-CNN (Girshick, 2015). Then we use PCA to get features of 300 dimension from top-20 object proposals. We also create five hypotheses by concatenating a question feature with each of five answer features, and we pair each visual frame feature with temporally neighboring subtitles. We encode all the features using convolutional encoder. φen(x) :            x0 0 = Epos(x) xi t = fi,t(xi t−1) + xi t−1, fi(xi 0) = gn(xi L) y = fN ◦... ◦f1(x0 0) (1) where Epos denotes positional encoding, fi,t convolution preceded by Layer Normalization and followed by ReLU activation, and gn the layer normalization. The encoder is composed of N blocks iterations. In each iteration, the encoded inputs are transformed L times of convolutions. The L is set to 2, and N to 1 in our experiment (Figure 2). 3.2 Dual-Level Attention In dual-level attention, features are sequentially aligned in word/object-level and frame-level (Figure 3). 4815 Input Embedding Position Encoding Layer Norm Convolution ReLu Layer Norm Figure 2: CNN encoder. We use this block to encode all the input features. Word/Object-Level Attention The QA feature, qa = {qa0, qa1, .., qaTqa}, are combined with subtitle feature, st = {st0, st1, .., stTst}, and visual feature, vt = {vt0, vt1, .., vtTvt}, of t-th frame respectively via word/object-level attention. To be specific, we calculate similarity matrices following Seo et al. (2017)’s approach, Sv t ∈RTqa×Tst and Ss t ∈RTqa×Tvt, from QA/subtitle and QA/visual features respectively. From the similarity matrices, attended subtitle features are obtained and combined with the QA features by concatenating and applying a transforming function. Then, maxpooling operation is applied word-wise to reduce the dimension. (Ss t )ij = qa⊤ i stj (2) satt t = softmax(Ss t ) · st (3) qam s = maxpool(f1([qa; satt t ; qa ⊙satt t ])) (4) where f1 is a fully-connected layer followed by ReLU non-linearity. The same process is applied to the QA features. qaatt = softmax(Ss⊤ t ) · qa (5) sm t = maxpool(f1([st; qaatt; st ⊙qaatt])) (6) The fused features from different directions are integrated by concatenating and being fed to a function as follows: sw t = f2([qam s ; sm t ; qam s ⊙sm t ; qam s + sm t ]) (7) where f2 is the same function as f1 with non-shared parameters. All this process is also applied to visual features to get word/object-level attended features. vw t = f2([qam v ; vm t ; qam v ⊙vm t ; qam v + vm t ]) (8) Softmax Softmax qa0 qa1 qai q ... ... what is cathy doing with her hand after she introduces her fiance to ted ? she is doing sign language . Q-A Softmax Softmax ... ... Softmax Softmax ... ... ... ... Softmax Softmax ... ... ... ... Q: what is cathy doing with her hand after she introduces her fiance to ted ? A: she is doing sign language . Q-A SUB VID Q-A SUB-QA VID-QA ... ... Figure 3: Dual-Level Attention. Our model performs two-level attentions (word/object and frame level) sequentially. In the word/object-level attention, each word/object is aligned to relevant words or objects. In the frame-level attention, each frame (which has integrated information from the word/object-level attention) is aligned to relevant frames. Frame-Level Attention The fused features from word/object-level attention are integrated framewise via frame-level attention. Similar to the word/object-level attention, a similarity matrix, S ∈RTF ×TF , is calculated, where TF is the number of frames. Also, from the similarity matrix, attended frame-level features are calculated. (S)kl = sw⊤ k vw l (9) satt = softmax(S) · sw + sw (10) ˆv = f3([vw; satt; vw ⊙satt; vw + satt]) (11) vatt = softmax(S⊤) · vw + vw (12) ˆs = f3([sw; vatt; sw ⊙vatt; sw + vatt]) (13) where f3 is the same function as f1 and f2 with non-shared parameters. The frame-wise attended features are added to get an integrated feature. usv = ˆs + ˆv (14) 3.3 Video and Dense Caption Integration We also employ dense captions to help further improve the temporal localization of our video-QA model. They provide more diverse salient regional information (than the usual single non-dense image captions) about object-level details of image frames in a video clip, and also allow the model to explicitly (in textual/semi-symbolic form) match keywords/patterns between dense captions and questions to find relevant locations/frames. 4816 l sdT ... n ... ... ... ... before after Multi-Head Self Attention Frame-Level Att. Frame-Level Att. Esposito : Upstairs. go. Unkname : Carol! Frame 20 Frame 25 Figure 4: Self-Cross Attention. We combine information each from the video (fused with subtitle and QA) and dense caption (fused with subtitle and QA) via the multi-head self attention. Before being fed to the multihead self attention module, video and dense caption features are concatenated. Thus, self and cross attentions are performed simultaneously. We apply the same procedure to the dense caption feature by substituting video features with dense caption features to obtain usd. To integrate usv and usd, we employ multi-head self attention (Figure 4). To be specific, we concatenate usv and usd frame-wise then feed them to the self attention function. φself-att(x) ( hi = ga(w⊤ q xi, w⊤ k xi, w⊤ v xi) y = w⊤ m[h1; . . . ; hk] (15) where ga denotes self-attention. usvd = φself-att([usv; usd]) (16) In this way, usv and usd attend to themselves while attending to each other simultaneously. We split the output, usvd into the same shape as the input, then add the two. z = usvd[0 : TF ] + usvd[TF : 2TF ] (17) 3.4 Frame-Selection Gates To select appropriate information from the framelength features, we employ max-pooling and gates. Features from the video-dense caption integration are fed to the CNN encoder. A fully-connected layer and sigmoid function are applied sequentially to the output feature to get frame scores that indicate how relevant each frame is for answering a given question. We get weighted features by multiplying the output feature from the CNN encoder with the scores. ˆz = φen2(z) (18) gL = sigmoid(fL(ˆz)) (19) zgl = ˆz ⊙gL (20) We calculate another frame scores with a different function fG to get another weighted feature. gG = sigmoid(fG(ˆz)) (21) zgg = ˆz ⊙gG (22) Finally, following Lei et al. (2020)’s work, we also apply frame-wise max-pooling. zmax = maxpool(ˆz) (23) The three features (from local gate, global gate, and max-pooling, respectively), are then concatenated and fed to the classifier to give scores for each candidate answer. logit = clssifier([zmax; zgg; zgl]) (24) We get the logits for the five candidate answers and choose the highest value as the predicted answer. losscls = −log( esg P k esk ) (25) where sg is the logit of ground-truth answer. 3.5 Novel Frame-Selection Supervision Loss Functions We cast frame selection as a multi-label classification task. The frame scores from the local gate, gL, are supervised by human importance annotations, which are time spans (start-end points pair) annotators think needed for selecting correct answers. To this end, we transform the time span into groundtruth frame scores, i.e., if a frame is within the time span, the frame has ‘1’ as its label and a frame outside the span gets ‘0’. In this way, we can assign a label to each frame, and frames should get as close scores as their ground-truth labels. We train the local gate network with binary cross-entropy (BCE) loss. lossbce = − TF X i (ylog(sf i ) + (1 −y)log(1 −sf i )) (26) where sf i is a frame score of i-th frame, and y is a corresponding ground-truth label. 4817 In-and-Out Frame Score Margin For additional supervision other than the binary crossentropy loss, we create a novel loss function, Inand-Out Frame Score Margin (IOFSM). lossio = 1 + Avg(OFS) −Avg(IFS) (27) where OFS (Out Frame Score) is scores of frames whose labels are ‘0’ and IFS (In Frame Score) is scores of frames whose labels are ‘1’. Balanced Binary Cross-Entropy In our multilabel classification setting, each frame can be considered as one training example. Thus, the total number of examples and the proportion between positive and negative examples vary for every instance. This variation can cause unbalanced training since negative examples usually dominate. To balance the unbalanced training, we apply a simple but effective modification to the original BCE, and we call it Balanced Binary Cross-Entropy (BBCE). To be specific, instead of summing or averaging through the entire frame examples, we divide the positive and negative examples and calculate the average cross-entropy scores separately, then sum them together. lossbbce = −  TFin X i log(sfin i )/TFin + TFout X j log(1 −sfout j )/TFout  (28) where sfin i and sfout j are i-th in-frame score and j-th out-frame score respectively, and TFin and TFout are the number of in-frames and out-frames respectively. Thus, the total loss is: loss = losscls + loss(b)bce + lossio (29) 4 Experimental Setup TVQA Dataset TVQA dataset (Lei et al., 2018) consists of video frames, subtitles, and questionanswer pairs from 6 TV shows. The number of examples for train/validation/test-public dataset are 122,039/15,253/7,623. Each example has five candidate answers with one of them the ground-truth. 4At the time of the ACL2020 submission deadline, the publicly visible rank-1 entry was 70.52%. Since then, two more entries have appeared in the leaderboard; however, our method still outperforms their scores by a large margin (71.48% and 71.13% versus 74.09%). So, TVQA is a classification task, in which models select one from the five candidate answers, and models can be evaluated on the accuracy metric. Dense Captions We use Yang et al. (2017)’s pretrained model to extract dense captions from each video frame. We extract the dense captions in advance and use them as extra input data to the model.5 Training Details We use GloVe (Pennington et al., 2014) word vectors with dimension size of 300 and RoBERTa (Liu et al., 2019) with 768 dimension. The dimension of the visual feature is 300, and the base hidden size of the whole model is 128. We use Adam (Kingma and Ba, 2015) as the optimizer. We set the initial learning rate to 0.001 and drop it to 0.0002 after running 10 epochs. For dropout, we use the probability of 0.1. 5 Results and Ablation Analysis As seen from Table 1, our model outperforms the state-of-the-art models in the TVQA leaderboard. Especially our model gets balanced scores for all the TV shows while some other models have high variances across the shows. As seen from Table 2, the standard deviation and ‘max-min’ value over our model’s scores for each TV show are 0.65 and 1.83, respectively, which are the lowest values among all models in the list. This low variance could mean that our model is more consistent and robust across all the TV shows. Model Ablations As shown in Table 3, our basic dual-attention and frame selection gates model shows substantial improvement over the strong single attention and frame span baseline (row 4 vs 1: p < 0.0001), which is from the best published model (Lei et al., 2020). Each of our dual-attention and frame selection gates alone shows a small improvement in performance than the baseline (row 3 vs 1 and 2 vs 1, respectively).6 However, when they are applied together, the model works much better. The reason why they are more effective when put together is that frame selection gates basically select frames based on useful information 5This is less computationally expensive and dense captions from the separately trained model will be less biased towards the questions of TVQA dataset, and hence provide more diverse aspects of image frames of a video clip. 6Although the improvements are not much, but performing word/object level attention and then frame level attention is more intuitive and interpretable than a non-dual-attention method, allowing us to show how the model works: see visualization in Sec. 6. 4818 Model Test-Public (%) Val (%) all bbt friends himym grey house castle 1 jacobssy (anonymous) 66.01 68.75 64.98 65.08 69.22 66.45 63.74 64.90 2 multi-stream (Lei et al., 2018) 66.46 70.25 65.78 64.02 67.20 66.84 63.96 65.85 3 PAMN (Kim et al., 2019b) 66.77 66.38 4 Multi-task (Kim et al., 2019a) 67.05 66.22 5 ZGF (anonymous) 68.77 68.90 6 STAGE (Lei et al., 2020) 70.23 70.50 7 akalsdnr (anonymous) 70.52 71.49 67.43 72.22 70.42 70.83 72.30 71.13 8 Ours (hstar) 74.09 74.04 73.03 74.34 73.44 74.68 74.86 74.20 Table 1: Our model outperforms the state-of-the-art models by a large margin. Moreover, the scores of our model across all the TV shows are more balanced than the scores from other models, which means our model is more consistent/robust and not biased to the dataset from specific TV shows.4 Model TV Show Score avg. std. max-min 1 jacobssy (anonymous) 66.37 2.01 5.48 2 multi-stream (Lei et al., 2018) 66.34 2.15 6.29 3 akalsdnr (anonymous) 70.78 1.65 4.87 4 Ours 74.07 0.65 1.83 Table 2: Average and standard deviation of the testpublic scores from each TV show (for this comparison, we only consider models that release the scores for each TV show).8 Model Val Score (%) 1 Single-Att + Frame-Span 69.86 2 Single-Att + Frame-Selection Gates 70.08 3 Dual-Att + Frame-Span 70.20 4 Dual-Att + Frame-Selection Gates (w/o NewLoss) 71.26 5 Dual-Att + Frame-Selection Gates 72.51 6 Dual-Att + Frame-Selection Gates (w/o NewLoss) + RoBERTa 72.53 7 Dual-Att + Frame-Selection Gates + RoBERTa 73.34 8 Dual-Att + Frame-Selection Gates + RoBERTa + DenseCapts 74.20 Table 3: Model Ablation: our dual-attention / frameselection Gates, new loss functions, and dense captions help improve the model’s performance (NewLoss: IOFSM+BBCE). from each frame feature and our dual-attention can help this selection by getting more relevant information to each frame through the frame-level attention. Next, our new loss functions significantly help over the dual-attention and frame selection gates model by providing enhanced supervision (row 5 vs 4: p < 0.0001, row 7 vs 6: p < 0.005). Our RoBERTa version is also significantly better than the GloVe model (row 6 vs 4: p < 0.0005, row 7 vs 5: p < 0.01). Finally, employing dense captions further improves the performance via useful textual clue/keyword matching (row 8 vs 7: p < 0.005).7 7Statistical significance is computed using the bootstrap test (Efron and Tibshirani, 1994). 8Two more entries have appeared in the leaderboard since the ACL2020 submission deadline. However, our scores are still more balanced than their scores across all TV shows (std.: 2.11 and 2.40 versus our 0.65, max-min: 5.50 and 7.38 versus our 1.83). Loss Val Score (%) IFS OFS avg std avg std 1 BCE 71.26 0.468 0.108 0.103 0.120 2 IOFSM 70.75 0.739 0.127 0.143 0.298 3 BCE+IOFSM 72.22 0.593 0.128 0.111 0.159 4 BBCE 72.27 0.759 0.089 0.230 0.231 5 BBCE+IOFSM 72.51 0.764 0.098 0.182 0.246 Table 4: IOFSM and BBCE help improve the model’s performance by changing in and out-frame scores. IOFSM and BCE Loss Functions Ablation and Analysis To see how In-and-Out Frame Score Margin (IOFSM) and Binary Cross-Entropy (BCE) loss affect the frame selection task, we compare the model’s performance/behaviors according to the combination of IOFSM and BCE. As shown in Table 4, applying IOFSM on top of BCE gives a better result. When we compare row 1 and 3 in Table 4, the average in-frame score of BCE+IOFSM is higher than BCE’s while the average out-frame scores of both are almost the same. This can mean two things: (1) IOFSM helps increase the scores of in-frames, and (2) increased in-frame scores help improve the model’s performance. On the other hand, when we compare row 1 and 2, the average in-frame score of IOFSM is higher than BCE’s. But, the average out-frame score of IOFSM is also much higher than BCE’s. This can mean that out-frame scores have a large impact on the performance as well as in-frame scores. This is intuitively reasonable. Because information from out-frames also flows to the next layer (i.e., classifier) after being multiplied by the frame scores, the score for the ‘negative’ label also has a direct impact on the performance. So, making the scores as small as possible is also important. Also, when we compare the row 2 and others (2 vs. 1 and 3), the gap between in-frame scores is much larger than the gap between out-frame scores. But, considering the scores are average values, and the number of out-frames is usually much larger than in-frames, 4819 the difference between out-frame scores would affect more than the gap itself. Balanced BCE Analysis We can see from row 1 and 4 of the Table 4 that BBCE shift the average scores of both in-frames and out-frames to higher values. This can show that scores from the BCE loss are biased to the negative examples, and BBCE can adjust the bias with the separate averaging. The score shift can help improve the model’s performance. But, when comparing row 2 and 4, the outframe scores of BBCE are higher than IOFSM, and this may imply that the result from BBCE should be worse than IOFSM since out-frame scores have a large impact on the performance. However, as we can see from row 2, the standard deviation of IOFSM’s out-frame scores is larger than BBCE. This could mean that a model with IOFSM has an unstable scoring behavior, and it could affect the performance. As seen from row 5, applying BBCE and IOFSM together gives further improvement, possibly due to the increased in-frame scores and decreased out-frame scores while staying around at a similar standard deviation value. 6 Visualizations In this section, we visualize the dual-level attention (word/object and frame level) and the frame score change by new losses application (for all these attention examples, our model predicts the correct answers). Word/Object-Level Attention We visualize word-level attention in Figure 5. In the top example, the question and answer pair is “Where sat Rachel when holding a cup?” - “Rachel sat on a couch”. Our word/object-level attention between QA pair and dense caption attend to a relevant description like ‘holding a glass’ to help answer the question. In the middle example, the question and answer pair is, “How did Lance react after Mandy insulted his character?” - “Lance said he would be insulted if Mandy actually knew anything about acting”. Our word/object-level attention between QA pair and subtitle properly attend to the most relevant words such as ‘insulted’, ‘knew’, and ‘acting’ to answer the question. In the bottom example, the question and answer pair is, “What is Cathy doing with her hand after she introduces her fiance to Ted?” - “She is doing sign language”. From the score of our word/object-level attention, the model aligns the word ‘sign’ to the woman’s hand ding oding rm on rm Q: What is Cathy doing with her hand after she introduces her fiance to Ted? A: She is doing sign language. Figure 5: Visualization of word/object level attention. Top: words from a question-answer pair to words from dense captions alignment. Middle: words from a question-answer pair to words from subtitles alignment. Bottom: words from a question-answer pair to regions (boxes) from an image (only boxes with top 1 scores from each word are shown). to answer the question. Frame-Level Attention As shown in Figure 6, our frame-level attention can align relevant frames from different features. In the example, the question and answer pair is “Where did Esposito search after he searched Carol’s house downstairs?” - “Upstairs”. To answer this question, the model needs to find a frame in which ‘he (Esposito) searched Carol’s house downstairs’, then find a frame which has a clue for ‘where did Esposito search’. Our frame-level attention can properly align the information fragments from different features (Frame 20 and 25) to help answer questions. Frame Score Enhancement by New Losses As seen in Figure 7, applying our new losses (IOFSM+BBCE) changes the score distribution 4820 Q: Where did Esposito search after he searched Carol 's house downstairs? A: Upstairs. Esposito : Upstairs. go. Unkname : Carol! Frame 20 Frame 25 Figure 6: Visualization of frame-level attention. Frame 25 (which contains ‘upstairs’) from subtitle features and frame 20 (which shows ‘downstairs’ by banister upward) from visual features are aligned. mes svT sd0 sd1 sdl sdT ... ... ead Self Attention sv0 sv1 svk svT ... ... sd0 sd1 sdl sdT ... ... Multi-Head Self Attention before after Figure 7: Visualization of distribution change in frame selection scores. Left: the score distribution before applying new losses (IOFSM+BBEC). Right: the score distribution after applying the losses. Scores neighboring in-frame (gray) are increased. For this example, the model does not predict the right answer before applying the losses, but after training with the losses, the model chooses the correct answer. over all frames. Before applying our losses (left figure), overall scores are relatively low. After using the losses, overall scores increased, and especially, scores around in-frames get much higher. 7 Conclusion We presented our dual-level attention and frameselection gates model and novel losses for more effective frame-selection. Furthermore, we employed dense captions to help the model better find clues from salient regions for answering questions. Each component added to our base model architecture (proposed loss functions and the adoption of dense captions) significantly improves the model’s performance. Overall, our model outperforms the state-of-the-art models on the TVQA leaderboard, while showing more balanced scores on the diverse TV show genres. Acknowledgments We thank the reviewers for their helpful comments. This work was supported by NSF Award 1840131, ARO-YIP Award W911NF-18-1-0336, DARPA KAIROS Grant FA8750-19-2-1004, and awards from Google and Facebook. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR. Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In Proceedings of the IEEE International Conference on Computer Vision, pages 5803–5812. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV). Xiyang Dai, Bharat Singh, Guyue Zhang, Larry S Davis, and Yan Qiu Chen. 2017. Temporal context network for activity localization in videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 5793–5802. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 457–468. Adrien Gaidon, Zaid Harchaoui, and Cordelia Schmid. 2013. Temporal localization of actions with actoms. IEEE transactions on pattern analysis and machine intelligence, 35(11):2782–2795. Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. 2017. Tall: Temporal activity localization via language query. In Proceedings of the IEEE International Conference on Computer Vision, pages 5267– 5275. 4821 Noa Garcia, Mayu Otani, Chenhui Chu, and Yuta Nakashima. 2020. Knowit vqa: Answering knowledge-based questions about videos. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence. Ross Girshick. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR). Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. 2017. Tgif-qa: Toward spatiotemporal reasoning in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2758–2766. Justin Johnson, Andrej Karpathy, and Li Fei-Fei. 2016. Densecap: Fully convolutional localization networks for dense captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128–3137. Hyounghun Kim and Mohit Bansal. 2019. Improving visual question answering by referring to generated paragraph captions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Junyeong Kim, Minuk Ma, Kyungsu Kim, Sungjin Kim, and Chang D Yoo. 2019a. Gaining extra supervision via multi-task learning for multi-modal video question answering. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE. Junyeong Kim, Minuk Ma, Kyungsu Kim, Sungjin Kim, and Chang D Yoo. 2019b. Progressive attention memory network for movie story question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8337–8346. Kyung-Min Kim, Min-Oh Heo, Seong-Ho Choi, and Byoung-Tak Zhang. 2017. Deepstory: video story qa by deep embedded memory networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 2016–2022. AAAI Press. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015,San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2017. A hierarchical approach for generating descriptive image paragraphs. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 3337–3345. IEEE. Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. 2018. Tvqa: Localized, compositional video question answering. In EMNLP. Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. 2020. Tvqa+: Spatio-temporal grounding for video question answering. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Hui Li, Peng Wang, Chunhua Shen, and Anton van den Hengel. 2019. Visual question answering as reading comprehension. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, and Eric P Xing. 2017. Recurrent topic-transition gan for visual paragraph generation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3362–3371. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances In Neural Information Processing Systems, pages 289–297. Tegan Maharaj, Nicolas Ballas, Anna Rohrbach, Aaron Courville, and Christopher Pal. 2017. A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6884–6893. Luke Melas-Kyriazi, Alexander Rush, and George Han. 2018. Training for diversity in image paragraph captioning. EMNLP. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, and Shih-Fu Chang. 2017. Cdc: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5734–5743. 4822 Zheng Shou, Dongang Wang, and Shih-Fu Chang. 2016. Temporal action localization in untrimmed videos via multi-stage cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1049–1058. Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2016. Movieqa: Understanding stories in movies through question-answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4631–4640. Philippe Weinzaepfel, Zaid Harchaoui, and Cordelia Schmid. 2015. Learning to track for spatio-temporal action localization. In Proceedings of the IEEE international conference on computer vision, pages 3164–3172. Jialin Wu, Zeyuan Hu, and Raymond Mooney. 2019. Generating question relevant captions to aid visual question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3585–3594. Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In European Conference on Computer Vision, pages 451–466. Springer. Linjie Yang, Kevin Tang, Jianchao Yang, and Li-Jia Li. 2017. Dense captioning with joint inference and visual context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2193–2202. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21–29. Amir Zadeh, Michael Chan, Paul Pu Liang, Edmund Tong, and Louis-Philippe Morency. 2019. Social-iq: A question answering benchmark for artificial social intelligence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8807–8817. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li FeiFei. 2016. Visual7w: Grounded question answering in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4995–5004.
2020
435
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4823–4830 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4823 Shaping Visual Representations with Language for Few-Shot Classification Jesse Mu1, Percy Liang1, Noah D. Goodman1,2 Departments of 1Computer Science and 2Psychology Stanford University {muj,ngoodman}@stanford.edu, [email protected] Abstract By describing the features and abstractions of our world, language is a crucial tool for human learning and a promising source of supervision for machine learning models. We use language to improve few-shot visual classification in the underexplored scenario where natural language task descriptions are available during training, but unavailable for novel tasks at test time. Existing models for this setting sample new descriptions at test time and use those to classify images. Instead, we propose language-shaped learning (LSL), an end-toend model that regularizes visual representations to predict language. LSL is conceptually simpler, more data efficient, and outperforms baselines in two challenging few-shot domains. 1 Introduction Humans are powerful and efficient learners partially due to the ability to learn from language (Chopra et al., 2019; Tomasello, 1999). For instance, we can learn about robins not by seeing thousands of examples, but by being told that a robin is a bird with a red belly and brown feathers. This language further shapes the way we view the world, constraining our hypotheses for new concepts: given a new bird (e.g. seagulls), even without language we know that features like belly and feather color are relevant (Goodman, 1955). In this paper, we guide visual representation learning with language, studying the setting where no language is available at test time, since rich linguistic supervision is often unavailable for new concepts encountered in the wild. How can one best use language in this setting? One option is to just regularize, training representations to predict language descriptions. Another is to exploit the compositional nature of language directly by using it as a bottleneck in a discrete latent variable model. c Meta (Snell et al., 2017) Support c Query fθ True LSTMDec gϕ L3 (Andreas et al., 2018) Support Query fθ LSTMEnc hη True LSTMDec gϕ (sample from gϕ at test) LSL (ours) fθ fθ a red cross is below a square a red cross is below a square Auxiliary training (discard at test) Figure 1: We propose few-shot classification models whose learned representations are constrained to predict natural language task descriptions during training, in contrast to models which explicitly use language as a bottleneck for classification (Andreas et al., 2018). For example, the recent Learning with Latent Language (L3; Andreas et al., 2018) model does both: during training, language is used to classify images; at test time, with no language, descriptions are sampled from a decoder conditioned on the language-shaped image embeddings. Whether the bottleneck or regularization most benefits models like L3 is unclear. We disentangle these effects and propose language-shaped learning (LSL), an end-to-end model that uses visual representations shaped by language (Figure 1), thus avoiding the bottleneck. We find that discrete bottlenecks can hurt performance, especially with limited language data; in contrast, LSL is architecturally simpler, faster, uses language more efficiently, and outperforms L3 and baselines across two few-shot transfer tasks. 4824 2 Related Work Language has been shown to assist visual classification in various settings, including traditional visual classification with no transfer (He and Peng, 2017) and with language available at test time in the form of class labels or descriptions for zero(Frome et al., 2013; Socher et al., 2013) or fewshot (Xing et al., 2019) learning. Unlike past work, we have no language at test time and test tasks differ from training tasks, so language from training cannot be used as additional class information (cf. He and Peng, 2017) or weak supervision for labeling additional in-domain data (cf. Hancock et al., 2018). Our setting can be viewed as an instance of learning using privileged information (LUPI; Vapnik and Vashist, 2009), where richer supervision augments a model only during training. In this framework, learning with attributes and other domain-specific rationales has been tackled extensively (Zaidan et al., 2007; Donahue and Grauman, 2011; Tokmakov et al., 2019); language less so. Gordo and Larlus (2017) use METEOR scores between captions as a similarity measure for specializing embeddings for image retrieval, but do not directly ground language explanations. Srivastava et al. (2017) explore a supervision setting similar to ours, except in simple text and symbolic domains where descriptions can be easily converted to executable logical forms via semantic parsing. Another line of work studies the generation of natural language explanations for interpretability across language (e.g. entailment; Camburu et al., 2018) and vision (Hendricks et al., 2016, 2018) tasks, but here we examine whether predicting language can actually improve task performance; similar ideas have been explored in text (Rajani et al., 2019) and reinforcement learning (Bahdanau et al., 2019; Goyal et al., 2019) domains. 3 Language-shaped learning We are interested in settings where language explanations can help learn representations that generalize more efficiently across tasks, especially when training data for each task is scarce and there are many spurious hypotheses consistent with the input. Thus, we study the few-shot (meta-)learning setting, where a model must learn from a set of train tasks, each with limited data, and then generalize to unseen tasks in the same domain. Specifically, in N-way, K-shot learning, a task t consists of N support classes {S(t) 1 , . . . , S(t) N } with K examples each: S(t) n = {x(t) n,1, . . . , x(t) n,K}. Each task has M query examples Q(t) = {(x(t) 1 , y(t) 1 ), . . . , (x(t) M , y(t) M )}. Given the m-th query example x(t) m as input, the goal is to predict its class y(t) m ∈{1, . . . , N}. After learning from a set of tasks Ttrain, a model is evaluated on unseen tasks Ttest. While the language approach we propose is applicable to nearly any meta-learning framework, we use prototype networks (Snell et al., 2017), which have a simple but powerful inductive bias for fewshot learning. Prototype networks learn an embedding function fθ for examples; the embeddings of the support examples of a class n are averaged to form a class prototype (omitting task (t) for clarity): cn = 1 K K X k=1 fθ(xn,k). (1) Given a query example (xm, ym), we predict class n with probability proportional to some similarity function s between cn and fθ(xm): pθ(ˆym = n | xm) ∝exp (s (cn, fθ (xm))) . (2) fθ is then trained to minimize the classification loss LCLS(θ) = − M X m=1 log pθ (ˆym = ym | xm) . (3) 3.1 Shaping with language Now assume that during training we have for each class Sn a set of Jn associated natural language descriptions Wn = {w1, . . . , wJn}. Each wj should explain the relevant features of Sn and need not be associated with individual examples.1 In Figure 1, we have one description w1 = (A, red, . . . , square). Our approach is simple: we encourage fθ to learn prototypes that can also decode the class language descriptions. Let ˜cn be the prototype formed by averaging the support and query examples of class n. Then define a language model gφ (e.g., a recurrent neural network), which conditioned on 1If we have language associated with individual examples, we can regularize at the instance-level, essentially learning an image captioner. We did not observe major gains with instancelevel supervision (vs class-level) in the tasks explored here, in which case class-level language is preferable, since it is much easier to obtain. There are likely tasks where instance-level supervision is superior, which we leave for future work. 4825 ˜cn provides a probability distribution over descriptions gφ( ˆwj | ˜cn) with a corresponding natural language loss: LNL(θ, φ) = − N X n=1 Jn X j=1 log gφ(wj | ˜cn), (4) i.e. the total negative log-likelihood of the class descriptions across all classes in the task. Since LNL depends on parameters θ through the prototype ˜cn, this objective should encourage our model to better represent the features expressed in language. Now we jointly minimize both losses: arg min θ,φ [LCLS(θ) + λNLLNL(θ, φ)] , (5) where the hyperparameter λNL controls the weight of the natural language loss. At test time, we simply discard gφ and use fθ to classify. We call our approach language-shaped learning (LSL; Figure 1). 3.2 Relation to L3 L3 (Andreas et al., 2018) has the same basic components of LSL, but instead defines the concepts cn to be embeddings of the language descriptions themselves, generated by an additional recurrent neural network (RNN) encoder hη: cn = hη(wn). During training, the ground-truth description is used for classification, while gφ is trained to produce the description; at test time, L3 samples candidate descriptions ˆwn from gφ, keeping the description most similar to the images in the support set according to the similarity function s (Figure 1). Compared to L3, LSL is simpler since it (1) does not require the additional embedding module hη and (2) does not need the test-time language sampling procedure.2 This also makes LSL much faster to run than L3 in practice: without the language machinery, LSL is up to 50x faster during inference in our experiments. 4 Experiments Here we describe our two tasks and models. For each task, we evaluate LSL, L3, and a prototype network baseline trained without language (Meta; Figure 1). For full details, see Appendix A. 2LSL is similar to the “Meta+Joint” model of Andreas et al. (2018), which did not improve over baseline. However, they used separate encoders for the support and query examples, with only the support encoder trained to predict language, resulting in overfitting of the query encoder. ShapeWorld. First, we use the ShapeWorld (Kuhnle and Copestake, 2017) dataset used by Andreas et al. (2018), which consists of 9000 training, 1000 validation, and 4000 test tasks (Figure 2).3 Each task contains a single support set of K = 4 images representing a visual concept with an associated (artificial) English language description, generated with a minimal recursion semantics representation of the concept (Copestake et al., 2016). Each concept is a spatial relation between two objects, each object optionally qualified by color and/or shape, with 2-3 distractor shapes present. The task is to predict whether a query image x belongs to the concept. For ease of comparison, we report results with models identical to Andreas et al. (2018), where fθ is the final convolutional layer of a fixed ImageNetpretrained VGG-16 (Simonyan and Zisserman, 2015) fed through two fully-connected layers: fθ(x) = FC(ReLU(FC(VGG-16(x)))). (6) However, because fixed ImageNet representations may not be the most appropriate choice for artificial data, we also run experiments with convolutional networks trained from scratch: either the 4-layer convolutional backbone used in much of the few-shot literature (Chen et al., 2019), as used in the Birds experiments we describe next, or a deeper ResNet-18 (He et al., 2016). This is a special binary case of the few-shot learning framework, with a single positive support class S and prototype c. Thus, we define the similarity function to be the sigmoid function s(a, b) = σ(a · b) and the positive prediction P(ˆy = 1 | x) = s (fθ(x), c). gφ is a 512dimensional gated recurrent unit (GRU) RNN (Cho et al., 2014) trained with teacher forcing. Through a grid search on the validation set, we set λNL = 20. Birds. To see if LSL can scale to more realistic scenarios, we use the Caltech-UCSD Birds dataset (Wah et al., 2011), which contains 200 bird species, each with 40–60 images, split into 100 train, 50 validation, and 50 test classes. During training, tasks are sampled dynamically by selecting N classes from the 100 train classes. K support and 16 query examples are then sampled from each class (similarly for val and test). For language, we use the descriptions collected by Reed et al. (2016), where 3This is a larger version with 4x as many test tasks for more stable confidence intervals (see Appendix A). 4826 This bird has distinctive-looking brown and white stripes all over its body, and its brown tail sticks up. The bird has a white underbelly, black feathers in the wings, a large wingspan, and a white beak. Birds ShapeWorld a cyan pentagon is to the right of a magenta shape Support True Query False Query Figure 2: Example language and query examples for ShapeWorld and Birds. AMT crowdworkers were asked to describe individual images of birds in detail, without reference to the species (Figure 2). While 10 English descriptions per image are available, we assume a more realistic scenario where we have much less language available only at the class level: removing associations between images and their descriptions, we aggregate D descriptions for each class, and for each K-shot training task we sample K descriptions from each class n to use as descriptions Wn. This makes learning especially challenging for LSL due to noise from captions that describe features only applicable to individual images. Despite this, we found improvements with as few as D = 20 descriptions per class, which we report as our main results, but also vary D to see how efficiently the models use language. We evaluate on the N = 5-way, K = 1-shot setting, and as fθ use the 4-layer convolutional backbone proposed by Chen et al. (2019). Here we use a learned bilinear similarity function, s(a, b) = a⊤Wb, where W is learned jointly with the model. gφ is a 200-dimensional GRU, and with another grid search we set λNL = 5. 5 Results Results are in Table 1. For ShapeWorld, LSL outperforms the meta-learning baseline (Meta) by 6.7%, and does at least as well as L3; Table 2 shows similar trends when fθ is trained from scratch. For Birds, LSL has a smaller but still significant 3.3% increase over Meta, while L3 drops below baseline. Furthermore, LSL uses language more efficiently: Figure 3 shows Birds performance as the captions per class D increases from 1 (100 total) to 60 (6000 total). LSL benefits from a remarkably small number of captions, with limited gains past 20; in contrast, L3 requires much more language to 50 52 54 56 58 60 1 5 10 20 30 40 50 60 D descriptions/class Birds Accuracy Model LSL L3 Figure 3: Varying the descriptions per class, D, for Birds. Each dot is a separate independently trained model. The dashed lines represent independently trained baselines (Meta). This bird has a white belly and breast with brown wings and a black crown. This is a dark gray bird with a light brown belly. Stripes tarsuses are both light, olive colored head, small songbird edges to light brown. Dark grey feathers and bright red with a black pointed beak. Figure 4: Examples of language generated by the L3 decoder gφ for Birds validation images. Since the LSL decoder is identically parameterized, it generates similar language. even approach baseline performance. In the low-data regime, L3’s lower performance is unsurprising, since it must generate language at test time, which is difficult with so little data. Example output from the L3 decoder in Figure 4 highlights this fact: the language looks reasonable in some cases, but in others has factual errors (dark gray bird; black pointed beak) and fluency issues. These results suggest that any benefit of L3 is likely due to the regularizing effect that language has on its embedding model fθ, which has been trained to predict language for test-time inference; in fact, the discrete bottleneck actually hurts in some settings. By using only the regularized visual representations and not relying exclusively on the generated language, LSL is the simpler, more efficient, and overall superior model. Table 1: Test accuracies (± 95% CI) across 1000 (ShapeWorld) and 600 (Birds) tasks. ShapeWorld Birds (D = 20) Meta 60.59 ± 1.07 57.97 ± 0.96 L3 66.60 ± 1.18 53.96 ± 1.06 LSL 67.29 ± 1.03 61.24 ± 0.96 4827 Table 2: ShapeWorld performance with different fθ architectures trained from scratch. fθ Conv4 ResNet-18 Meta 50.91 ± 1.10 58.73 ± 1.08 L3 62.28 ± 1.09 67.90 ± 1.07 LSL 63.25 ± 1.06 68.76 ± 1.02 50 53 56 59 62 Accuracy Birds 50 55 60 65 70 Accuracy ShapeWorld ar LSL Only Color No Color Shuffled Words Shuffled Captions Meta Figure 5: Language ablations. Error bars are 95% CIs. 5.1 Language ablation To identify which aspects of language are most helpful, in Figure 5 we examine LSL performance under ablated language supervision: (1) keeping only a list of common color words, (2) filtering out color words, (3) shuffling the words in each caption, and (4) shuffling the captions across tasks (see Figure 6 for examples). We find that while the benefits of color/no-color language varies across tasks, neither component provides the benefit of complete language, demonstrating that LSL leverages both colors and other attributes (e.g. size, shape) described in language. Word order is important for Birds but surprisingly unimportant for ShapeWorld, suggesting that even with decoupled colors and shapes, the model can often infer the correct relation from the shapes that consistently appear in the examples. Finally, when captions are shuffled across tasks, LSL for Birds does no worse than Meta, while ShapeWorld suffers, suggesting that language is more important for ShapeWorld than for the fine-grained, attributebased Birds task. 6 Discussion We presented LSL, a few-shot visual recognition model that is regularized with language descriptions during training. LSL outperforms baselines across two tasks and uses language supervision more efficiently than L3. We find that if a model is trained to expose the features and abstractions in language, a linguistic bottleneck on top of these Birds ShapeWorld a cyan pentagon is to the right of a magenta shape cyan magenta a pentagon is to the right of a shape shape right the is a pentagon a of cyan to magenta a green square is below a triangle The bird has a white underbelly, black feathers in the wings, a large wingspan, and a white beak. white black white The bird has a underbelly feathers in the wings, a large wingspan, and a beak. The , a and a . , beak bird in wingspan feathers large the black white underbelly has , white a wings This magnificent fellow is almost all black with a red crest, and white cheek patch. Original Only Color No Color Shuffled Words Shuffled Captions Figure 6: Examples of ablated language supervision for the Birds and ShapeWorld tasks. language-shaped representations is unnecessary, at least for the kinds of visual tasks explored here. The line between language and sufficiently rich attributes and rationales is blurry, and recent work (Tokmakov et al., 2019) suggests that similar performance gains can likely be observed by regularizing with attributes. However, unlike attributes, language is (1) a more natural medium for annotators, (2) does not require preconceived restrictions on the kinds of features relevant to the task, and (3) is abundant in unsupervised forms. This makes shaping representations with language a promising and easily accessible way to improve the generalization of vision models in low-data settings. Acknowledgments We thank Pang Wei Koh, Sebastian Schuster, and Dan Iter for helpful discussions and feedback, Mike Wu and Jacob Andreas for discussions and code, and our anonymous reviewers for insightful comments. This work was supported by an NSF Graduate Research Fellowship for JM, a SAIL-Toyota Research Award, and the Office of Naval Research grant ONR MURI N00014-16-1-2007. Toyota Research Institute (TRI) provided funds to assist the authors with their research but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. Reproducibility Code, data, and experiments are available at https: //github.com/jayelm/lsl and on CodaLab at https://bit.ly/lsl_acl20. 4828 References Jacob Andreas, Dan Klein, and Sergey Levine. 2018. Learning with latent language. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2166–2179. Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, and Edward Grefenstette. 2019. Learning to understand goal specifications by modelling reward. In International Conference on Learning Representations (ICLR). Oana-Maria Camburu, Tim Rockt¨aschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: natural language inference with natural language explanations. In Advances in Neural Information Processing Systems (NeurIPS), pages 9539–9549. Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, YuChiang Frank Wang, and Jia-Bin Huang. 2019. A closer look at few-shot classification. In International Conference on Learning Representations (ICLR). Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Sahil Chopra, Michael Henry Tessler, and Noah D Goodman. 2019. The first crank of the cultural ratchet: Learning and transmitting concepts through language. In Proceedings of the 41st Annual Meeting of the Cognitive Science Society, pages 226–232. Ann A Copestake, Guy Emerson, Michael Wayne Goodman, Matic Horvat, Alexander Kuhnle, and Ewa Muszynska. 2016. Resources for building applications with dependency minimal recursion semantics. In International Conference on Language Resources and Evaluation (LREC). Jeff Donahue and Kristen Grauman. 2011. Annotator rationales for visual recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1395–1402. Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Ranzato Marc’Aurelio, and Tomas Mikolov. 2013. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems (NeurIPS), pages 2121– 2129. Nelson Goodman. 1955. Fact, fiction, and forecast. Harvard University Press, Cambridge, MA. Albert Gordo and Diane Larlus. 2017. Beyond instance-level image retrieval: Leveraging captions to learn a global visual representation for semantic retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6589–6598. Prasoon Goyal, Scott Niekum, and Raymond J. Mooney. 2019. Using natural language for reward shaping in reinforcement learning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 2385– 2391. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R´e. 2018. Training classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1884– 1895. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778. Xiangteng He and Yuxin Peng. 2017. Fine-grained image classification via combining vision and language. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5994–6002. Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. 2016. Generating visual explanations. In Proceedings of the European Conference on Computer Vision (ECCV), pages 3–19. Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata. 2018. Grounding visual explanations. In Proceedings of the European Conference on Computer Vision (ECCV), pages 264–279. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR). Alexander Kuhnle and Ann Copestake. 2017. Shapeworld-a new test methodology for multimodal language understanding. arXiv preprint arXiv:1704.04517. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! Leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 4932–4942, Florence, Italy. 4829 Scott Reed, Zeynep Akata, Honglak Lee, and Bernt Schiele. 2016. Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 49–58. Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR). Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NeurIPS), pages 4077–4087. Richard Socher, Milind Ganjoo, Christopher D Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Advances in Neural Information Processing Systems (NeurIPS), pages 935–943. Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1527–1536. Pavel Tokmakov, Yu-Xiong Wang, and Martial Hebert. 2019. Learning compositional representations for few-shot recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 6372–6381. Michael Tomasello. 1999. The Cultural Origins of Human Cognition. Harvard University Press, Cambridge, MA. Vladimir Vapnik and Akshay Vashist. 2009. A new learning paradigm: Learning using privileged information. Neural Networks, 22(5-6):544–557. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. 2011. The CaltechUCSD Birds-200-2011 dataset. Chen Xing, Negar Rostamzadeh, Boris Oreshkin, and Pedro O Pinheiro. 2019. Adaptive cross-modal fewshot learning. In Advances in Neural Information Processing Systems (NeurIPS), pages 4848–4858. Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using annotator rationales to improve machine learning for text categorization. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference (NAACL-HLT), pages 260–267. 4830 A Model and training details A.1 ShapeWorld fθ. Like Andreas et al. (2018), fθ starts with features extracted from the last convolutional layer of a fixed ImageNet-pretrained VGG-19 network (Simonyan and Zisserman, 2015). These 4608-d embeddings are then fed into two fully connected layers ∈R4608×512, R512×512 with one ReLU nonlinearity in between. LSL. For LSL, the 512-d embedding from fθ directly initializes the 512-d hidden state of the GRU gφ. We use 300-d word embeddings initialized randomly. Initializing with GloVe (Pennington et al., 2014) made no significant difference. L3. fθ and gφ are the same as in LSL and Meta. hη is a unidirectional 1-layer GRU with hidden size 512 sharing the same word embeddings as gφ. The output of the last hidden state is taken as the embedding of the description w(t). Like Andreas et al. (2018), a total of 10 descriptions per task are sampled at test time. Training. We train for 50 epochs, each epoch consisting of 100 batches with 100 tasks in each batch, with the Adam optimizer (Kingma and Ba, 2015) and a learning rate of 0.001. We select the model with highest epoch validation accuracy during training. This differs slightly from Andreas et al. (2018), who use different numbers of epochs per model and did not specify how they were chosen; otherwise, the training and evaluation process is the same. Data. We recreated the ShapeWorld dataset using the same code as Andreas et al. (2018), except generating 4x as many test tasks (4000 vs 1000) for more stable confidence intervals. Note that results for both L3 and the baseline model (Meta) are 3–4 points lower than the scores reported in Andreas et al. (2018) (because performance is lower for all models, we are not being unfair to L3). This is likely due to differences in model initialization due to our PyTorch reimplementation and/or recreation of the dataset with more test tasks. A.2 Birds fθ. The 4-layer convolutional backbone fθ is the same as the one used in much of the few-shot literature (Chen et al., 2019; Snell et al., 2017). The model has 4 convolutional blocks, each consisting of a 64-filter 3x3 convolution, batch normalization, ReLU nonlinearity, and 2x2 max-pooling layer. With an input image size of 84 × 84 this results in 1600-d image embeddings. Finally, the bilinear matrix W used in the similarity function has dimension 1600 × 1600. LSL. The resulting 1600-d image embeddings are fed into a single linear layer ∈R1600×200 which initializes the 200-d hidden state of the GRU. We initialize embeddings with GloVe. We did not observe significant gains from increasing the size of the decoder gφ. L3. fθ and gφ are the same. hη is a unidirectional GRU with hidden size 200 sharing the same embeddings as gφ. The last hidden state is taken as the concept cn. 10 descriptions per class are sampled at test time. We did not observe significant gains from increasing the size of the decoder gφ or encoder hη, nor increasing the number of descriptions sampled per class at test. Training. For ease of comparison to the few-shot literature we use the same training and evaluation process as Chen et al. (2019). Models are trained for 60000 episodes, each episode consisting of one randomly sampled task with 16 query images per class. Like Chen et al. (2019), they are evaluated on 600 episodes. We use Adam with a learning rate of 0.001 and select the model with the highest validation accuracy after training. Data. Like Chen et al. (2019), we use standard data preprocessing and training augmentation: ImageNet mean pixel normalization, random cropping, horizontal flipping, and color jittering.
2020
436
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4831–4842 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4831 Discrete Latent Variable Representations for Low-Resource Text Classification Shuning Jin1∗ Sam Wiseman2 Karl Stratos1 Karen Livescu2 1Rutgers University 2Toyota Technological Institute at Chicago [email protected], {swiseman,klivescu}@ttic.edu, [email protected] Abstract While much work on deep latent variable models of text uses continuous latent variables, discrete latent variables are interesting because they are more interpretable and typically more space efficient. We consider several approaches to learning discrete latent variable models for text in the case where exact marginalization over these variables is intractable. We compare the performance of the learned representations as features for lowresource document and sentence classification. Our best models outperform the previous best reported results with continuous representations in these low-resource settings, while learning significantly more compressed representations. Interestingly, we find that an amortized variant of Hard EM performs particularly well in the lowest-resource regimes.1 1 Introduction Deep generative models with latent variables have become a major focus of NLP research over the past several years. These models have been used both for generating text (Bowman et al., 2016) and as a way of learning latent representations of text for downstream tasks (Yang et al., 2017; Gururangan et al., 2019). Most of this work has modeled the latent variables as being continuous, that is, as vectors in Rd, in part due to the simplicity of performing inference over (certain) continuous latents using variational autoencoders and the reparameterization trick (Kingma and Welling, 2014; Rezende et al., 2014). At the same time, deep generative models with discrete latent variables are attractive because the latents are arguably more interpretable, and because they lead to significantly more compressed ∗Work done as an intern at Toyota Technological Institute at Chicago. 1Code available on GitHub: https://github.com/ shuningjin/discrete-text-rep representations: A representation consisting of M floating point values conventionally requires M × 32 bits, whereas M integers in {1, . . . , K} requires only M × log2 K bits. Unfortunately, discrete latent variable models have a reputation for being more difficult to learn. We conduct a thorough comparison of several popular methods for learning such models, all within the framework of maximizing the evidence lower bound (ELBO) on the training data. In particular, we compare learning such models either with a Vector Quantized-VAE (van den Oord et al., 2017, VQ-VAE), a more conventional VAE with discrete latent variables (Jang et al., 2017; Maddison et al., 2017), or with an amortized version of “Hard” or “Viterbi” Expectation Maximization (Brown et al., 1993), which to our knowledge has not been explored to date. We consider both models where the latents are local (i.e., per token) and where they are global (i.e., per sentence); we assess the quality of these learned discrete representations as features for a low-resource text classifier, as suggested by Gururangan et al. (2019), and in a nearest neighborbased retrieval task. Our classification experiments distinguish between (1) the setting where the classifier must consume only the discrete representation associated with each sentence (i.e., the discrete assignment that maximizes the approximate posterior), and (2) the setting where the classifier may consume the embeddings of this discrete representation learned by the VAE encoder. Note that the former setting is more flexible, since we need only store a sentence’s discrete representation, and are therefore free to use task-specific (and possibly much smaller) architectures for classification. In case (1), we are able to effectively match the performance of Gururangan et al. (2019) and other baselines; in case (2), we outperform them. Our experiments also suggest that Hard EM performs particularly well in 4832 case (1) when there is little supervised data, and that VQ-VAE struggles in this setting. 2 Related Work Our work builds on recent advances in discrete representation learning and its applications. In particular, we are inspired by recent success with VQ-VAEs outside NLP (van den Oord et al., 2017; Razavi et al., 2019). These works show that we can generate realistic speech and image samples from discrete encodings, which better align with symbolic representations that humans seem to work with (e.g., we naturally encode continuous speech signals into discrete words). Despite its success in speech and vision, VQ-VAE has not been considered as much in NLP. One exception is the translation model of Kaiser et al. (2018) that encodes a source sequence into discrete codes using vector quantization. But their work focuses on making inference faster, by decoding the target sequence from the discrete codes non-autoregressively. To our knowledge, we are the first that explores general text representations induced by VQ-VAEs for semi-supervised and transfer learning in NLP. In addition to exploring the viability of VQVAEs for text representation learning, an important part of this paper is a systematic comparison between different discretization techniques. GumbelSoftmax (Jang et al., 2017; Maddison et al., 2017) is a popular choice that has been considered for supervised text classification (Chen and Gimpel, 2018) and dialog generation (Zhao et al., 2018). In the binary latent variable setting, straight-through estimators are often used (Dong et al., 2019). Another choice is “continuous decoding” which takes a convex combination of latent values to make the loss differentiable (Al-Shedivat and Parikh, 2019). Yet a less considered choice is Hard EM (Brown et al., 1993; De Marcken, 1995; Spitkovsky et al., 2010). A main contribution of this work is a thorough empirical comparison between such different choices in a controlled setting. To demonstrate the usefulness of our models, we focus on improving low-resource classification performance by pretraining on unlabeled text. Previous best results are obtained with continuous latentvariable VAEs, e.g., VAMPIRE (Gururangan et al., 2019). We show that our discrete representations outperform these previous results while being significantly more lightweight. 3 Background We consider generative models of a sequence x = x1:T of T word tokens. We assume our latents to be a sequence z = z1:L of L discrete latent vectors, each taking a value in {1, . . . , K}M; that is, z ∈{1, . . . , K}M×L. As is common in VAE-style models of text, we model the text autoregressively, and allow arbitrary interdependence between the text and the latents. That is, we have p(x, z; θ) = p(z) × QT t=1 p(xt | x<t, z; θ), where θ are the generative model’s parameters. We further assume p(z) to be a fully factorized, uniform prior: p(z) = 1 KML . Maximizing the marginal likelihood of such a model will be intractable for moderate values of K, M, and L. So we consider learning approaches that maximize the ELBO (Jordan et al., 1999) in an amortized way (Kingma and Welling, 2014; Rezende et al., 2014): ELBO(θ, φ) = Eq(z | x;φ)  log p(x, z; θ) q(z | x; φ)  , where q(z | x; φ) is the approximate posterior given by an inference or encoder network with parameters φ. The approaches we consider differ in terms of how this approximate posterior q is defined. Mean-Field Categorical VAE (CatVAE) A standard Categorical VAE parameterizes the approximate posterior as factorizing over categorical distributions that are independent given x. We therefore maximize: Eq(z | x;φ) [log p(x | z; θ)] − X m,l KL(qml||pml) = Eq(z | x;φ)) [log p(x | z; θ)] + X m,l H(qml) −ML log K, where q(z | x; φ)= QM m=1 QL l=1 qml(zml | x; φ), pml = 1/K, and H is the entropy. We approximate the expectation above by sampling from the qml, and we use the straight-through gradient estimator (Bengio et al., 2013; Jang et al., 2017) to compute gradients with respect to φ. We find this approach to be more stable than using the REINFORCE (Williams, 1992) gradient estimator, or a Concrete (Maddison et al., 2017; Jang et al., 2017) approximation to categorical distributions. Specifically, we sample from a categorical distribution using the Gumbel-Max trick (Maddison et al., 2014) in the forward pass, and approximate the 4833 gradient using softmax with a small temperature. This approach is also referred to as straight-through Gumbel-Softmax (Jang et al., 2017). VQ-VAE A VQ-VAE (van den Oord et al., 2017; Razavi et al., 2019) can also be seen as maximizing the ELBO, except the approximate posterior is assumed to be a point mass given by qml(zml|x) = ( 1 if zml = ˆzml 0 otherwise , where ˆzml = arg min j∈{1,...,K} ||e(m) j −enc(x)ml||2, (1) and e(m) j ∈Rd is an embedding of the jth discrete value zml can take on, and enc(x)ml ∈Rd is an encoding corresponding to the mlth latent given by an encoder network. These e(m) j embedding vectors are often referred to as a VQ-VAE’s “code book”. In our setting, a code book is shared across latent vectors. VQ-VAEs are typically learned by maximizing the ELBO assuming degenerate approximate posteriors as above, plus two terms that encourage the encoder embeddings and the “code book” embeddings to become close. In particular, we attempt to maximize the objective: log p(x | ˆz) − X m,l ||sg(enc(x)ml) −e(m) ˆzm,l||2 2 (2) −β X m,l || enc(x)ml −sg(e(m) ˆzm,l)||2 2, where sg is the stop-gradient operator, and ˆz = ˆz1:L is the sequence of minimizing assignments ˆzm,l for each enc(x)ml. The loss term following the β is known as the “commitment loss”. Gradients of the likelihood term with respect to enc(x) are again estimated with the straight-through gradient estimator. Hard EM We train with an amortized form of Hard EM. First we define a relaxed version of z, ˜z, where each ˜zml is a softmax over K outputs (rather than a hard assignment) and is produced by an inference network with parameters φ.2 In the E-Step, we take a small, constant number of 2Note this assumes our generative model can condition on such a relaxed latent variable. e3 e6 3 6 ... Code books ... VQ-VAE e3 e6 3 6 CatVAE Nearest neighbor index Matrix multiplication Real vector Categorical sample Matrix multiplication Probability vector E2 Encoder E1 E3 [1,8], [3,6], [7,2] Decoder E1 E0 E2 E3 w2 w1 w3 w4 Attention Local is This local is This local <SOS> is This local <EOS> E2 Encoder E1 E3 Decoder E1 E0 E2 E3 w2 w1 w3 w4 Attention Global This is global is This global <SOS> This is global <EOS> [3,6] Figure 1: Discrete VAE architectures with M = 2. The Local (middle) and Global (bottom) models are two different encoder-decoder setups. The top row shows the procedure of converting continuous output from encoder into discrete input to decoder by drawing discrete samples: VQ-VAE (top left) draws samples from point mass distributions using nearest neighbor lookup from the code books; CatVAE (top right) samples from categorical distributions directly. gradient steps to maximize log p(x | ˜z; θ) with respect to φ (for a fixed θ). In the M-Step, we take a single gradient step to maximize log p(x | ˆz; θ) with respect to θ, where ˆz contains the elementwise argmaxes of ˜z as produced by the inference network (with its most recent parameters φ). Thus, Hard EM can also be interpreted as maximizing the (relaxed) ELBO. We also note that taking multiple steps in the hard E-step somewhat resembles the recently proposed aggressive training of VAEs (He et al., 2019). 4 Models and Architectures Recall that the latent sequence is z = z1:L, where zl ∈{1, . . . , K}M. We consider two generative models p(x | z; θ), one where L = T and one where L = 1. Each latent in the former model corresponds to a word, and so we refer to this as a “local” model, whereas in the second model we view the latents as being “global”, since there is one latent vector for the whole sentence. We use the following architectures for our encoders and decoder, as illustrated in Figure 1. 4834 4.1 Encoder The encoder (parameterized by φ) maps an example x to the parameters of an approximate posterior distribution. Our encoder uses a single-layer Transformer (Vaswani et al., 2017) network to map x = x1:T to a sequence of T vectors h1, . . . , hT , each in Rd. Mean-Field Categorical VAE For the local model, we obtain the parameters of each categorical approximate posterior qmt as softmax(Wm ht), where each Wm ∈RK×d is a learned projection. For the global model, we obtain the parameters of each categorical approximate posterior qm1 as softmax  P t Wm ht T  ; that is, we pass token-level ht vectors through learned projections Wm, followed by mean-pooling. VQ-VAE For the local model, let ˜d = d/M. We obtain enc(x)mt, the encoding of the mtth latent variable, as ht,(m−1) ˜d:m ˜d, following Kaiser et al. (2018). That is, we take the mth ˜d-length subvector of ht. For the global model, let ˜d = d. We first project ht to RMd, mean-pool, and obtain enc(x)m1 by taking the mth ˜d-length subvector of the resulting pooled vector. A VQ-VAE also requires learning a code book, and we define M code books E(m) = [e(m) 1 ⊤; . . . ; e(m) K ⊤] ∈RK× ˜d . Hard EM We use the same encoder architecture as in the mean-field Categorical VAE case. Note, however, that we do not sample from the resulting categorical distributions. Rather, the softmax distributions are passed directly into the decoder. 4.2 Decoder In the case of the mean-field Categorical VAE, we obtain a length-L sequence of vectors zl ∈ {1, . . . , K}M after sampling from the approximate posteriors. For the VQ-VAE, on the other hand, we obtain the sequence of ˆzl vectors by taking the indices of the closest code book embeddings, as in Equation (1). In both cases, the resulting sequence of discrete vectors is embedded and consumed by the decoder. In particular, when learning with a VQ-VAE, the embedding of ˆzml is simply e(m) ˆzml , whereas for the Categorical VAE each discrete latent is embedded using a trained embedding layer. In the local model, when M > 1, we concatenate the M embeddings to form a single real vector embedding for the lth latent variable. In the global model, we use the M embeddings directly. This resulting sequence of T or M real vectors is then viewed as the source side input for a standard 1-layer Transformer encoderdecoder model (Vaswani et al., 2017), which decodes x using causal masking. As above, for Hard EM, we do not obtain a sequence of discrete vectors from the encoder, but rather a sequence of softmax distributions. These are multiplied into an embedding layer, as in the Categorical VAE case, and fed into the Transformer encoder-decoder model. 5 Evaluating Latent Representations Similar to Gururangan et al. (2019), we evaluate the learned latent representations by using them as features in a text classification system. We are in particular interested in using latent representations learned on unlabeled text to help improve the performance of classifiers trained on a small amount of labeled text. Concretely, we compare different discrete latent variable models in following steps: 1. Pretraining an encoder-decoder model on indomain unlabeled text with an ELBO objective, with early stopping based on validation perplexity. 2. Fixing the encoder to get discrete latents for the downstream classification task, and training a small number of task-specific parameters on top, using varying amounts of labeled data. As noted in the introduction, we consider both reembedding these latents from scratch, or using the embeddings learned by the encoder. 5.1 Tasks and Datasets The datasets we use for classification are AG News, DBPedia, and Yelp Review Full (Zhang et al., 2015), which correspond to predicting news labels, Wikipedia ontology labels, and the number of Yelp stars, respectively. The data details are summarized in Table 1. For all datasets, we randomly sample 5,000 examples as development data. To evaluate the efficiency of the latent representation in lowresource settings, we train the classifier with varying numbers of labeled instances: 200, 500, 2500, and the full training set size (varies by dataset). We use accuracy as the evaluation metric. In preprocessing, we space tokenize, lowercase, and clean the text as in Kim (2014), and then truncate each sentence to a maximum sequence length of 400. For each dataset, we use a vocabulary of the 30,000 most common words. 4835 Dataset # Classes Train Dev Test AG News 4 115K 5K 7.6K DBPedia 14 555K 5K 70K Yelp Review Full 5 645K 5K 50K Table 1: The number of classes and the numbers of examples in each data subset, for the classification tasks. 5.2 Transfer Paradigm When transferring to a downstream classification task, we freeze the pretrained encoder and add a lightweight classifier on top, viewing each sentence as an L-length sequence of vectors in {1, . . . , K}M, as described in Section 4. For instance, the sentence (from the DBPedia dataset) “backlash is a 1986 australian film directed by bill bennett” is encoded as [90, 114, 30, 111] under a global model with M = 4, and as [[251, 38], [44, 123], [94, 58], [228, 53], [88, 55], [243, 43], [66, 236], [94, 72], [172, 61], [236, 150]] under a local model with M = 2. As noted in the introduction, we consider two ways of embedding the integers for consumption by a classifier. We either (1) learn a new taskspecific embedding space E(m) task (i.e., reembedding) or (2) use the fixed embedding space E(m) from pretraining. The first setting allows us to effectively replace sentences with their lower dimensional discrete representations, and learn a classifier on the discrete representations from scratch. In the local model, we obtain token-level embedding vectors by concatenating the M subvectors corresponding to each word. The resulting embeddings are either averaged, or fed to a Transformer and then averaged, and finally fed into a linear layer followed by a softmax. 6 Experimental Details 6.1 Baselines We first experiment with three common text models: CBOW (Mikolov et al., 2013), bidirectional LSTM (Hochreiter and Schmidhuber, 1997), and a single-layer Transformer encoder. We find CBOW (with 64-dimensional embeddings) to be the most robust in settings with small numbers of labeled instances, and thus report results only with this baseline among the three. Further, we compare to VAMPIRE (Gururangan et al., 2019), a framework of pretraining VAEs for text classification using continuous latent variables. We pretrain VAMPIRE models on in-domain text for each dataset with 60 random hyperparameter search (with same ranges as specified in their Appendix A.1), and select best models based on validation accuracy in each setting. 6.2 Hyperparameters In our experiments, we use Transformer layers with dmodel = 64. For optimization, we use Adam (Kingma and Ba, 2015), either with a learning rate of 0.001 or with the inverse square-root schedule defined in Vaswani et al. (2017) in pretraining. We use a learning rate of 0.0003 in classification. We tune other hyperparameters with random search and select the best settings based on validation accuracy. For the latent space size, we choose M in {1, 2, 4, 8, 16} and K in {128, 256, 512, 1024, 4096}. Model specific hyperparameters are introduced below. 6.3 VQ-VAE In VQ-VAE, an alternative to the objective in Equation (2) is to remove its second term, while using an auxiliary dictionary learning algorithm with exponential moving averages (EMA) to update the embedding vectors (van den Oord et al., 2017). We tune whether to use EMA updates or not. Also, we find small β for commitment loss to be beneficial, and search over {0.001, 0.01, 0.1}. 6.4 Mean-Field Categorical VAE We find that using the discrete analytic KL divergence term directly in the ELBO objective leads to posterior collapse. The KL term vanishes to 0 and the qml distributions converge to the uniform priors. To circumvent this, we modify the KL term to be max(KL, λ). This is known as Free Bits (Kingma et al., 2016; Li et al., 2019), which ensures that the latent variables encode a certain amount of information by not penalizing the KL divergence when it is less than λ. We set λ = γML log K, where γ is a hyperparameter between 0 and 1. That is, we allocate a “KL budget” as a fraction of ML log K, which is the upper bound of KL divergence between ML independent categorical distributions and uniform prior distributions. Since in this case KL(qml(zml | x)||pml(zml)) = log K − H[qml(zml | x)], this is equivalent to thresholding H[qml(zml | x)] by (1 −γ) log K. We experiment with γ ∈{0.2, 0.4, 0.6, 0.8, 1}.3 3Note that when γ ≥1 the VAE reduces to an autoencoder. 4836 200 500 2500 full Labeled Examples 68 70 72 74 76 78 80 Accuracy average_cat-all HardEM CatVAE VQ-VAE global local reembed no reembed Figure 2: The accuracies obtained by Hard EM, Categorical VAE, and VQ-VAE representations, averaged over the AG News, DBPedia, and Yelp Full development datasets, for different numbers of labeled training examples. Triangular and circular markers correspond to global and local models, respectively. Unshaded and shaded markers correspond to reembedding from scratch and using encoder embeddings, respectively. 1 2 4 8 16 M 62 64 66 68 70 72 74 Accuracy Average Accuracy (200 Labels) HardEM CatVAE VQ-VAE global local reembed no reembed Figure 3: The averaged accuracies obtained from using Hard EM, Categorical VAE, and VQ-VAE representations and 200 labeled examples, for different M values. 6.5 Hard EM We vary the number of gradient steps in the E-step in {1, 3}. At evaluation time, we always take the argmax of ˜z to get a hard assignment. 7 Results In Figure 2, we compare the accuracy obtained by the representations from our Hard EM, Categorical VAE, and VQ-VAE models, averaged over the development datasets of AG News, DBPedia, and Yelp Full. In particular, we plot the best accuracy obtained over all hyperparameters (including M) for different numbers of labeled examples; we distinguish between local and global models, and between when the discrete representations are reembedded from scratch and when the encoder embeddings are used. We see that using the encoder embeddings typically outperforms reembedding from scratch, and that global representations tend to outperform local ones, except in the full data regime. Furthermore, we see that the Categorical VAE and VQ-VAE are largely comparable on average, though we undertake a finer-grained comparison by dataset in Appendix A. Perhaps most interestingly, we note that when reembedding from scratch, Hard EM significantly outperforms the other approaches in the lowest data regimes (i.e., for 200 and 500 examples). In fact, Hard EM allows us to match the performance of the best previously reported results even when reembedding from scratch; see Table 3. Table 2 shows the best combinations of model and hyperparameters when training with 200 labeled examples on AG News. These settings were used in obtaining the numbers in Figure 2, and are largely stable across datasets. In Figure 3, we compare the average accuracy of our local and global model variants trained on 200 labeled examples, as we vary M. When reembedding, local representations tend to improve as we move from M = 1 to M = 2, but not significantly after that. When reembedding global representations, performance increases as M does. Unsurprisingly, when not reembedding, M matters less. 4837 Method K M Local CatVAE 4096 1 Local (re) Hard EM 1024 1 Global CatVAE 256 4 Global (re) Hard EM 4096 4 Table 2: Best methods and settings of K and M when training on 200 labeled examples of the AG News corpus and evaluating on the development set. The “(re)” affix indicates that latent variables are reembedded from scratch. Model 200 500 2500 Full AG News CBOW 63.4 (1.5) 72.9 (0.7) 82.1 (0.2) 90.0 (0.2) VAMPIRE⋆83.9 (0.6) 84.5 (0.4) 85.8 (0.2) VAMPIRE 82.2 (0.8) 84.7 (0.2) 86.4 (0.4) 91.0 (0.1) Local 82.7 (0.1) 84.3 (0.3) 85.0 (0.4) 86.6 (0.2) Local (re) 82.7 (0.4) 84.0 (0.3) 85.4 (0.1) 87.1 (0.3) Global 84.6 (0.1) 85.7 (0.1) 86.3 (0.2) 87.5 (0.6) Global (re) 83.9 (0.5) 84.6 (0.2) 85.1 (0.3) 86.9 (0.1) DBPedia CBOW 72.7 (0.6) 84.7 (0.7) 92.8 (0.3) 97.7 (0.1) VAMPIRE 89.1 (1.3) 93.7 (0.5) 95.7 (0.2) 98.2 (0.1) Local 89.2 (0.2) 92.8 (0.4) 94.6 (0.2) 97.1 (0.3) Local (re) 88.7 (0.2) 90.2 (0.3) 93.3 (0.1) 96.9 (0.2) Global 91.8 (0.5) 94.3 (0.3) 95.0 (0.2) 95.6 (0.0) Global (re) 88.5 (0.7) 92.3 (0.7) 94.6 (0.4) 95.8 (0.1) Yelp Full CBOW 31.0 (5.9) 41.1 (0.6) 48.4 (0.4) 58.9 (0.4) VAMPIRE 41.4 (2.9) 47.2 (0.7) 52.5 (0.1) 60.3 (0.1) Local 46.2 (0.8) 49.0 (0.5) 51.9 (0.5) 53.1 (0.3) Local (re) 47.2 (0.7) 49.4 (0.7) 52.1 (0.2) 55.0 (0.6) Global 48.5 (1.0) 50.1 (0.5) 53.0 (0.3) 54.9 (0.4) Global (re) 46.0 (0.5) 47.4 (0.5) 48.8 (0.8) 53.8 (0.3) Table 3: Test accuracy results by dataset and by the number of labeled examples used in training. The scores are averages over five random subsamples, with standard deviations in parentheses and column bests in bold. VAMPIRE⋆for AG News is reported by Gururangan et al. (2019) and VAMPIREs are from our experiments. Finally, we show the final accuracies obtained by our best models on the test data of each dataset in Table 3. We see that on all datasets when there are only 200 or 500 labeled examples, our best model outperforms VAMPIRE and the CBOW baseline, and our models that reembed the latents from scratch match or outperform VAMPIRE. As noted in Table 2, it is Hard EM that is particularly performant in these settings. 8 Analysis and Discussion 8.1 Qualitative analysis To gain a better understanding of what the learned clusters represent, we examine their patterns on the AG News dataset labeled with four classes. Since VQ-VAEs and Categorical VAEs exhibit similar patterns, we focus on the latter model. Tables 4 and 5 show examples of sentence- and word-level clusters, respectively, induced by Categorical VAEs. The sentence-level model encodes each document into M = 4 latents, each taking one of K = 256 integers. The word-level model encodes each word into M = 1 latent taking one of K = 1024 integers. Since a word can be assigned multiple clusters, we take the majority cluster for illustration purposes. We see that clusters correspond to topical aspects of the input (either a document or a word). In particular, in the sentence-level case, documents in the same cluster often have the same ground-truth label. We also find that each of M latents independently corresponds to topical aspects (e.g., z1 = 65 implies that the topic has to do with technology); thus, taking the combination of these latents seems to make the cluster “purer”. The word-level clusters are also organized by topical aspects (e.g., many words in cluster 510 are about modern conflicts in the Middle East). 8.2 Effect of Alternating Optimization While Hard EM achieves impressive performance when reembedding from scratch and when training on only 200 or 500 examples, we wonder whether this performance is due to the alternating optimization, to the multiple E-step updates per M-step update, or to the lack of sampling. We accordingly experiment with optimizing our VQ-VAE and CatVAE variants in an alternating way, allowing multiple inference network updates per update of the generative parameters θ. We show the results on the AG News dataset in Table 6. We find that alternating does generally improve the performance of VQ-VAE and CatVAE as well, though Hard EM performs the best overall when reembedding from scratch. Furthermore, because Hard EM requires no sampling, it is a compelling alternative to CatVAE. For all three methods, we find that doing 3 inference network update steps during alternating optimization performs no better than doing a single one, which suggests that aggressively optimizing the inference network is not crucial in our setting. 4838 Cluster Class Text (23, 155, 24, 53) World a platoon in iraq is being investigated for allegedly refusing to carry out a convoy mission... World afp chechen warlord shamil basayev has claimed responsibility for the deadly school... World the federal government has sent a team of defence personnel to verify a claim that two... World an audio tape purportedly by osama bin laden praises gunmen who attacked a us consulate... (41, 75, 175, 222) Business amazon com says it has reached an agreement to buy joyo com, the largest internet retailer... Business electronic data systems offered voluntary early retirement to about 9, 200 us employees... Business in the aftermath of its purchase of at amp t wireless, cingular wireless is selling several sets... Sci/Tech wired amp wireless continues its reign at the top spot among it priorities due to widespread... (10, 208, 179, 180) Sports this is the week of the season when every giants defensive back needs to have shoulders as... Sports drew henson will have to wait before he’s the star of the dallas cowboys offense right now... Sports st louis how do you beat the greatest show on turf with two rookie cornerbacks... Sports cincinnati bengals coach marvin lewis said yesterday that he expects quarterback carson... (65, 224, 78, 114) Sci/Tech microsoft acknowledged on monday it continued to battle a technical glitch that prevented... Sci/Tech users of the music player should watch out for hacked themes a flaw allows would be... World microsoft’s popular internet explorer has a serious rival in the firefox browser Sci/Tech microsoft has doubled the period of time it will allow business users of windows xp to... Table 4: Examples of sentence-level (M = 4, K = 256) clusters on AG News. Cluster Words 822 government indonesia guilty prison general prosecutors leader law german sex authorities charged marched issue 651 yankees veteran baltimore quarterback offense tampa steelers giants defensive cleveland minnesota pittsburgh 595 month currency low session dollar euro greenback yen monetary weakening lows versus maintained grip rebounded 305 if despite when although 304 core plans intel athlon opteron processors chip hewlett packard strategy clearer forum designs desktop upped ante 802 bit cameras image pleasing integrates multimedia functions gprs automation self types btx supercomputers logic 298 president dick cheney john republicans kerry voters democrat javier sen kellogg 994 exploded bomb near killing injuring explosion eight residents firefighters leak central philippine 55 heavily cancun 484 apple atari san francisco sony toshiba anaheim finally assault famed mp3 freedom u2 accusations brook introduces 510 iraq killed car rebel iraqi military suicide forces marines insurgents baghdad evacuation bomber strikes explosions Table 5: Word-level (M = 1, K = 1024) clusters on AG News. We take the majority cluster for each word for illustration purposes. Model 200 200 (re) 500 500 (re) EM-Local 81.4 82.1 83.0 82.8 EM-Global 85.6 84.6 85.5 85.4 Cat-Local-Alt 83.3 82.9 84.8 84.1 Cat-Global-Alt 86.4 83.1 87.1 85.0 Cat-Local 83.2 82.5 85.3 84.8 Cat-Global 85.4 82.8 86.1 84.5 VQ-Local-Alt 82.9 81.1 84.8 81.4 VQ-Global-Alt 84.7 79.6 85.9 82.9 VQ-Local 82.6 78.7 83.6 81.3 VQ-Global 83.0 76.8 85.4 82.0 Table 6: Effect of alternating optimization on AG News classification with 200 and 500 labels. The “(re)” affix denotes reembedding. Accuracies are on development set with column highs in bold. 8.3 Compression We briefly discuss in what sense discrete latent representations reduce storage requirements. Given a vocabulary of size 30,000, storing a T-length sentence requires T log2 30000 ≈14.9T bits. Our models require at most ML log2 K bits to represent a sentence, which is generally smaller, and especially so when using a global representation. It is also worth noting that storing a d-dimensional floating point representation of a sentence (as continuous latent variable approaches might) costs 32d bits, which is typically much larger. While the above holds for storage, the space required to classify a sentence represented as ML integers using a parametric classifier may not be smaller than that required for classifying a sentence represented as a d-dimensional floating point vector. On the other hand, nearest neighbor-based methods, which are experiencing renewed interest (Guu et al., 2018; Chen et al., 2019; Wiseman and Stratos, 2019), should be significantly less expensive in terms of time and memory when sentences are encoded as ML integers rather than d-dimensional floating point vectors. In the next subsection we quantitatively evaluate our discrete representations in a nearest neighbor-based retrieval setting. 4839 Discrete Embedding M=4, K=256 M=8, K=128 M=16, K=256 Hard EM 76.1 79.6 78.8 CatVAE 77.5 73.7 78.5 VQ-VAE 69.1 73.5 71.2 Continuous Embedding (300d) L2 COSINE GloVe 76.4 76.6 fastText 72.8 74.1 Table 7: Unsupervised document retrieval on AG News dataset, measured by average label precision of top 100 nearest neighbors of the development set. Underlined score is the row best. Discrete representations use Hamming distance. 8.4 Nearest Neighbor-Based Retrieval In the classification experiments of Section 5, we evaluated our discrete representations by training a small classifier on top of them. Here we evaluate our global discrete representations in a document retrieval task to directly assess their quality; we note that this evaluation does not rely on the learned code books, embeddings, or a classifier. In these experiments we use each document in the development set of the AG News corpus as a query to retrieve 100 nearest neighbors in the training corpus, as measured by Hamming distance. We use average label precision, the fraction of retrieved documents that have the same label as the query document, to evaluate the retrieved neighbors. We compare with baselines that use averaged 300d pretrained word vectors (corresponding to each token in the document) as a representation, where neighbors are retrieved based on cosine or L2 distance. We use GloVe with a 2.2 million vocabulary (Pennington et al., 2014) and fastText with a 2 million vocabulary (Mikolov et al., 2018). The results are in Table 7. We see that CatVAE and Hard EM outperform these CBOW baselines (while being significantly more space efficient), while VQ-VAE does not. These results are in line with those of Figure 2, where VQ-VAE struggles when its code book vectors cannot be used (i.e., when reembedding from scratch). In Figure 4 we additionally experiment with a slightly different setting: Rather than retrieving a fixed number of nearest neighbors for a query document, we retrieve all the documents within a neighborhood of Hamming distance ≤D, and calculate the average label precision. These results use global representations with M = 16, and we therefore examine thresholds of D ∈{0, . . . , 16}. We Figure 4: Retrieving document clusters with Hamming distance ≤D, for global models with M = 16 and K = 256. Query and target documents are from AG News’s development set and training set respectively. Dot size indicates the number of documents in a cluster. see that for CatVAE and Hard EM, the document similarity (or label precision) has an approximately linear correlation with Hamming distance. On the other hand, VQ-VAE shows a more surprising pattern, where high precision is not achieved until D = 10, perhaps suggesting that a large portion of the latent dimensions are redundant. 9 Conclusion We have presented experiments comparing the discrete representations learned by a Categorical VAE, a VQ-VAE, and Hard EM in terms of their ability to improve a low-resource text classification system, and to allow for nearest neighbor-based document retrieval. Our best classification models are able to outperform previous work, and this remains so even when we reembed discrete latents from scratch in the learned classifier. We find that amortized Hard EM is particularly effective in lowresource regimes when reembedding from scratch, and that VQ-VAE struggles in these settings. Acknowledgments This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-18-1-0166. References Maruan Al-Shedivat and Ankur Parikh. 2019. Consistency by agreement in zero-shot neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4840 pages 1184–1197, Minneapolis, Minnesota. Association for Computational Linguistics. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263– 311. Mingda Chen and Kevin Gimpel. 2018. Smaller text classifiers with discriminative cluster embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 739–745, New Orleans, Louisiana. Association for Computational Linguistics. Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019. A multi-task approach for disentangling syntax and semantics in sentence representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2453–2464. Carl De Marcken. 1995. Lexical heads, phrase structure and the induction of grammar. In Third Workshop on Very Large Corpora. Wei Dong, Qinliang Su, Dinghan Shen, and Changyou Chen. 2019. Document hashing with mixture-prior generative models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 5229–5238. Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith. 2019. Variational pretraining for semi-supervised text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5880–5894, Florence, Italy. Association for Computational Linguistics. Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437–450. Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational autoencoders. In Proceedings of International Conference on Learning Representations (ICLR). Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9:1735– 1780. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with Gumbel-Softmax. In Proceedings of International Conference on Learning Representations (ICLR). Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. 1999. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233. Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In Proceedings of the 35th International Conference on Machine Learning, pages 2390–2399, Stockholmsmssan, Stockholm Sweden. PMLR. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representations (ICLR). Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improving variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, pages 4743–4751. Diederik P. Kingma and Max Welling. 2014. AutoEncoding Variational Bayes. In Proceedings of International Conference on Learning Representations (ICLR). Bohan Li, Junxian He, Graham Neubig, Taylor BergKirkpatrick, and Yiming Yang. 2019. A surprisingly effective fix for deep latent variable modeling of text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3601– 3612. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The Concrete distribution: A continuous relaxation of discrete random variables. In Proceedings of International Conference on Learning Representations (ICLR). 4841 Chris J. Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Advances in Neural Information Processing Systems, pages 3086–3094. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. In Advances in Neural Information Processing Systems, pages 6306–6315. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Ali Razavi, Aaron van den Oord, and Oriol Vinyals. 2019. Generating diverse high-fidelity images with VQ-VAE-2. In Advances in Neural Information Processing Systems, pages 14866–14876. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, pages 1278–1286, Bejing, China. PMLR. Valentin I. Spitkovsky, Hiyan Alshawi, Daniel Jurafsky, and Christopher D. Manning. 2010. Viterbi training improves unsupervised dependency parsing. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 9–17, Uppsala, Sweden. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8. Sam Wiseman and Karl Stratos. 2019. Label-agnostic sequence labeling by copying nearest neighbors. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5363– 5369. Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In Proceedings of the 34th International Conference on Machine Learning, pages 3881–3890, Sydney, Australia. PMLR. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649–657. Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. 2018. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1098–1107, Melbourne, Australia. Association for Computational Linguistics. A Model Comparison by Datasets We plot the development set classification performance of each method, this time distinguishing between datasets, in Figure 5. 4842 200 500 2500 full Labeled Examples 78 80 82 84 86 88 Accuracy AG News HardEM CatVAE VQ-VAE global local reembed no reembed 200 500 2500 full Labeled Examples 84 86 88 90 92 94 96 98 Accuracy DBPedia HardEM CatVAE VQ-VAE global local reembed no reembed 200 500 2500 full Labeled Examples 40 42 44 46 48 50 52 54 56 Accuracy Yelp HardEM CatVAE VQ-VAE global local reembed no reembed Figure 5: The accuracies obtained by Hard EM, Categorical VAE, and VQ-VAE representations on the development datasets of AG News (top), DBPedia (middle), and Yelp Full (bottom), for different numbers of labeled training examples. Triangular and circular markers correspond to global and local models, respectively. Unshaded and shaded markers correspond to reembedding from scratch and using encoder embeddings, respectively.
2020
437
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4843–4858 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4843 Learning Constraints for Structured Prediction Using Rectifier Networks Xingyuan Pan, Maitrey Mehta, Vivek Srikumar School of Computing, University of Utah {xpan,maitrey,svivek}@cs.utah.edu Abstract Various natural language processing tasks are structured prediction problems where outputs are constructed with multiple interdependent decisions. Past work has shown that domain knowledge, framed as constraints over the output space, can help improve predictive accuracy. However, designing good constraints often relies on domain expertise. In this paper, we study the problem of learning such constraints. We frame the problem as that of training a two-layer rectifier network to identify valid structures or substructures, and show a construction for converting a trained network into a system of linear constraints over the inference variables. Our experiments on several NLP tasks show that the learned constraints can improve the prediction accuracy, especially when the number of training examples is small. 1 Introduction In many natural language processing (NLP) tasks, the outputs are structures which can take the form of sequences, trees, or in general, labeled graphs. Predicting such output structures (e.g. Smith, 2011) involves assigning values to multiple interdependent variables. Certain joint assignments may be prohibited by constraints designed by domain experts. As a simple example, in the problem of extracting entities and relations from text, a constraint could disallow the relation “married to” between two entities if one of the entity is not a “person”. It has been shown that carefully designed constraints can substantially improve model performance in various applications (e.g., Chang et al., 2012; Anzaroot et al., 2014), especially when the number of training examples is limited. Designing constraints often requires taskspecific manual effort. In this paper, we ask the question: can we use neural network methods to automatically discover constraints from data, and use them to predict structured outputs? We provide a general framework for discovering constraints in the form of a system of linear inequalities over the output variables in a problem. These constraints can improve an already trained model, or be integrated into the learning process for global training. A system of linear inequalities represents a bounded or unbounded convex polytope. We observe that such a system can be expressed as a twolayer threshold network, i.e., a network with one hidden layer of linear threshold units and an output layer with a single threshold unit. This two-layer threshold network will predict 1 or −1 depending on whether the system of linear inequalities is satisfied or not. In principle, we could try to train such a threshold network to discover constraints. However, the zero-gradient nature of the threshold activation function prohibits using backpropagation for gradient-based learning. Instead, in this paper, we show that a construction of a specific two-layer rectifier network represents linear inequality constraints. This network also contains a single linear threshold output unit, but in the hidden layer, it contains rectified linear units (ReLUs). Pan and Srikumar (2016) showed that a two-layer rectifier network constructed in such a way is equivalent to a threshold network, and represents the same set of linear inequalities as the threshold network with far fewer hidden units. The linear constraints thus obtained can augment existing models in multiple ways. For example, if a problem is formulated as an integer program (e.g., Roth and Yih, 2004, 2005; Riedel and Clarke, 2006; Martins et al., 2009), the learned constraints will become additional linear inequalities, which can be used directly. Alternatively, a structure can be constructed using graph search (e.g., Collins and Roark, 2004; Daum´e et al., 2009; Doppa et al., 2014; Chang et al., 2015; Wiseman and Rush, 4844 2016), in which case the learned constraints can filter available actions during search-node expansions. Other inference techniques that extend Lagrangian Relaxation (Komodakis et al., 2007; Rush et al., 2010; Martins et al., 2011) can also employ the learned constraints. Essentially, the learned constraints can be combined with various existing models and inference techniques and the framework proposed in this paper can be viewed as a general approach to improve structured prediction. We report experiments on three NLP tasks to verify the proposed idea. The first one is an entity and relation extraction task, in which we aim to label the entity candidates and identify relations between them. In this task, we show that the learned constraints can be used while training the model to improve prediction. We also show that the learned constraints in this domain can be interpreted in a way that is comparable to manually designed constraints. The second NLP task is to extract citation fields like authors, journals and date from a bibliography entry. We treat it as a sequence labeling problem and show that learned constraints can improve an existing first-order Markov model trained using a structured SVM method (Tsochantaridis et al., 2004). In the final experiment we consider chunking, i.e., shallow parsing, which is also a sequence labeling task. We train a BiLSTMCRF model (Huang et al., 2015) on the training set with different sizes, and we show that learned constraints are particularly helpful when the number of training examples is small. In summary, the contributions of this paper are: 1. We propose that rectifier networks can be used to represent and learn linear constraints for structured prediction problems. 2. In tasks such as entity and relation extraction, the learned constraints can exactly recover the manually designed constraints, and can be interpreted in a way similar to manually designed constraints. 3. When manually designed constraints are not available, we show via experiments that the learned constraints can improve the original model’s performance, especially when the original model is trained with a small dataset.1 1The scripts for replaying the experiments are available at https://github.com/utahnlp/learning-constraints 2 Representing Constraints In this section, we formally define structured prediction and constraints. In a structured prediction problem, we are given an input x belonging to the instance space, such as sentences or images. The goal is to predict an output y ∈Yx, where Yx is the set of possible output structures for the input x. The output y have a predefined structure (e.g., trees, or labeled graphs), and the number of candidate structures in Yx is usually large, i.e., exponential in the input size. Inference in such problems can be framed as an optimization problem with a linear objective function: y∗= argmax y∈Yx α · φ(x, y), (1) where φ(x, y) is a feature vector representation of the input-output pair (x, y) and α are learned parameters. The feature representation φ(x, y) can be designed by hand or learned using neural networks. The feasible set Yx is predefined and known for every x at both learning and inference stages. The goal of learning is to find the best parameters α (and, also perhaps the features φ if we are training a neural network) using training data, and the goal of inference is to solve the above argmax problem given parameters α. In this paper, we seek to learn additional constraints from training examples {(x, y)}. Suppose we want to learn K constraints, and the kth one is some Boolean function2: ck(x, y) = 1 if (x, y) satisfies the kth constraint, and ck(x, y) = −1 if it does not. Then, the optimal structure y∗is the solution to the following optimization problem: max y∈Yx α · φ(x, y), (2) subject to ∀k, ck(x, y) = 1. We will show that such learned constraints aid prediction performance. 2.1 Constraints as Linear Inequalities Boolean functions over inference variables may be expressed as linear inequalities over them (Roth and Yih, 2004). In this paper, we represent constraints as linear inequalities over some feature vector ψ(x, y) of a given input-output pair. The kth constraint ck is equivalent to the linear inequality wk · ψ(x, y) + bk ≥0, (3) 2We use 1 to indicate true and −1 to indicate false. 4845 whose weights wk and bias bk are learned. A Boolean constraint is, thus, a linear threshold function, ck(x, y) = sgn wk · ψ(x, y) + bk  . (4) Here, sgn(·) is the sign function: sgn(x) = 1 if x ≥0, and −1 otherwise. The feature representations ψ(x, y) should not be confused with the original features φ(x, y) used in the structured prediction model in Eq. (1) or (2). Hereafter, we refer to ψ(x, y) as constraint features. Constraint features should be general properties of inputs and outputs, since we want to learn domain-specific constraints over them. They are a design choice, and in our experiments, we will use common NLP features. In general, they could even be learned using a neural network. Given a constraint feature representation ψ(·), the goal is thus to learn the parameters wk’s and bk’s for every constraint. 2.2 Constraints as Threshold Networks For an input x, we say the output y is feasible if it satisfies constraints ck for all k = 1, . . . , K. We can define a Boolean variable z(x, y) indicating whether y is feasible with respect to the input x: z(x, y) = c1(x, y) ∧· · · ∧cK(x, y). That is, z is a conjunction of all the Boolean functions corresponding to each constraint. Since conjunctions are linearly separable, we can rewrite z(x, y) as a linear threshold function: z(x, y) = sgn  1 −K + K X k=1 ck(x, y)  . (5) It is easy to see that z(x, y) = 1 if, and only if, all ck’s are 1—precisely the definition of a conjunction. Finally, we can plug Eq. (4) into Eq. (5): z = sgn  1 −K + K X k=1 sgn wk · ψ(x, y) + bk  (6) Observe that Eq. (6) is exactly a two-layer threshold neural network: ψ(x, y) is the input to the network; the hidden layer contains K linear threshold units with parameters wk and bk; the output layer has a single linear threshold unit. This neural network will predict 1 if the structure y is feasible with respect to input x, and −1 if it is infeasible. In other words, constraints for structured prediction problems can be written as two-layer threshold networks. One possible way to learn constraints is thus to learn the hidden layer parameters wk and bk, with fixed output layer parameters. However, the neural network specified in Eq. (6) is not friendly to gradient-based learning; the sgn(·) function has zero gradients almost everywhere. To circumvent this, let us explore an alternative way of learning constraints using rectifier networks rather than threshold networks. 2.3 Constraints as Rectifier Networks We saw in the previous section that a system of linear inequalities can be represented as a two-layer threshold network. In this section, we will see a special rectifier network that is equivalent to a system of linear inequalities, and whose parameters can be learned using backpropagation. Denote the rectifier (ReLU) activation function as R(x) = max(0, x). Consider the following twolayer rectifier network: z = sgn  1 − K X k=1 R wk · ψ(x, y) + bk  (7) The input to the network is still ψ(x, y). There are K ReLUs in the hidden layer, and one threshold unit in the output layer. The decision boundary of this rectifier network is specified by a system of linear inequalities. In particular, we have the following theorem (Pan and Srikumar, 2016, Theorem 1): Theorem 1. Consider a two-layer rectifier network with K hidden ReLUs as in Eq. (7). Define the set [K] = {1, 2, . . . , K}. The network output z(x, y) = 1 if, and only if, for every subset S of [K], the following linear inequality holds: 1 − X k∈S wk · ψ(x, y) + bk  ≥0 (8) The proof of Theorem 1 is given in the supplementary material. To illustrate the idea, we show a simple example rectifier network, and convert it to a system of linear inequalities using the theorem. The rectifier network contains two hidden ReLUs (K = 2): z = sgn  1−R w1 ·ψ +b1  −R w2 ·ψ +b2  Our theorem says that z = 1 if and only if the following four inequalities hold simultaneously, one 4846 per subset of [K]:            1 ≥0 1 − w1 · ψ + b1  ≥0 1 − w2 · ψ + b2  ≥0 1 − w1 · ψ + b1  − w2 · ψ + b2  ≥0 The first inequality, 1 ≥0, corresponding to the empty subset of [K], trivially holds. The rest are just linear inequalities over ψ. In general, [K] has 2K subsets, and when S is the empty set, inequality (8) is trivially true. The rectifier network in Eq. (7) thus predicts y is a valid structure for x, if a system of 2K −1 linear inequalities are satisfied. It is worth mentioning that even though the 2K −1 linear inequalities are constructed from a power set of K elements, it does not make them dependent on each other. With general choice of wk and bk, these 2K −1 inequalities are linearly independent. This establishes the fact that a two-layer rectifier network of the form of Eq. (7) can represent a system of linear inequality constraints for a structured prediction problem via the constraint feature function ψ. 3 Learning Constraints In the previous section, we saw that both threshold and rectifier networks can represent a system of linear inequalities. We can either learn a threshold network (Eq. (6)) to obtain constraints as in (3), or we can learn a rectifier network (Eq. (7)) to obtain constraints as in (8). The latter offers two advantages. First, a rectifier network has non-trivial gradients, which facilitates gradient-based learning3. Second, since K ReLUs can represent 2K −1 constraints, the rectifier network can express constraints more compactly with fewer hidden units. We will train the parameters wk’s and bk’s of the rectifier network in the supervised setting. First, we need to obtain positive and negative training examples. We assume that we have training data for a structured prediction task. Positive examples can be directly obtained from the training data of the structured prediction prob3The output threshold unit in the rectifier network will not cause any trouble in practice, because it can be replaced by sigmoid function during training. Our theorem still follows, as long as we interpret z(x, y) = 1 as σ(x, y) ≥0.5 and z(x, y) = −1 as σ(x, y) < 0.5. We can still convert the rectifier network into a system of linear inequalities even if the output unit is the sigmoid unit. lem. For each training example (x, y), we can apply constraint feature extractors to obtain positive examples of the form (ψ(x, y), +1). Negative examples can be generated in several ways; we use simple but effective approaches. We can slightly perturb a structure y in a training example (x, y) to obtain a structure y′ that we assume to be invalid. Applying the constraint feature extractor to it gives a negative example (ψ(x, y′), −1). We also need to ensure that ψ(x, y′) is indeed different from any positive example. Another approach is to perturb the feature vector ψ(x, y) directly, instead of perturbing the structure y. In our experiments in the subsequent sections, we will use both methods to generate negative examples, with detailed descriptions in the supplementary material. Despite their simplicity, we observed performance improvements. Exploring more sophisticated methods for perturbing structures or features (e.g., using techniques explored by Smith and Eisner (2005), or using adversarial learning (Goodfellow et al., 2014)) is a future research direction. To verify whether constraints can be learned as described here, we performed a synthetic experiment where we randomly generate many integer linear program (ILP) instances with hidden shared constraints. The experiments show that constraints can indeed be recovered using only the solutions of the programs. Due to space constraints, details of this synthetic experiment are in the supplementary material. In the remainder of the paper we focus on three real NLP tasks. 4 Entity and Relation Extraction Experiments In the task of entity and relation extraction, we are given a sentence with entity candidates. We seek to determine the type of each candidate, as in the following example (the labels are underlined): [Organization Google LLC] is headquartered in [Location Mountain View, California]. We also want to determine directed relations between the entities. In the above example, the relation from “Google LLC” to “Mountain View, California” is OrgBasedIn, and the opposite direction is labeled NoRel, indicating there is no relation. This task requires predicting a directed 4847 graph and represents a typical structured prediction problem—we cannot make isolated entity and relation predictions. Dataset and baseline: We use the dataset from (Roth and Yih, 2004). It contains 1441 sentences with labeled entities and relations. There are three possible entity types: Person, Location and Organization, and five possible relations: Kill, LiveIn, WorkFor, LocatedAt and OrgBasedIn. Additionally, there is a special entity label NoEnt meaning a text span is not an entity, and a special relation label NoRel indicating that two spans are unrelated. We used 70% of the data for training and the remaining 30% for evaluation. We trained our baseline model using the integer linear program (ILP) formulation with the same set of features as in (Roth and Yih, 2004). The baseline system includes manually designed constraints from the original paper. An example of such a constraint is: if a relation label is WorkFor, the source entity must be labeled Person, and the target entity must be labeled Organization. For reference, the supplementary material lists the complete set of manually designed constraints. We use three kinds of constraint features: (i) source-relation indicator, which looks at a given relation label and the label of its source entity; (ii) relation-target indicator, which looks at a relation label and the label of its target entity; and (iii) relation-relation indicator, which looks at a pair of entities and focuses on the two relation label, one in each direction. The details of the constraint features, negative examples and hyper-parameters are in the supplementary material. 4.1 Experiments and Results We compared the performance of two ILP-based models, both trained in the presence of constraints with a structured SVM. One model was trained with manually designed constraints and the other used learned constraints. These models are compared in Table 1. We manually inspected the learned constraints and discovered that they exactly recover the designed constraints, in the sense that the feasible output space is exactly the same regardless of whether we use designed or learned constraints. As an additional confirmation, we observed that when a model is trained with designed constraints and tested with learned constraints, we get the same model perforPerformance Metric Designed Learned entity F-1 84.1% 83.1% relation F-1 41.5% 38.2% Table 1: Comparison of performance on the entity and relation extraction task, between two ILP models, one trained with designed constraints (Designed) and one with learned constraints (Learned). mance as when tested with designed constraints. Likewise, a model that is trained with learned constraints performs identically when tested with learned and designed constraints. Below, we give one example of a learned constraint, and illustrate how to interpret such a constraint. (The complete list of learned constraints is in the supplementary material.) A learned constraint using the source-relation indicator features is −1.98x1 + 3.53x2 −1.90x3 + 0.11x4 + 2.66x5 −2.84x6 −2.84x7 −2.84x8 + 2.58x9 + 0.43x10 + 0.32 ≥0 (9) where x1 through x10 are indicators for labels NoEnt, Person, Location, Organization, NoRel, Kill, LiveIn, WorkFor, LocatedAt, and OrgBasedIn, respectively. This constraint disallows a relation labeled as Kill having a source entity labeled as Location, because −1.90 −2.84 + 0.32 < 0. Therefore, the constraint “Location cannot Kill” is captured in (9). In fact, it is straightforward to verify that the inequality in (9) captures many more constraints such as “NoEnt cannot LiveIn”, “Location cannot LiveIn”, “Organization cannot WorkFor”, etc. A general method for interpreting learned constraints is a direction of future research. Note that the metric numbers in Table 1 based on learned constraints are lower than those based on designed constraints. Since the feasible space is the same for both kinds of constraints, the performance difference is due to the randomness of the ILP solver picking different solutions with the same objective value. Therefore, the entity and relation experiments in this section demonstrate that our approach can recover the designed constraints and provide a way of interpreting these constraints. 4848 5 Citation Field Extraction Experiments In the citation field extraction task, the input is a citation entry. The goal is to identify spans corresponding to fields such as author, title, etc. In the example below, the labels are underlined: [ Author A . M . Turing . ] [ Title Computing machinery and intelligence . ] [ Journal Mind , ] [Volume 59 , ] [ Pages 433-460 . ] [ Date October , 1950 . ] Chang et al. (2007) showed that hand-crafted constraints specific to this domain can vastly help models to correctly identify citation fields. We show that constraints learned from the training data can improve a trained model without the need for manual effort. Dataset and baseline. We use the dataset from Chang et al. (2007, 2012) whose training, development and test splits have 300, 100 and 100 examples, respectively. We train a first-order Markov model using structured SVM (Tsochantaridis et al., 2004) on the training set with the same raw text features as in the original work. Constraint features. We explore multiple simple constraint features ψ(x, y) in the citation field extraction experiments as shown in Table 2. Detailed descriptions of these features, including how to develop negative examples for each feature, and experiment settings are in the supplementary material. Feature Description Label existence Indicates which labels exist in a citation Label counts Counts the number of occurrences of a label Bigram labels Indicators for adjacent labels Trigram labels Indicators for 3 adjacent labels Part-of-speech Indicator for the part-ofspeech of a token Punctuation Indicator for whether a token is a punctuation Table 2: Constraint feature templates for the citation field extraction task 5.1 Experiments and Results For each constraint feature template, we trained a rectifier network with 10 ReLUs in the hidden layer. We then use Theorem 1 to convert the resulting network to a system of 210 −1, or 1023 linear inequalities. We used beam search with beam size 50 to combine the learned inequalities with the original sequence model to predict on the test set. States in the search space correspond to partial assignments to a prefix of the sequence. Each step we predict the label for the next token in the sequence. The pretrained sequence model (i.e., the baseline) ranks search nodes based on transition and emission scores, and the learned inequality prunes the search space accordingly4. Table 3 shows the token level accuracies of various methods. The results show that all versions of constrained search outperform the baselines, indicating that the learned constraints are effective in the citation field extraction task. Furthermore, different constraints learned with different features can be combined. We observe that combining different constraint features generally improves accuracy. It is worth pointing out that the label existence and label counts features are global in nature and cannot be directly used to train a sequence model. Even if some constraint features can be used in training the original model, it is still beneficial to learn constraints from them. For example, the bigram label feature is captured in the original first order model, but adding constraints learned from them still improves performance. As another test, we trained a model with POS features, which also contains punctuation information. This model achieves 91.8% accuracy. Adding constraints learned with POS improves the accuracy to 92.6%; adding constraints learned with punctuation features further improves it to 93.8%. We also observed that our method for learning constraints is robust to the choice of the number of hidden ReLUs. For example, for punctuation, learning using 5, 8 and 10 hidden ReLUs results an accuracy of 90.1%, 90.3%, and 90.2%, respectively. We observed similar behavior for other constraint features as well. Since the number of constraints learned is exponential in the number of hidden units, these results shows that learning redundant constraints will not hurt performance. 4Since the label-existence and label-counts features are global, pruning by learned inequalities is possible only at the last step of search. The other four features admit pruning at each step of the search process. 4849 Baselines Search with learned constraints Combine constraints Exact Search L.E. L.C. B.L. T.L. POS Punc. C1 C2 C3 86.2 87.3 88.0 87.7 87.9 88.1 89.8 90.2 88.6 90.1 90.6 Table 3: Token level accuracies (in percentage) of baseline models and constrained-search models, for the citation field extraction task. Exact is our trained first-order Markov model. It uses exact inference (dynamic programming) for prediction. Search is our search baseline, it uses the same model as Exact, but with beam search for inexact inference. L.E., L.C., B.L., T.L., POS, Punc. use search with different constraint features: label existence, label counts, bigram labels, trigram labels, part-of-speech, and punctuation features. C1 to C3 are search with combined constraints. C1 combines L.E. and T.L.. C2 combines L.E., T.L. and POS. Finally C3 combines all constraints. Note that carefully hand-crafted constraints may achieve higher accuracy than the learned ones. Chang et al. (2007) report an accuracy of 92.5% with constraints specifically designed for this domain. In contrast, our method for learning constraints uses general constraint features, and does not rely on domain knowledge. Therefore, our method is suited to tasks where little is known about the underlying domain. 6 Chunking Experiments Chunking is the task of clustering text into groups of syntactically correlated tokens or phrases. In the instance below, the phrase labels are underlined: [NP An A.P. Green official] [VP declined to comment] [PP on] [NP the filing] [O.] We treat the chunking problem as a sequence labeling problem by using the popular IOB tagging scheme. For each phrase label, the first token in the phrase is labeled with a “B-” prefixed to phrase label while the other tokens are labeled with an “I-” prefixed to the phrase label. Hence, [NP An A.P. Green official] is represented as [[B-NP An] [I-NP A.P.] [I-NP Green] [I-NP official]] This is done for all phrase labels except “O”. Dataset and Baselines. We use the CoNLL2000 dataset (Tjong Kim Sang and Buchholz, 2000) which contains 8936 training sentences and 2012 test sentences. For our experiments, we consider 8000 sentences out of 8936 training sentences as our training set and the remaining 936 sentences as our development set. Chunking is a well-studied problem and showing performance improvements on full training dataset is difficult. However, we use this task to illustrate the interplay of learned constraints with neural network models, and the impact of learned constraints in the low training data regime. We use the BiLSTM-CRF (Huang et al., 2015) for this sequence tagging task. We use GloVe for word embeddings. We do not use the BERT (Devlin et al., 2019) family of models since tokens are broken down into sub-words during pre-processing, which introduces modeling and evaluation choices that are orthogonal to our study of label dependencies. As with the citation task, all our constrained models use beam search, and we compare our results to both exact decoding and beam search baselines. We use two kinds of constraint features: (i) n-gram label existence, and (ii) n-gram part of speech. Details of the constraint features and construction of negative samples are given in the supplementary material. 6.1 Experiments and Results We train the rectifier network with 10 hidden units. The beam size of 10 was chosen for our experiments based on preliminary experiments. We report the average results on two different random seeds for learning each constraint. Note that the n-gram label existence is a global constraint while the n-gram POS constraint is a local constraint which checks for validity of label assignments at each token. In essence, the latter constraint reranks the beam at each step by ensuring that states that satisfy the constraint are preferred over states that violate the constraint. Since the n-gram label existence is a global constraint, we check the validity of the tag assignments only at the last token. In the case where none of the states in the beam satisfy the constraint, the original beams are used. The results for this set of experiments are presented in Table 4. We observe that the POS constraint improves the performance of the base4850 Constraint n Percentage of training data used 1% 5% 10% 25% 50% 100% Label existence 2 81.28 88.30 89.73 91.24 90.40 92.48 3 80.98 88.20 90.58 91.20 92.37 93.12 Part-of-speech 3 86.52 90.74 91.80 92.41 93.07 93.84 4 84.21 90.99 92.17 92.46 93.08 93.93 Search without constraints 81.29 88.27 90.62 91.33 92.51 93.44 Exact decoding 82.11 88.70 90.49 92.57 93.94 94.75 Table 4: Token level accuracies (in percentage) for the chunking baseline and constrained model. The results are shown on n-gram Label Existence and n-gram Part of Speech constraints with n = {2, 3} and n = {3, 4} respectively. The results are shown on {1%, 5%, 10%, 25%, 50%, 100%} of training data. Exact decoding with Viterbi algorithm and Search w/o constraint are baseline models which do not incorporate constraints during inference. line models significantly, outperforming the beam search baseline on all training ratios. More importantly, the results show sizable improvements in accuracy for smaller training ratios (e.g, 4.41% and 5.23% improvements on exact and search baselines respectively with 1% training data ). When the training ratios get bigger, we expect the models to learn these properties and hence the impact of the constraints decreases. These results (along with the experiments in the previous sections) indicate that our constraints can significantly boost performance in the low data regime. Another way to improve performance in low resource settings is to use better pretrained input representations. When we replaced GloVe embeddings with ELMo, we observed a 87.09% accuracy on 0.01 ratio of training data using exact decoding. However, this improvement comes at a cost: the number of parameters increases from 3M (190k trainable) to 94M (561k trainable). In contrast, our method instead introduces a smaller rectifier network with ≈1000 additional parameters while still producing similar improvements. In other words, using trained constraints is computationally more efficient. We observe that the label existence constraints, however, do not help. We conjecture that this may be due to one of the following three conditions: (i) The label existence constraint might not exist for the task; (ii) The constraint exists but the learner is not able to find it; (iii) The input representations are expressive enough to represent the constraints. Disentangling these three factors is a future research challenge. 7 Related Work Structured prediction is an active field in machine learning and has numerous applications, including various kinds of sequence labeling tasks, parsing (e.g., Martins et al., 2009), image segmentation (e.g., Lam et al., 2015), and information extraction (e.g., Anzaroot et al., 2014). The work of Roth and Yih (2004) introduced the idea of using explicitly stated constraints in an integer programming framework. That constraints and knowledge can improve models has been highlighted by several lines of work (e.g., Ganchev et al., 2010; Chang et al., 2012; Hu et al., 2016). The interplay between constraints and representations has been sharply highlighted by recent work on integrating neural networks with structured outputs (e.g., Rockt¨aschel and Riedel, 2017; Niculae et al., 2018; Manhaeve et al., 2018; Xu et al., 2018; Li and Srikumar, 2019; Li et al., 2019, and others). We expect that constraints learned as described in this work can be integrated into these formalisms, presenting an avenue for future research. While our paper focuses on learning explicit constraints directly from examples, it is also possible to use indirect supervision from these examples to learn a structural classifier (Chang et al., 2010), with an objective function penalizing invalid structures. Related to our goal of learning constraints is rule learning, as studied in various subfields of artificial intelligence. Quinlan (1986) describes the ID3 algorithm, which extracts rules as a decision tree. First order logic rules can be learned from examples using inductive logic programming (Muggleton and de Raedt, 1994; Lavrac and Dzeroski, 1994; 4851 Page and Srinivasan, 2003). Notable algorithms for inductive logic programming include FOIL (Quinlan, 1990) and Progol (Muggleton, 1995). Statistical relation learning addresses learning constraints with uncertainty (Friedman et al., 1999; Getoor and Mihalkova, 2001). Markov logic networks (Richardson and Domingos, 2006) combines probabilistic models with first order logic knowledge, whose weighted formulas are soft constraints and the weights can be learned from data. In contrast to these directions, in this paper, we exploit a novel representational result about rectifier networks to learn polytopes that represent constraints with off-the-shelf neural network tools. 8 Conclusions We presented a systematic way for discovering constraints as linear inequalities for structured prediction problems. The proposed approach is built upon a novel transformation from two layer rectifier networks to linear inequality constraints and does not rely on domain expertise for any specific problem. Instead, it only uses general constraint features as inputs to rectifier networks. Our approach is particularly suited to tasks where designing constraints manually is hard, and/or the number of training examples is small. The learned constraints can be used for structured prediction problems in two ways: (1) combining them with an existing model to improve prediction performance, or (2) incorporating them into the training process to train a better model. We demonstrated the effectiveness of our approach on three NLP tasks, each with different original models. Acknowledgments We thank members of the NLP group at the University of Utah, especially Jie Cao, for their valuable insights and suggestions; and reviewers for pointers to related works, corrections, and helpful comments. We also acknowledge the support of NSF Cyberlearning-1822877, SaTC-1801446 and gifts from Google and NVIDIA. References Sam Anzaroot, Alexandre Passos, David Belanger, and Andrew McCallum. 2014. Learning Soft Linear Constraints with Application to Citation Field Extraction. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daume, and John Langford. 2015. Learning to Search Better than Your Teacher. In Proceedings of The 32nd International Conference on Machine Learning. Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraint-driven learning. In Proceedings of the 45th annual meeting of the association of computational linguistics. Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2012. Structured Learning with Constrained Conditional Models. Machine Learning. Ming-wei Chang, Vivek Srikumar, Dan Goldwasser, and Dan Roth. 2010. Structured Output Learning with Indirect Supervision. Proceedings of the 27th International Conference on Machine Learning (ICML-10). Michael Collins and Brian Roark. 2004. Incremental Parsing with the Perceptron Algorithm. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. Hal Daum´e, John Langford, and Daniel Marcu. 2009. Search-based Structured Prediction. Machine Learning Journal (MLJ). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Janardhan Rao Doppa, Alan Fern, and Prasad Tadepalli. 2014. Structured Prediction via Output Space Search. The Journal of Machine Learning Research. Nir Friedman, Lise Getoor, Daphne Koller, and Avi Pfeffer. 1999. Learning Probabilistic Relational Models. Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence. Kuzman Ganchev, Joao Grac¸a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior Regularization for Structured Latent Variable Models. The Journal of Machine Learning Research. Lise Getoor and Lilyana Mihalkova. 2001. Learning Statistical Models from Relational Data. Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27. Gurobi Optimization LLC. 2019. Gurobi optimizer reference manual. 4852 Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016. Harnessing Deep Neural Networks with Logic Rules. In ”Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)”. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF Models for Sequence Tagging. arXiv preprint arXiv:1508.01991. Nikos Komodakis, Nikos Paragios, and Georgios Tziritas. 2007. MRF Optimization via Dual Decomposition: Message-passing Revisited. Proceedings of the IEEE International Conference on Computer Vision. Michael Lam, Janardhan Rao Doppa, Sinisa Todorovic, and Thomas G Dietterich. 2015. HC-Search for Structured Prediction in Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Nada Lavrac and Saso Dzeroski. 1994. Inductive Logic Programming: Techniques and Applications. Ellis Horwood. Tao Li, Vivek Gupta, Maitrey Mehta, and Vivek Srikumar. 2019. A Logic-Driven Framework for Consistency of Neural Models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Tao Li and Vivek Srikumar. 2019. Augmenting Neural Networks with First-order Logic. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. 2018. Deepproblog: Neural probabilistic logic programming. In Advances in Neural Information Processing Systems. Andr´e FT Martins, M´ario AT Figueiredo, Pedro MQ Aguiar, Noah A Smith, and Eric P Xing. 2011. An Augmented Lagrangian Approach to Constrained MAP Inference. In Proceedings of the 28th International Conference on International Conference on Machine Learning. Andr´e FT Martins, Noah A Smith, and Eric P Xing. 2009. Concise Integer Linear Programming Formulations for Dependency Parsing. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1. Stephen Muggleton. 1995. Inverse Entailment and Progol. New Generation Computing. Stephen Muggleton and Luc de Raedt. 1994. Inductive Logic Programming: Theory and Methods. The Journal of Logic Programming. Vlad Niculae, Andre Martins, Mathieu Blondel, and Claire Cardie. 2018. SparseMAP: Differentiable Sparse Structured Inference. In International Conference on Machine Learning. David Page and Ashwin Srinivasan. 2003. ILP: A Short Look Back and a Longer Look Forward. Journal of Machine Learning Research. Xingyuan Pan and Vivek Srikumar. 2016. Expressiveness of Rectifier Networks. In Proceedings of the 33rd International Conference on Machine Learning. J. R. Quinlan. 1986. Induction of Decision Trees. Machine Learning. J. R. Quinlan. 1990. Learning Logical Definitions from Relations. Machine Learning. Matthew Richardson and Pedro Domingos. 2006. Markov Logic Networks. Machine Learning. Sebastian Riedel and James Clarke. 2006. Incremental Integer Linear Programming for Non-projective Dependency Parsing. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. Tim Rockt¨aschel and Sebastian Riedel. 2017. End-toend differentiable proving. In Advances in Neural Information Processing Systems. Dan Roth and Wen-tau Yih. 2004. A Linear Programming Formulation for Global Inference in Natural Language Tasks. Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004. Dan Roth and Wen-tau Yih. 2005. Integer Linear Programming Inference for Conditional Random Fields. Proceedings of the 22nd International Conference on Machine Learning. Alexander M Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Noah A. Smith. 2011. Linguistic Structure Prediction. Morgan & Claypool Publishers. Noah A Smith and Jason Eisner. 2005. Contrastive Estimation: Training Log-Linear Models on Unlabeled Data. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task chunking. In Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop. 4853 Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. Proceedings of the Twenty-First International Conference on Machine Learning. Sam Wiseman and Alexander M Rush. 2016. Sequence-to-Sequence Learning as Beam-Search Optimization. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Van den Broeck. 2018. A Semantic Loss Function for Deep Learning with Symbolic Knowledge. In Proceedings of the 35th International Conference on Machine Learning. A Proof of Theorem 1 In this section we prove Theorem 1. The theorem and the relevant definitions are repeated here for convenience. Define the rectifier (ReLU) activation function as R(x) = max(0, x). Consider the following twolayer rectifier network: z(x, y) = sgn  1 − K X k=1 R wk · ψ(x, y) + bk  (10) The input to the network is still ψ(x, y). There are K ReLUs in the hidden layer, and one threshold unit in the output layer. The decision boundary of this rectifier network is specified by a system of linear inequalities. In particular, we have the following theorem: Theorem 2. Consider a two-layer rectifier network with K hidden ReLUs as in Eq. (10). Define the set [K] = {1, 2, . . . , K}. The network outputs z(x, y) = 1 if, and only if, for every subset S of [K], the following linear inequality holds: 1 − X k∈S wk · ψ(x, y) + bk  ≥0 Proof. Define ak = wk · ψ(x, y) + bk. We first prove the “if” part of the theorem. Suppose that for any S ⊆[K], 1 −P k∈S ak ≥0. Thus for a specific subset S∗= {k ∈[K] : ak ≥0}, we have 1 −P k∈S∗ak ≥0. By the definition of S∗, PK k=1 R(ak) = P k∈S∗ak, therefore 1 − PK k=1 R(ak) ≥0. Next we prove the “only if” part of the theorem. Suppose that 1 −PK k=1 R(ak) ≥0. For any S ⊆ [K], we have PK k=1 R(ak) ≥P k∈S R(ak) ≥ P k∈S ak. Therefore, for any S ⊆[K], 1 − P k∈S ak ≥0. B Synthetic Integer Linear Programming Experiments We first check if constraints are learnable, and whether learned constraints help a downstream task with a synthetic experiment. Consider framing structure prediction as an integer linear program (ILP): min z∈{0,1}n X i ci · zi, subject to X i Akizi ≥bk, k ∈[m] (11) The objective coefficient ci denotes the cost of setting the variable zi to 1 and the goal of prediction is to find a cost minimizing variable assignment subject to m linear constraints in (11). We randomly generate a hundred 50-dimensional ILP instances, all of which share a fixed set of random constraints. Each instance is thus defined by its objective coefficients. We reserve 30% of instances as test data. The goal is to learn the shared linear constraints in Eq. (11) from the training set. We use the Gurobi Optimizer (Gurobi Optimization LLC, 2019) to solve all the ILP instances to obtain pairs {(c, z)}, where c is the vector of objective coefficients and z is the optimal solution. Each z in this set is feasible, giving us positive examples (z, +1) for the constraint learning task. Negative examples are generated as follows: Given a positive pair (c, z) described above, if the ith coefficient ci > 0 and the corresponding decision zi = 1, construct z′ from z by flipping the ith bit in z from 1 to 0. Such a z′ is a negative example for the constraint learning task because z′ has a lower objective value than z. Therefore, it violates at least one of the constraints in Eq. (11). Similarly, if ci < 0 and zi = 0, we can flip the ith bit from 0 to 1. We perform the above steps for every coefficient of every example in the training set to generate a set of negative examples {(z′, −1)}. We trained a rectifier network on these examples and converted the resulting parameters into a system of linear inequalities using Theorem 2. The hyper-parameters and design choices are summarized in the supplementary material. We used the learned inequalities to replace the original constraints to obtain predicted solutions. We evaluated these predicted solutions against the oracle solutions (i.e., based on the original constraints). We 4854 also computed a baseline solution for each test example by minimizing an unconstrained objective. Table 5 lists four measures of the effectiveness of learned constraints. First, we want to know whether the learned rectifier network can correctly predict the synthetically generated positive and negative examples. The binary classification accuracies are listed in the first row. The second row lists the bitwise accuracies of the predicted solutions based on learned constraints, compared with the gold solutions. We see that the accuracy values of the solutions based on learned constraints are in the range from 80.2–83.5%. As a comparison, without using any constraints, the accuracy of the baseline is 56.8%. Therefore the learned constraints can substantially improve the prediction accuracy in the down stream inference tasks. The third row lists the percentage of the predicted solutions satisfying the original constraints. Solutions based on learned constraints satisfy 69.8–74.4% of the original constraints. In contrast, the baseline solutions satisfy 55.3% of the original constraints. The last row lists the percentage of the gold solutions satisfying the learned constraints. We see that the gold solutions almost always satisfy the learned constraints. The hyper-parameter and other design choices for the synthetic ILP experiments are listed in Table 6. C Entity and relation extraction experiments C.1 Designed constraints Table 7 lists the designed constraints used in the entity and relation extraction experiments. There are fifteen constraints, three for each relation type. For example, the last row in Table 7 means that the relation OrgBasedIn must have an Organization as its source entity and a Location as its target entity, and the relation in the opposite direction must be NoRel. C.2 Constraint features We use the same example as in the main paper to illustrate the constraint features used in the entity and relation extraction experiments: [Organization Google LLC] is headquartered in [Location Mountain View, California, USA]. In the above example, the relation from “Google LLC” to “Mountain View, California, USA” is OrgBasedIn, and the relation in the opposite direction is labeled NoRel, indicating there is no relation from “Mountain View, California, USA” to “Google LLC”. We used three constraint features for this task, explained as follows. Source-relation indicator This feature looks at a given relation label and the label of its source entity. It is an indicator pair (source label, relation label). Our example sentence will contribute the following two feature vectors, (Organization, OrgBasedIn) and (Location, NoRel), both corresponding to postive examples. The negative examples contains all possible pairs of (source label, relation label), which do not appear in the positive example set. Relation-target indicator This feature looks at a given relation label the label of its target entity. It is an indicator pair (relation label, target label). Our example sentence will contribute the following two feature vectors, (OrgBasedIn, Location) and (NoRel,Organization), both corresponding to positive examples. The negative examples contains all possible pairs of (relation label, target label), which do not appear in the positive example set. Relation-relation indicator This feature looks at a pair of entities and focuses on the two relation labels between them, one in each direction. Therefore our running example will give us two positive examples with features (OrgBasedIn, NoRel) and (NoRel,OrgBasedIn). The negative examples contain any pair of relation labels that is not seen in the positive example set. C.3 Hyper-parameters and design choices The hyper-parameter and design choices for the experiments are in Table 8. Note that different runs of the SVM learner with the learned or designed constraints may give different results from those on Table 1. This is due to non-determinism introduced by hardware and different versions of the Gurobi solver picking different solutions that have the same objective value. In the results in Table 1, we show the results where the training with learned constraints seem to underperform the model that is trained with designed constraints. In other runs on different hardware, we found the opposite ordering of the results. 4855 Number of ReLUs 2 3 4 5 6 7 8 9 10 binary classification acc. (%) 85.1 87.3 92.1 90.3 95.0 94.3 94.1 97.7 98.0 bitwise solution acc. (%) 81.1 80.9 81.9 80.2 81.0 82.3 81.1 83.2 83.5 original constr. satisfied (%) 70.3 69.8 72.7 70.4 70.1 71.1 71.4 74.4 74.3 learned constr. satisfied (%) 95.6 98.6 98.7 99.1 97.4 98.9 99.9 99.1 99.4 Table 5: Effectiveness of learned constraints for the synthetic ILP experiments. Description Value Total number of examples 100 Number of training examples 70 Number of test examples 30 Dimensionality 50 Range of hidden ReLU units considered for experiments 2-10 Learning rates for cross-validation while learning rectifier networks {0.001, 0.01, 0.1} Learning rate decay for cross-validation {0.0, 10−7, 10−6} Optimizer parameters for learning β1 = 0.9, β2 = 0.999, ϵ = 10−7 Number of training epochs 1000 Table 6: Parameters used in the synthetic ILP experiments Antecedents Consequents If the relation is Source must be Target must be Reversed relation must be Kill Person Person NoRel LiveIn Person Location NoRel WorkFor Person Organization NoRel LocatedAt Location Location NoRel OrgBasedIn Organization Location NoRel Table 7: Designed constraints used in the entity and relation extraction experiments Description Value Structured SVM trade-off parameter for the base model 2−6 Number of hidden ReLU units –for source-relation 2 –for relation-target 2 –for relation-relation 1 Learning rates for cross-validation while learning rectifier networks {0.001, 0.01, 0.1} Learning rate decay for cross-validation {0.0, 10−7, 10−6} Optimizer parameters for learning β1 = 0.9, β2 = 0.999, ϵ = 10−7 Table 8: Parameters used in the entity and relation extraction experiments 4856 C.4 Learned Constraints We see in the main paper that 2K −1 linear inequality constraints are learned using a rectifier network with K hidden units. In the entity and relation extraction experiments, we use two hidden units to learn three constraints from the source-relation indicator features. The three learned constraints are listed in Table 9. A given pair of source label and relation label satisfies the constraint if the sum of the corresponding coefficients and the bias term is greater than or equal to zero. For example, the constraint from the first row in Table 9 disallows the pair (Location, Kill), because −1.90 −2.84 + 0.32 < 0. Therefore, the learned constraint would not allow the source entity of a Kill relation to be a Location, which agrees with the designed constraints. We enumerated all possible pairs of source label and relation label and found that the learned constraints always agree with the designed constraints in the following sense: whenever a pair of source label and relation label satisfies the designed constraints, it also satisfies all three learned constraints, and whenever a pair of source label and relation label is disallowed by the designed constraints, it violates at least one of the learned constraints. Therefore, our method of constraint learning exactly recovers the designed constraints. We also use two hidden units to learn three constraints from the relation-target indicator features, and one hidden unit to learn one constraint from the relation-relation indicator features. The learned constraints are listed in Table 11 and Table 10. Again we verify that the learned constraints exactly recover the designed constraints in all cases. D Citation field extraction experiments D.1 Constraint Features We use the same example as in the main paper to illustrate the constraint features used in the citation field extraction experiments: [ Author A . M . Turing . ] [ Title Computing machinery and intelligence . ] [ Journal Mind , ] [Volume 59 , ] [ Pages 433-460 . ] [ Date October , 1950 . ] We explore multiple simple constraint features ψ(x, y) as described below. Label existence This features indicates which labels exist in a citation entry. In our above example, there are six labels. Suppose there are nl possible labels. The above example is a positive example, the feature vector of which is an nl-dimensional binary vector. Exactly six elements, corresponding to the six labels in the example, have the value 1 and all others have the value 0. To obtain the negative examples, we iterate through every positive example and flip one bit of its feature vector. If the resulting vector is not seen in the positive set it will be a negative example. Label counts Label-count features are similar to Label-existence features. Instead of indicating whether a label exists using 1 or 0, label-count features records the number of times each label appears in the citation entry. The positive examples can be generated naturally from the training set. To generate negative examples, we perturb the actual labels of a positive example, as opposed to its feature vector. We then extract the label counts feature from the perturbed example, and treat it as negative if it has not seen before in the positive set. Bigram labels This feature considers each pair of adjacent labels in the text. From left to right, the above example will give us feature vectors like (Author, Author), (Author, Title), (Title, Title), ..., (Date, Date). We then use one-hot encoding to represent these features, which is the input vector to the rectifier network. All these feature vectors are labeled as positve (+1) by the rectifier network, since they are generated from the training set. To generate negative examples for bigram-label features, we generate all positive examples from the training set, then enumerate all possible pair of labels and select those that were not seen in the positive examples. Trigram labels This feature is similar to the bigram labels. From the training set, we generate positive examples, e.g., (Author, Author, Author), (Author, Author, Title) etc, and convert them into one-hot encodings. For negative examples, we enumerate all possible trigram labels, and select those trigrams as negative if two conditions are met: (a) the trigram is not seen in the positive set; and (b) a bigram contained in it is seen in the training set. The intuition is that we want negative examples to be almost feasible. 4857 Source Labels Relation Labels NoEnt Per. Loc. Org. NoRel Kill Live Work Located Based Bias -1.98 3.53 -1.90 0.11 2.66 -2.84 -2.84 -2.84 2.58 0.43 0.32 -1.61 -1.48 3.50 0.92 1.15 1.02 1.02 1.02 -3.96 -1.38 1.46 -3.59 2.04 1.60 1.03 3.81 -1.82 -1.82 -1.82 -1.38 -0.95 0.78 Table 9: Linear constraint coefficients learned from the source-relation indicator features Forward Relation Labels Backward Relation Labels Bias 4.95 -1.65 -1.65 -1.65 -1.65 -1.65 5.06 -1.53 -1.53 -1.53 -1.53 -1.53 -2.41 Table 10: Linear constraint coefficients learned from the relation-relation indicator features. The order of the relation labels is: NoRel, Kill, LiveIn, WorkFor, LocatedAt, and OrgBasedIn Part-of-speech For a fixed window size, we extract part-of-speech tags and the corresponding labels, and use the combination as our constraint features. For example, with window size two, we get indicators for (tagi−1, tagi, labeli−1, labeli) for the ith token in the sentence, where tag and label refer to part-of-speech tag and citation field label respectively. For negative examples, we enumerate all four-tuples as above, and select it as negative if the four-tuple is not seen in the positive set, but both (tagi−1, tagi) and (labeli−1, labeli) are seen in the training set. Punctuation The punctuation feature is similar to the part-of-speech feature. Instead of the POS tag, we use an indicator for whether the current token is a punctuation. D.2 Hyper-parameters and design choices The hyper-parameter and design choices for the experiments are in the Table 12. E Chunking Experiments E.1 Constraint Features The two constraints which we discussed in the main paper for the chunking dataset are described below. N-gram label existence This constraint is a general form of the label existence constraint mentioned in Section D.1. In fact, it is the n-gram label existence constraint with n=1. The n-gram label existence constraint represents the labels of a sequence as a binary vector. Each feature of this binary vector corresponds to an n-gram label combination. Hence, the length of this constraint feature will be | l |n where | l | is the total number of distinct labels. This means the vector size of this constraint grows exponentially with increasing n. The binary vector indicates a value of 1 for all the n-gram label features present in the sequence tags. The positive examples are hence formed from the training set sequences. For the negative examples, we iterate through each positive example and flip a bit. The resulting vector is incorporated as a negative example if it doesn’t occur in the training set. N-gram part of speech (POS) This constraint is a general form of the part of speech constraint mentioned in Section D.1. POS tags of a token are converted to a indicator vector. We concatenate the indicator vectors of each gram in an n-gram in order and this vector is further concatenated with indicators of labels of each of these grams. Hence, for n=2, we get the constraint vector as (tagi−1, tagi, labeli−1, labeli) where tagi and labeli are indicators for POS tags and labels respectively for the ith token. The positive examples enumerate vectors for all existing n-grams in the training sequences. The negative examples are creating by changing a label indicator in the constraint feature. The label to be perturbed and the perturbation both are chosen at random. The constraint vector hence formed is incorporated as a negative example if it doesn’t occur in the set of positive examples. E.2 Hyper-parameters and design choices The hyper-parameter and design choices are summarized in Table 13. 4858 Relation Labels Target Labels NoRel Kill Live Work Located Based NoEnt Per. Loc. Org. Bias 2.68 -3.17 -0.55 2.68 -0.55 -0.55 -1.58 3.15 0.53 -2.70 1.02 2.72 2.42 -1.39 -2.55 -1.39 -1.39 -2.51 -2.27 1.54 2.31 0.85 5.40 -0.74 -1.94 0.13 -1.94 -1.94 -4.10 0.88 2.08 -0.39 0.86 Table 11: Linear constraint coefficients learned from the relation-target indicator features Description Value Structured SVM trade-off parameter for the base model unregularized Beam size 50 Number of hidden ReLU units for experiments 10 Learning rates for cross-validation while learning rectifier networks {0.001, 0.01, 0.1} Learning rate decay for cross-validation {0.0, 10−7, 10−6} Optimizer parameters for learning β1 = 0.9, β2 = 0.999, ϵ = 10−7 Table 12: Parameters used in the citation field extraction experiments Description Value Constraint Rectifier Network Range of hidden ReLU units considered for experiments {5, 10} Learning rates for development while learning rectifier networks {0.001, 0.005, 0.01, 10−4} Number of training epochs 1000 Random Seeds {1, 2} BiLSTM CRF Model Learning rate for development while learning baseline model {0.01, 0.05, 0.001, 0.005} Learning Rate Decay {10−5, 10−6} Beam Size 10 Number of training epochs 300 Table 13: Parameters used in the chunking experiments
2020
438
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4859–4870 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4859 Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models Dan Iter*∗, Kelvin Guu**, Larry Lansing**, and Dan Jurafsky* *Computer Science Department, Stanford University **Google Research {daniter,jurafsky}@stanford.edu {kguu,llansing}@google.com Abstract Recent models for unsupervised representation learning of text have employed a number of techniques to improve contextual word representations but have put little focus on discourse-level representations. We propose CONPONO1, an inter-sentence objective for pretraining language models that models discourse coherence and the distance between sentences. Given an anchor sentence, our model is trained to predict the text k sentences away using a sampled-softmax objective where the candidates consist of neighboring sentences and sentences randomly sampled from the corpus. On the discourse representation benchmark DiscoEval, our model improves over the previous state-of-the-art by up to 13% and on average 4% absolute across 7 tasks. Our model is the same size as BERTBase, but outperforms the much larger BERTLarge model and other more recent approaches that incorporate discourse. We also show that CONPONO yields gains of 2%-6% absolute even for tasks that do not explicitly evaluate discourse: textual entailment (RTE), common sense reasoning (COPA) and reading comprehension (ReCoRD). 1 Introduction Pretraining large language models has become the primary method for learning representations from unsupervised text corpora. Since the initial improvements demonstrated by ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), many alternative pretraining methods have been proposed to best leverage unlabeled data. These methods include bi-directional language modeling (Peters et al., 2018), masked language models (Devlin et al., 2019), word order permutation (Yang et al., ∗Work done during internship at Google. 1Code is available at https://github.com/googleresearch/language/tree/master/language/conpono and https://github.com/daniter-cu/DiscoEval 2019), more robust training (Liu et al., 2019) and more efficient architectures (Lan et al., 2019). However, little focus has been put on learning discourse coherence as part of the pretraining objective. While discourse coherence has been of great interest in recent natural language processing literature (Chen et al., 2019; Nie et al., 2019; Xu et al., 2019), its benefits have been questioned for pretrained language models, some even opting to remove any sentence ordering objective (Liu et al., 2019). However, in a recently published benchmark for evaluating discourse representations, Chen et al. (2019) found that the best performing model was surprisingly BERT, despite comparing against models specifically designed for discourse, such as DisSent (Nie et al., 2019) and a new recurrent network trained on a large range of sentence ordering objectives. We show that combining transformer encoders with our intersentence coherence objective, we can further improve discourse-level representations in language models. We present a model that trains a sentence-level encoder to capture discourse relationships between sentences, including ordering, distance and coherence. The encoder is trained by using its output to predict spans of text that are some k sentences away from a context in either direction. The predictions are made discriminatively with a sampled-softmax that contrasts the correct target sentence against negatives, including hard examples sampled from the same paragraph. Our objective is inspired by the recently proposed Constrastive Predictive Coding (CPC) (van den Oord et al., 2018), but, among other differences, is applied on the sentence-level rather than the token-level and is bi-directional. We call this the CONtrastive Position and Ordering with Negatives Objective (CONPONO)2. 2Also means arrange or order in Latin. 4860 We evaluate our model on DiscoEval (Chen et al., 2019), a recently published benchmark for evaluating and probing for various aspects of discourselevel semantics in representations output by discourse models. We observe that the representations learned with CONPONO outperform BERT-Large and achieve a new state-of-the-art despite using fewer parameters and training on the same data. Furthermore, we show that our new objective improves model performance on other tasks including textual entailment, common-sense reasoning and reading comprehension. We compare CONPONO against BERT-Base on RTE (Giampiccolo et al., 2007; Bentivogli et al., 2009), COPA (Roemmele et al., 2011) and ReCoRD (Zhang et al., 2018), while controlling for model size, training data and training time. Our main contributions are: 1. We describe a novel sentence-level discourse objective that is used in conjunction with a masked language model for unsupervised representation learning for text. We show that this objective can leverage the cross-attention and pretrained weights of a transformer model to learn discourse-level representations. 2. We show that our model achieves a new stateof-the-art on DiscoEval, improving the results on 5 of the 7 tasks and increasing accuracy by up to 13% and an average of over 4% absolute across all tasks. We also show 2%6% absolute improvements over Bert-Base on RTE, COPA and ReCoRD as evidence that discourse pretraining can also improve model performance on textual entailment, commonsense reasoning and reading comprehension. 2 Model Figure 1 illustrates the CONPONO model. The intuition is that if the model is able to accurately predict the surrounding target sentences given some anchor text, then the vector representations for these sentences should also be useful for downstream tasks. The input to the model is a paragraph that is split into sentences. A sentence is chosen at random as the anchor, and will be denoted as si. We encode si with a transformer encoder to produce a vector ci. The surrounding sentences are denoted as si+k where k ∈[−K .. −1, 1 .. K], meaning the maximum distance we use is K. We report results for K ∈[1..4]. These sentences, si+k, are encoded jointly with the anchor sentence. We use just a single encoder gθ so all text is encoded with the same weights. The encoded vectors are named ti+k because these are the target vectors the model tries to identify given the anchor and a target distance k. Equation 1 defines ti+k and ci as a function gθ of the input sentences. Note that the CONPONO gθ is different from the encoder in CPC because we input both the anchor and the target into the encoder, rather than separate anchor and target encoders. ti+k = gθ(si, si+k), ci = gθ(si) (1) Given the anchor and targets, we define a logbilinear model in equation 2 to score the plausibility of target ti+k being in position k from anchor ci. The full set of parameters for our model is θ for the encoder and a Wk for each k. CPC has the same bi-linear form as Equation 2 but the architecture for the encoders is different. fk(si+k, si) = exp(tT i+kWkci) (2) The loss for each k is given in equation 3 where the score for the correct target is contrasted to scores of random samples sj, sampled from both indocument and random sentences from the corpus, S. Lk = −ES  log fk(si+k, si) Σsj∈S fk(sj, si)  (3) To train CONPONO, we sample negative examples randomly from the corpus and from the same paragraph but different k as hard negatives. Note that when |k| is greater than 1, there will be sentences between the anchor sentence and target sentence that will be purposely omitted from the input. The missing context is intended to create a challenging objective where the model may not be able to rely on trivial signals that often appear in contiguous sentences. 2.1 Encoder Architectures For each example we encode two text spans, the anchor and the target. There are three main options for encoding the two spans into ci and ti+k. The simplest method, and most similar to CPC is to encode the anchor and target separately, which we call isolated encoding. With this encoder, equation 1 will be ti+k = gθ(si+k). The major drawback of this approach is that there is no token-level crossattention between the anchor and the target, which has been shown to generally improve text encoding 4861 Si-2 Si-1 Si+1 Si+2 Si Encoder Encoder Encoder Encoder Encoder ti-2 ti-1 ti+1 ti+2 ci Predictions Sr Sr’ Encoder Encoder tr tr’ Random Negatives Figure 1: During training, a text segment is selected as the anchor (Si). The anchor as well as all the targets, Si−2...Si+2 plus random samples Sr are encoded with the transformer masked language model. The encoded representation of the anchor is used to predict each target at its target distance. The Si objects are raw text sentences, the encoder is the transformer model, and ci and ti are vectors. (Vaswani et al., 2017). Cross-attention is the mechanism in neural networks that allows for attention to be shared between multiple inputs, in our case, two separate spans of text. Alternatively, we can encode the anchor and target together and then dot product the latent vector with a learned vector representation for each distance k. We call this approach a uni-encoder. With this encoder, equation 2 will be fk(si+k, si) = exp(tT i+kwk). The class matrix Wk in equation 2 is replaced by a class vector wk, which has fewer parameters. This is similar to the ordering objectives in BERT and ALBERT where the pooled representation is used for a binary classification task and the learned vector representation for each distance k is just the softmax weights. The potential drawback to this method is that each pair of sentences is represented by a single vector. This encoder may learn a representation that is similar for all examples that have the same label but does not explicitly model the content of the input. CONPONO implements the intersection of these two approaches. The targets are concatenated to the anchor when encoded, to make use of the crossattention of the transformer encoder. The anchor, is encoded independently, though with the same weights. This objective allows for more freedom in the values of ci and ti+k, unlike the uni-encoder. Furthermore, since the encoder, gθ, can encode either one span (si) or two spans (si, si+k), it can be used for downstream tasks that have either single (eg. SSP) or double (eg. BSO) span inputs. 2.2 Comparing Inter-Sentence Modeling Objectives There are different tasks that can be used for learning inter-sentence representations. BERT (Devlin et al., 2019) included a next sentence prediction (NSP) task. For NSP, two spans are fed into the model with the second span either being the next contiguous span of text from the source or 50% of the time it is replaced with a random span from the corpus. The task is a binary classification of whether the two spans are from the same source. ALBERT (Lan et al., 2019) compares the NSP approach to using no inter-sentence objective and to sentence order prediction, which for clarity we refer to as binary sentence ordering (BSO). For BSO, the input is two spans that are always contiguous and from the same source but 50% of the time are in reverse order. With CONPONO we capture the benefits of both learning ordering between coherent sentences and contrasting against random negatives. We make the objective even more challenging by also predicting order on spans that are multiple sentences apart, and using other sentences from the same paragraph as harder negatives. 2.3 Technical details In practice, we use a 512 token input which is much larger than most two sentence pairs. To train on longer sequence lengths, we use 4 sentences as the anchor and 3 sentences as the target segment. We truncate longer sentences and pad tokens up to the sequence length as done for typical BERT input. There is no overlap between the two segments and 4862 the k distance refers to the number of sentences omitted between the two segments. For example, for a paragraph we may choose s7..s10 as the anchor and s1..s3 as the target for k = −4 because s3 is 4 positions behind s7. Since most paragraphs are not long enough to have many sentences in both directions of a 4 sentence anchor, we randomly select 4 of the 8 possible k targets for a given paragraph. Because of the random sampling, we oversample shorter distances because they occur more consistently in the data. We train with 32 input sentences, where 1 is the correct target, 3 are hard negatives from the same document and 28 are random sentences from other documents. For fair comparison, we train on the same data as BERT, using only Wikipedia and BooksCorpus (Zhu et al., 2015). We initialize our model with BERT-Base weights and train until the model has seen one-fourth as many segment pairs as the original BERT model ( 32M total), so the total compute and iterations of training are not significantly greater than BERT-Base. We also use a masked language model objective similar to BERT but dynamically mask during training for different masks each epoch. When jointly encoding two inputs, we concatenate the input tokens and separate the two spans with a “[SEP]” token to mimic the BERT format. 3 Evaluation We evaluate our model on the DiscoEval benchmark (Chen et al., 2019) and on the RTE (Giampiccolo et al., 2007; Bentivogli et al., 2009), COPA (Roemmele et al., 2011) and ReCoRD (Zhang et al., 2018) datasets. We chose the DiscoEval benchmark because it is intended to evaluate a model’s ability to represent the “role of a sentence in its discourse context”. We also report results on RTE, COPA and ReCoRD because these tasks have a discourse or sentence ordering aspect to them but are not exclusively designed for discourse evaluation. 3.1 Discourse Evaluation Tasks: DiscoEval (Chen et al., 2019) is a suite of tasks “designed to evaluate discourse-related knowledge in pretrained sentence representations”. The benchmark is composed of seven tasks; four based on sentence ordering or coherence (Sentence position (SP), Binary sentence ordering (BSO), Discource coherence (DC) and Sentence section prediction (SSP)) and three that are based on classifying the type of relationship between a pair of text sequences (Penn Discourse Tree Bank Explicit and Implicit (PDTB-E/I) and Rhetorical structure theory (RST)). PDTB (Prasad et al., 2008) and RST (Carlson et al., 2001) are human annotated datasets. Both are multi-class classification tasks where PDTB is classifying a pair of sentences whereas RST is predicting the class of a node in a document-level discourse tree. Both classes of tasks are critical aspects of understanding discourse. Baselines: The previously best overall performing model from DiscoEval (Chen et al., 2019) was BERT-Large (Devlin et al., 2019). We also include the results for BERT-Base because our model is most comparable to BERT-Base in terms of parameter size, training data and training compute. We also evaluate RoBERTa-Base (Liu et al., 2019) because it was trained on more data, reported improvements over BERT-Base on other tasks but dropped the next sentence prediction objective entirely. We also compare against a BERT-Base model which we trained with binary sentence ordering (BERT-Base BSO) because this objective has been shown to be more useful than next sentence prediction (Lan et al., 2019). This BERT-Base BSO model was initialized with BERT weights and trained on the same data but only on contiguous spans of text where 50% of the time we switch the order. This model and CONPONO are initialized from the same weights and trained on the same number of segment pairs so that the two models can be compared fairly. In Section 2.1 we describe different encoding approaches for generating the sentence-level representations. We report results from versions of CONPONO using each of these encoding approaches, labeled isolated to represent separate encoding and uni-encoder to represent joint encoding of the anchor and target without a separate anchor encoding. The final line in Table 1 is the combined approach that we describe in Section 2. Modeling DiscoEval We reuse the code from DiscoEval and generally maintain the same process for collecting our results on the benchmark, such as freezing all weights and only training a logistic regression or one layer perceptron on top of the sentence encodings. Note that since we are only interested in the vector representations of the input, we drop the weight matrix Wk and only use the output of the encoder. We omit the details for 4863 Model SP BSO DC SSP PDTB-E PDTB-I RST-DT avg. BERT-Base 53.1 68.5 58.9 80.3 41.9 42.4 58.8 57.7 BERT-Large 53.8 69.3 59.6 80.4 44.3 43.6 59.1 58.6 RoBERTa-Base 38.7 58.7 58.4 79.7 39.4 40.6 44.1 51.4 BERT-Base BSO 53.7 72.0 71.9 80.0 42.7 40.5 63.8 60.6 CONPONO isolated 50.2 57.9 63.2 79.9 35.8 39.6 48.7 53.6 CONPONO uni-encoder 59.9 74.6 72.0 79.6 40.0 43.9 61.9 61.7 CONPONO (k=2) 60.7 76.8 72.9 80.4 42.9 44.9 63.1 63.0 CONPONO std. ±.3 ±.1 ±.3 ±.1 ±.7 ±.6 ±.2 Table 1: CONPONO improves the previous state-of-the-art on four DiscoEval tasks. The average accuracy across all tasks is also a new state-of-the-art, despite a small drop in accuracy for PDTB-E. BERT-Base and BERT-Large numbers are reported from Chen et al. (2019), while the rest were collected for this paper. We report standard deviations by running the evaluations 10 times with different seeds for the same CONPONO model weights. the encoding logic for each task since that is explained in detail in Chen et al. (2019). Here we only mention our deviations from the Chen et al. (2019) methodology. The most salient difference is that we only use the pooled representation from our model rather than the average from multiple layers of the model for the SP, BSO and DC tasks. For encoding individual tasks we prefer to encode pairs of sentences together. For SP we encode the first sentence concatenated with every other sentence instead of taking the point-wise difference and concatenate the 5 vectors. For BSO we also encode the two sentences together instead of separately. For DC we split the paragraph into pairs of sentences and encode those together. We concatenate the 3 output vectors. For RST instead of embedding each sentence and doing a mean of all the sentences in a subtree, we simply concatenate those sentences and encode them all together as a single text span. Any text segments longer than 512 tokens are truncated from the end. Results: Table 1 shows that our model outperforms the previous state-of-the-art accuracy on DiscoEval overall. Our model excels in particular on the sentence ordering and coherence tasks (SP, BSO, and DC). Note that our model parameter count is the same as BERT-Base but it outperforms BERT-Large, which has significantly more parameters and has used much more compute for pretraining. From the discussion in Section 2.2, BERT represents using the NSP objective and we train BERT-Base BSO to compare NSP, BSO and CONPONO directly. BERT-Base BSO scores tend to fall between those of BERT-Base and our model, implying that the sentence ordering objective is improving the models for this benchmark, but that binary sentence ordering is not sufficient to capture the added benefits of including more fine-grained ordering and negative examples. We observe that CONPONO outperforms both the isolated encoding and uni-encoding approaches. CONPONO isolated preforms significantly worse than both other approaches, suggesting that crossattention between the anchor and the target is critical to learning stronger discourse representations. CONPONO uni-encoder results are closer to our combined encoding approach but still fall short on every task. This empirical result suggests that the separate encoding of the anchor during pretraining is important despite the fact that theoretically CONPONO could trivially reduce to the uni-coder representation by ignoring ci. 3.2 RTE, COPA and ReCoRD Tasks: DiscoEval was specifically designed to evaluate model performance on discourse tasks but there are many other benchmarks that could also benefit from pretraining for improved discourse coherence. We evaluate our model on three such tasks, Recognizing Textual Entailment (RTE) (Giampiccolo et al., 2007; Bentivogli et al., 2009), Corpus of Plausible Alternatives (COPA) (Roemmele et al., 2011) and Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) (Zhang et al., 2018). We report accuracy on the validation set provided by each dataset. Each example in RTE is a pair of sentences. The model must classify whether or not the second sentence entails the first. Examples in COPA are composed of a single context sentence followed by two candidate sentences that are either a cause or effect of the context sentence. The model must select the 4864 Context Completions ReCoRD ... Despite its buzz, the odds are stacked against Google’s Chrome OS becoming a serious rival to Windows... Chrome OS must face the same challenges as Linux: compatibility and unfamiliarity. A big stumbling block for Google will be whether its system supports iTunes. Google will also be under pressure to ensure [Chrome OS / iTunes / Linux] works flawlessly with gadgets such as cameras, printers, smartphones and e-book readers. RTE Rabies virus infects the central nervous system, causing encephalopathy and ultimately death. Early symptoms of rabies in humans are nonspecific, consisting of fever, headache, and general malaise. Rabies is fatal in humans. COPA The women met for coffee. They wanted to catch up with each other. The cafe reopened in a new location. Table 2: These are examples from ReCoRD, RTE, and COPA that exhibit aspects of discourse coherence. For ReCoRD, candidate entities are in italics and replaced terms in the completion are underlined. True completions are bold. most “plausible” sentence of the two. Lastly, an example in ReCoRD is a paragraph from a news article, followed by several bullet points and with all the entities marked. The model is given a single sentence from later in the document with a single entity masked out and must select the entity from the context that fills the blank. Table 2 shows examples of each with correct choices in bold. Baselines: We compare our model against BERT-Base because this is the closest model in terms of parameter size and training data. However, since our model is initialized with BERT-Base weights, we also report results from BERT-Base BSO because it was trained on the same number of text examples as CONPONO. We also compare against BERT-Large to contrast to a much larger language model. We provide results from Albert (Lan et al., 2019) when available to provide a stateof-the-art baseline that may have used more data, compute and parameters. The purpose of these results is not to compare against the current stateof-the-art but rather to better understand the improvements that can be found from adding a discourse coherence objective to BERT-Base without significantly increasing the model size or training data. Results: We believe that the coherence and ordering aspects of these evaluation tasks are well fit to demonstrate the how our model can improve on strong baselines such as BERT-Base. Table 3 shows that our model achieves accuracies on RTE and COPA comparable to BERT-Large while having the same number of parameters as BERT-Base. Interestingly, we observe improvements over the baseline with BERT-Base BSO, showing that even Model RTE COPA BERT-Base 66.4 62.0 BERT-Base BSO 71.1 67.0 CONPONO 70.0 69.0 BERT-Large 70.4 69.0 ALBERT 86.6 Table 3: Our model improves accuracy over BERTBase for RTE and COPA benchmarks. Improvements are comparable to BERT-Large but still lag behind much larger models trained on more data, such as ALBERT. All scores are on the validation set. simple discourse-level objectives could lead to noticeable downstream effects. Though these improvements are modest compared to BERT-Large, they are meant to highlight that our model does not only improve on results for artificial sentence ordering tasks, but also on aspects of benchmarks used to generally evaluate pretrained language models and language understanding. 3.2.1 ReCoRD results and models Model Accuracy BERT-Base 61.2 CONPONO 63.2 BERT-Large 69.8 [EM] Table 4: CONPONO is more effective at classifying the most plausible sentence from the extended context than BERT-Base. We report the BERT-Large exact match score, where the model selects only the target entity from the context, for reference. All scores are on the validation set. 4865 The task for the ReCoRD dataset is to select the correct entity from those that appear in the context to fill in the blank in the target. Previous models for ReCoRD have used a similar structure to SQuAD (Rajpurkar et al., 2016) where the model outputs a vector for each token and the model learns the best start and end position of the answer span based on the softmax over all the tokens. We, instead, generate all possible target sentences by filling the blank with each marked entity and discriminatively choose the sentence most likely to be the true “plausible” sentence from the context. This modified task evaluates how our model compares to BERTBase choosing the most coherent sentence from a set of nearly identical sentences. In Table 4 we show that CONPONO does achieve a boost over BERT-Base but is still well below BERT-Large exact match score on the harder task of selecting the entities in context. The strong results from BERTLarge imply that having a better representation of the text with a large model is able to subsume any improvement from learning plausible contexts for this task. 3.3 Ablations There are three aspects of our modeling choices that warrant a deeper understanding of their importance to the model: • Window size: We ablate the 4 window sizes (ie. choices of k). k = 1 is effectively binary sentence ordering with negative samples. • Masked Language Model Objective: We remove the MLM objective allowing the model to optimize only the CONPONO objective without maintaining a good token level representation. • Model size: We train a smaller model that is also initialized with pretrained weights. To measure the effects of each of these design decisions, we report DiscoEval scores for each model as well as accuracy on the CONPONO classification task on a held-out set of examples. This is to show how well the model is optimized as well as how well it performs on downstream tasks. Table 5 shows the results on DiscoEval with our model and several key ablations. We observe that using a window size for our objective that is larger than 1 is key to seeing downstream improvements. We believe that this is due to the objective being harder for the model because there is more variation farther from the anchor. At the same time, increasing the window size beyond 2 seems to result in similar performance. This may be because larger distances from the anchor also lead to more ambiguity. We see this reflected in the held-out classification accuracy being lower for examples with larger distance labels in Figure 2. We also note that keeping the masked language model objective during pretraining also improves downstream performance. In Figure 2 we see that classification accuracy is consistently lower with the MLM objective compared to without. This is expected because during inference, many key terms may be masked out, making the task harder. However, keeping this objective during pretraining maintains a good token-level representation that is necessary for downstream tasks. Lastly, we try training a smaller version of our model, with only 2 hidden layers, and a 512 intermediate size. The smaller model is able to train much faster, allowing us to train on many more examples and new data. However, we are unable to achieve similar results despite training on 24 times more examples, and including CCNews (Liu et al., 2019), a larger and higher quality data source. 3.4 Qualitative Analysis To glean some insight into how CONPONO representations may differ from BERT-Base representations, we look at the occurrence of discourse markers in the BSO-Wikipedia task of DiscoEval. We choose this task because it is a simple binary classification task that has only 2 sentences as input and the domain is similar to the pre-training data. We look at the usage of discourse markers identified by Nie et al. (2017); but, when, if, before, because, while, though, after, so, although, then, also, still. 3 We extract examples from the test set on which CONPONO output the correct label and BERT-Base output the incorrect label and visa versa. For each set of examples, we measure the change in the occurrence of discourse markers relative to the training data counts. Since some markers are much more common than others, we take the weighted average of the change in appearance rate, where the weights are the training data counts of each marker. 3We omit and and as because they are very common in this corpus but often are not used as connectives between the two candidate sentences for the BSO task. 4866 Model SP BSO DC SSP PDTB-E PDTB-I RST-DT avg. k=4 59.84 76.05 73.62 80.65 42.28 44.25 63.00 62.81 k=3 60.47 76.68 72.74 80.30 43.40 44.28 62.56 62.92 k=2 60.67 76.75 72.85 80.38 42.87 44.87 63.13 63.07 k=1 47.56 66.03 72.62 80.15 42.79 43.55 62.31 59.29 - MLM 54.92 75.37 68.35 80.2 41.67 43.88 61.27 60.81 Small 45.41 61.70 67.71 75.58 35.26 36.18 46.58 52.63 Table 5: The ablation analysis shows the effects of different k values (ie. window sizes) in our objective, removing the MLM objective during pretraining and training with a small transformer encoder. Label Accuracy 0.00 0.25 0.50 0.75 1.00 -4 -3 -2 -1 1 2 3 4 Small Isolated Uni-encoder - MLM k=4 k=3 k=2 k=1 Per Position Accuracies on Unseen Examples Figure 2: We can evaluate the accuracy on the CONPONO objective for each label (ie. distance between anchor and target sentence) on a set of 5,000 examples held-out from training. We observe that higher accuracy does not necessarily correlate with better downstream performance on DiscoEval. We find that in the set of examples that CONPONO classified correctly, the rate of discourse makers was 15% higher than in the training corpus. This is in contrast to 11% higher among the examples that BERT classified correctly. The standard deviation for random samples of the same size was about 1%. This suggests that both BERT and CONPONO are relying heavily on discourse markers to solve the BSO-Wikipedia task. While it is expected for shallow discourse markers to be strong features for sentence ordering, we expect CONPONO to also incorporate deeper features, such as anaphora, due to its pretraining objective. One indication of CONPONO relying on alternative features than BERT-Base is that there was a 12% relative increase in discourse markers in the CONPONO set when counting markers only in the first sentence whereas an 8% relative increase in the BERT set when counting markers only in the second sentences. The difference in the location of the discourse markers in the two sets of examples suggests that CONPONO and BERT utilize those features differently and that CONPONO may be less likely to incorrectly classify examples that use discourse markers in the first sentence of a BSO example. Manually inspecting a sample of examples hints that there are often strong coreferences between the two input sentences that indicate the ordering. Table 6 shows two examples from the CONPONO correct set which is drawn from the BSO-Wikipedia test data. In both examples, the discourse marker appears in the first sentence but the second sentence contains anaphora referring to an antecedent in the first sentence. 4 Related Work Some of the largest improvements on benchmarks such as GLUE (Wang et al., 2018) have come from ELMO’s large scale bi-directional language modeling (Peters et al., 2018), BERT’s masked language models (Devlin et al., 2019), XLNET’s generalized autoregressive pretraining (Yang et al., 2019), RoBERTa’s robust training (Liu et al., 2019) and ALBERT’s parameter reduction techniques (Lan et al., 2019). As discussed in Section 2.2, most language model were limited to NSP or BSO for inter-sentence representation learning. We showed that by comparing to BERT, which uses NSP and BERT-Base BSO which we train with the BSO objective that our objective is able to improve the discourse-level representations by training on more fine-grained sentence ordering, non-contiguous 4867 In 1941 [1]Vaughn joined the United States National Guard for what had been planned as a one-year assignment , but when [2]World War II broke out , he was sent abroad until the war ended in 1945 . [1]He decided to make music a career when he was discharged from the army at the end of [2]the war , and attended Western Kentucky State College , now known as Western Kentucky University , majoring in music composition . Although it lasted only twenty-three years ( 1933–1956 ) and enrolled fewer than 1,200 students , Black Mountain College was one of the most fabled experimental institutions in art education and practice . It launched a remarkable number of the artists who spearheaded the avant-garde in the America of the 1960s . Table 6: Two examples from the DiscoEval BSO-Wikipedia test set on which CONPONO made the correct prediction but BERT-base did not. Bold terms are discourse markers, underlined terms are co-referents. In both examples, the discourse marker appears in the first sentence but the second sentence has anaphora referring to an antecedent in the first sentence. neighboring sentences and contrasting against random negatives. Early approaches to sentence representation, such as Skip-Thought Vectors (Kiros et al., 2015), mimicked word embedding methods in addition to left-to-right language modeling to use unlabeled data to learn sentence level representations. DisSent (Nie et al., 2019) focused more on collecting data that could be used to train a supervised classification model on pairs of sentences. These and other innovations in sentence representation lead to the creation of more evaluations for discourse and coherence representation (Chen et al., 2019; Xu et al., 2019). Like other unsupervised representation learning models, CONPONO is trained to generate a latent variable that encodes inter-sentence relationship and discourse coherence. Our objective is inspired by the Contrastive Predictive Coding (CPC) objective (van den Oord et al., 2018). CPC was originally designed to be a “universal unsupervised learning approach to extract useful representations from high-dimensional data” and was previously implemented on the token-level for text models. We utilize the k-distance predictions of CPC because it naturally captures discourse and sentence ordering properties when applied on the sentencelevel. Furthermore, by combining our objective with a transformer encoder, our model is able to benefit from cross-attention between the anchor and the target sentences, which we show outperforms encoding the anchor and target separately, as implemented in CPC. In Section 3.3 we show that the cross-attention is an important factor in learning a good representation for downstream tasks and effectively optimizing our inter-sentence objective. 5 Discussion In this paper we present a novel approach to encoding discourse and fine-grained sentence ordering in text with an inter-sentence objective. We achieve a new state-of-the-art on the DiscoEval benchmark and outperform BERT-Large with a model that has the same number of parameters as BERT-Base. We also observe that, on DiscoEval, our model benefits the most on ordering tasks rather than discourse relation classification tasks. In future work, we hope to better understand how a discourse model can also learn fine-grained relationship types between sentences from unlabeled data. Our ablation analysis shows that the key architectural aspects of our model are cross attention, an auxiliary MLM objective and a window size that is two or greater. Future work should explore the extent to which our model could further benefit from initializing with stronger models and what computational challenges may arise. Acknowledgments We wish to thank the Stanford NLP group for their feedback. We gratefully acknowledge support of the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF15-1-0462 References Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurovsky. 2001. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue. 4868 Mingda Chen, Zewei Chu, and Kevin Gimpel. 2019. Evaluation benchmarks and learning criteria for discourse-aware sentence representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 649– 662, Hong Kong, China. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9. Association for Computational Linguistics. Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294–3302. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Allen Nie, Erin Bennett, and Noah Goodman. 2019. DisSent: Learning sentence representations from explicit discourse relations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4497–4510, Florence, Italy. Association for Computational Linguistics. Allen Nie, Erin D Bennett, and Noah D Goodman. 2017. Dissent: Sentence representation learning from explicit discourse relations. arXiv preprint arXiv:1710.04334. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn discourse TreeBank 2.0. In LREC 2008. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Peng Xu, Hamidreza Saghir, Jin Sung Kang, Teng Long, Avishek Joey Bose, Yanshuai Cao, and Jackie Chi Kit Cheung. 2019. A cross-domain transferable neural coherence model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 678–687, Florence, Italy. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint 1810.12885. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19– 27. 4869 A Appendix We include some fine-grained DiscoEval results that were reported as averages, as well as implementation and reproduction details for our experiments. A.1 SP, BSO and DC breakdown Table 7 shows the scores for each model per each dataset domain for the SP, BSO and DC tasks in DiscoEval. A.2 CONPONO pretraining details CONPONO is pretrained on 1.6 million examples randomly sampled from Wikipedia and BooksCorpus. We use the same number of training examples for all the ablations and training BERT-Base BSO. On example consists of a single anchor and 32 candidate targets, 4 losses (1 for each of the 4 randomly chosen true targets (ie. k)). We use a 25% warm up rate and a learning rate of 5e-5. The model is initialized with BERT-Base weights. We add a square interaction weight matrix that is the same size as model output dimensions (ie. 756) that is referred to as Wk in Section 2. There is one such matrix for each k. The maximum sequence length of the input is 512, though do to some preprocessing constraints, the maximum input seen by the model is 493. Our CONPONO small model has a hidden size of 128, an intermediate size 512, and has 2 hidden layers. We train it on 38.4 million examples, including examples from CCNews. Samples are drawn from each source proportional to the size of the source, meaning that about 70% of training examples come from CCNews. Otherwise, we use all the same parameters as CONPONO. A.3 Parameter counts Table 8 shows the number of parameters in each model used. A.4 RTE, COPA and ReCoRD details RTE is trained for 3240 steps, with checkpoints every 750 steps and a learning rate of 8e-6. The warm-up proportion is 10% and the a maximum sequence length of 512 COPA is trained for 300 steps, with checkpoints every 50 steps and a learning rate of 1e-5. The warm-up proportion is 10% and the maximum sequence length of 512. ReCoRD is trained for 8 epochs over the training data with a learning rate of 2e-5, warm-up proportion of 10% and a maximum sequence length of 512. 4870 Model Parameters BERT-Base 110M RoBERTa-Base 110M CONPONO [All Variants] 110M BERT-Large 335M Table 8 SP BSO DC Model Wiki arxiv ROC Wiki arxiv ROC Wiki Ubuntu BERT-Large 50.7 47.3 63.4 70.4 66.8 70.8 65.1 54.2 RoBERTa-Base 38.35 33.73 44.00 60.19 55.16 60.66 62.80 53.89 BERT-Base BSO 49.23 50.92 60.80 74.67 68.56 72.22 88.80 56.41 CONPONO - MLM 50.95 51.90 61.92 77.98 71.45 76.68 86.70 50.00 CONPONO Small 44.90 41.23 50.10 65.03 58.89 61.19 78.10 57.32 CONPONO isolated 49.33 44.60 56.53 59.16 57.48 56.94 71.60 54.71 CONPONO uni-encoder 54.30 58.58 66.75 78.25 71.65 73.99 86.00 57.90 k=4 54.07 58.30 67.15 79.04 72.21 76.89 88.38 58.85 k=3 54.65 59.55 67.22 79.34 73.61 77.08 89.48 56.00 k=2 54.83 58.77 68.40 79.24 74.16 76.84 89.22 56.41 k=1 44.05 40.98 57.65 68.47 62.40 67.24 89.03 56.20 Table 7: SP, BSO and DC are composed of separate datasets. We report the average in the main paper but show the breakdown here.
2020
439
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 460–464 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 460 A Three-Parameter Rank-Frequency Relation in Natural Languages Chenchen Ding, Masao Utiyama, Eiichiro Sumita Advanced Translation Technology Laboratory, Advanced Speech Translation Research and Development Promotion Center, National Institute of Information and Communications Technology 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan {chenchen.ding, mutiyama, eiichiro.sumita}@nict.go.jp Abstract We present that, the rank-frequency relation in textual data follows f ∝r−α(r+γ)−β, where f is the token frequency and r is the rank by frequency, with (α, β, γ) as parameters. The formulation is derived based on the empirical observation that d2(x+y)/dx2 is a typical impulse function, where (x, y) = (log r, log f). The formulation is the power law when β = 0 and the Zipf–Mandelbrot law when α = 0. We illustrate that α is related to the analytic features of syntax and β + γ to those of morphology in natural languages from an investigation of multilingual corpora. 1 Introduction Zipf’s law (Zipf, 1935, 1949) is an empirical law to formulate the rank-frequency (r-f) relation in physical and social phenomena. Linguistically, Zipf’s law can be observed on the distribution of words in corpora of natural languages, where the frequency (f) of words is inversely proportional to its rank (r) by frequency; that is, f ∝r−1. Zipf’s law is a special form of a general power law, that is, f ∝r−α, with α = 1. The Zipf’s/power law is usually examined under a log-log plot of rank and frequency, where the data points lie on a straight line. The simple proportionality of the Zipf’s/power law can be observed on randomly generated textual data (Li, 1992) and it only roughly depicts the r-f relation in real textual data. A two-parameter generalization of the Zipf’s/power law is the Zipf-Mandelbrot law, where f ∝(r + β)−α (Mandelbrot, 1965). Li et al. (2010) considered the reversed rank of rmax+1−r, where rmax is the maximum of ranking index, and proposed a two-parameter formulation of f ∝r−α(rmax + 1 −r)β. As a straightforward observation, the coefficients of proportionality should be distinguished for common and rear words (Powers, 1998; Li 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 Figure 1: Rank-frequency plots on English words (left) and Chinese characters (right). The x- and y-axes are log10 r and log10 f, respectively. The gray curves are the proposed formulation under logarithm: y = C − αx −β log10(10x + 10γ), where C is a constant. The dashed lines are the asymptotes of C −(αx + βγ) and C−(α+β)x. (α, β, γ) is (0.93, 2.04, 3.82) for English words and (0.59, 32.31, 4.42) for Chinese characters. et al., 2010). Therefore, an extension of the original Zipf’s/power law requires at least two parameters. In this study, a three-parameter formulation of f ∝r−α(r + γ)−β is derived based on the observation and analysis of multilingual corpora. It is a natural generalization of the power law and the Zipf-Mandelbrot law. The third parameter provides a depiction of the rigidness of different coefficients of proportionality. The proposed formulation can also fit non-Zipfian phenomena in natural languages, such as the r-f relation on Chinese characters. Figure 1 shows examples on English words from Europarl (Koehn, 2005) 1 and Chinese characters of Academia Sinica from the data of Sproat and Emerson (2003).2 2 Proposed and Related Formulation Under a logarithmic form, the Zipf’s law states that x + y = C, where (x, y) = (log r, log f), and C is roughly a constant. We further investigate the 1http://www.statmt.org/europarl/v8/ europarl.tgz 2http://sighan.cs.uchicago.edu/ bakeoff2005/data/icwb2-data.zip 461 0 1 2 3 4 5 English Word Chinese Character Artificial Figure 2: Smoothed second-order differences on the rank-frequency relation. The x-axis is log10 r. property of C = g(x). The first and second-order differences on g(x) are calculated as g′ i = gi −gi−1 xi −xi−1 , g′′ i = g′ i −g′ i−1 xi −xi−1 . (1) Here (xi, yi) is the data point of the i-th frequent token, gi = xi+yi for i > 1, and g′ 1 = g′′ 1 = 0.3 Because the differences are intrinsically nonsmooth, B´ezier curves are applied for smoothing in the investigation. Figure 2 shows examples of the smoothed g′′ on English words and Chinese characters from the same dataset used for Fig. 1. An artificial Zipfian dataset generated in the manner of Li (1992)4 is also used for comparison. It can be observed that the g′′ on English words and Chinese characters has an impulse, but not that on the artificial data. Generally, the impulse becomes more obvious if the data are more non-Zipfian. If we consider g′′ as a general impulse function, then g′ is a general sigmoid function and g can be modeled by a general softplus function in the form of b log(exp(x −c) + 1). To replace x by a generalized linear form as ax + d, y = −d −ax −b log(exp(x −c) + 1) (2) and to substitute (x, y) by (log r, log f), we obtain, f = exp(bc −d) ra(r + exp(c))b ∝r−α(r + γ)−β, (3) where (α, β, γ) = (a, b, exp(c)). exp(bc −d) is a constant unrelated to r. The obtained proportional form is a natural twocomponent extension of the power law and the 3To avoid too many meaningless zeros in the differences, only the data point with the minimum x is used for data points with the same y, i.e., tokens with the same frequency. 4Two letters a and b are used. The frequency of a, b, and space is 3 : 1 : 1, and 107 characters are randomly generated. 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 Figure 3: English word (left) and Chinese character (right) data in Figure 1 fitted by the gray curve of y = C −αx + β log10(rmax + 1 −10x). The dashed lines are of C −(αx + β log10(rmax + 1)) and C −β log10(rmax + 1 −10x) for two ends. (α, β) is (1.15, 9.16) for English words and (0.62, 157.13) for Chinese characters. Zipf-Mandelbrot law. Because the softplus function is a differentiable form of a rigid ramp function, Eq. (3) can also be considered as a smoothed piecewise broken power law. As shown in Fig. 1, α and (α + β) depict the proportional coefficients at the two ends, and the proportional coefficients are switched smoothly around x = γ. f ∝r−α(rmax + 1 −r)β proposed in Li et al. (2010) is also a two-component formulation. One more parameter (i.e., γ) in Eq. (3) is used to identify the location of the impulse observed in g′′. Under Li’s formulation, we obtain g = y + αx = β log(rmax + 1 −exp(r)) and g′′ = −C1 exp(x)(C2−exp(x))−2, where C1 and C2 are constants. g′′ is a monotonically decreasing function with x = log(C2) as the asymptote for x < log(C2). Therefore, Li’s formulation always has a steep tail and lacks the capacity to depict the switching of two stable proportional coefficients. Figure 3 shows examples using Li’s formulation to fit data in Fig. 1. It can be observed that the non-Zipfian Chinese characters are fitted well, but not for the tail part in more Zipfian English words. This can be explained from the shape of g′′ in Fig. 2. It is reasonable to model the g′′ of Chinese characters using a monotonically decreasing function because the γ in Eq. (3) is quite large (around rmax). However, it is not proper for English words, where a proper γ is required. Based on the analysis, it can be concluded that the formulation f ∝r−α(r + γ)−β is a generalized form that covers the Zipf’s/power law, ZipfMandelbrot law, piecewise broken power law, and Li’s two-parameter formulation. In the next section, we show the linguistic interpretation of the parameter (α, β, γ). 462 α β γ γ rmax bg 0.92±.00 2.05±.06 4.25±.02 0.85 cs 0.86±.00 1.20±.01 3.89±.01 0.74 da 0.99±.00 1.10±.01 3.85±.01 0.69 de 0.99±.00 1.08±.01 3.94±.01 0.70 el 0.98±.00 1.96±.03 4.43±.01 0.82 en 0.93±.00 2.04±.01 3.82±.00 0.75 es 0.94±.00 1.38±.01 3.82±.01 0.73 et 0.90±.00 1.06±.01 4.13±.01 0.75 fi 0.87±.00 0.89±.01 4.07±.01 0.70 fr 1.01±.00 2.05±.02 4.14±.01 0.80 hu 0.92±.00 0.96±.02 4.16±.02 0.76 it 0.94±.00 1.47±.01 3.84±.00 0.73 lt 0.84±.00 1.04±.01 3.77±.01 0.70 lv 0.87±.00 1.69±.02 4.22±.01 0.81 nl 0.98±.00 1.18±.01 3.73±.01 0.68 pl 0.87±.00 1.18±.01 3.97±.01 0.76 pt 0.93±.00 1.33±.01 3.77±.01 0.72 ro 0.94±.00 5.24±.32 4.78±.03 0.97 sk 0.89±.00 1.38±.01 4.14±.01 0.79 sl 0.91±.00 1.77±.04 4.31±.01 0.84 sv 0.99±.00 1.05±.01 3.86±.01 0.70 Table 1: Fitted parameters on Europarl data. 3 Experiment and Discussion We used the proposed formulation to fit data of various European languages and typical Asian languages. The Europarl corpus (Koehn, 2005) and data from the Second International Chinese Word Segmentation Bakeoff (ICWB2) (Sproat and Emerson, 2003) were mentioned in Section 1. We also used English-Japanese patent data from the 7th NTCIR Workshop (Fujii et al., 2008). The Europarl data and English data from NTCIR were lower-cased and tokenized using the toolkit provided by MOSES5 (Koehn et al., 2007). Fitting was performed under a logarithmic scale using the fit function6 in gnuplot.7 Specifically, relation-frequency data were used to fit (α, β, γ) and C in y = C−αx−β log10(10x+10γ). For the initialization, (α, β, γ) = (1, 1, rmax 2 ) and C = 3γ were applied. Table 1 lists the fitting results for all the languages8 in the Europarl corpus. The (α, β, γ) with 5http://www.statmt.org/moses/ 6An implementation of the nonlinear least-squares Marquardt-Levenberg algorithm was used. 7http://www.gnuplot.info/ 8Bulgarian (bg), Czech (cs), Danish (da), German (de), bg cs da de el en es et fi fr hu it lt lv nl pl pt sk sl sv 0.65 0.70 0.75 0.80 0.85 0.80 1.20 1.60 2.00 γnorm β Romance Slavic Uralic Germanic bg cs da de el en es et fi fr hu it lt lv nl pl pt sk sl sv 1.50 2.00 2.50 3.00 0.80 0.85 0.90 0.95 1.00 1.05 β + γnorm α Romance Slavic Uralic Germanic Figure 4: Distribution of languages in Europarl. the asymptotic standard error (±) are listed. Because γ may depend on the vocabulary size, normalized γnorm = γ rmax is also listed. It can be observed that all the language data were fitted well with an α of around 1.0, which is in accordance with the original Zipf’s law. β and γnorm for each language are plotted on the left of Fig. 4.9 On the β-γnorm plane, we can observe the rough tendency that β and γnorm are linear, in addition to a separation for different language branches. Further principal component analysis on (α, β, γnorm) suggests that α and β + γnorm can be generally considered as two dominant components.10 The plot on the right of Fig. 4 shows that the language branches can be separated roughly by lines parallel to the axes of α and β + γnorm. This indicates the linguistic explainability of the two axes. From the nature of these languages, we consider that α can be explained as an axis of analysissynthesis on syntax and β + γnorm as that on morphology. A large α suggests a couple of extremely frequent words in the corpus. As typical examples, languages with a relatively large α, that is, Romance and Germanic, generally contain abundant prepositions, particles, and determiners to mark syntactic roles, whereas those with a smaller α, that is, Slavic and Uralic, tend to use complex declension and conjugation within words to afford syntactic information. Interesting evidence is that bg, as a very analytic Slavic language, has a larger α than other Slavic languages. In another dimension, a large β + γnorm suggests a dramatic decrease in the frequency of rare words. Hence, lanGreek (el), English (en), Spanish (es), Estonian (et), Finnish (fi), French (fr), Hungarian (hu), Italian (it), Lithuanian (lt), Latvian (lv), Dutch (nl), Polish (pl), Portuguese (pt), Romanian (ro), Slovak (sk), Slovene (sl), and Swedish (sv). 9The non-typical Germanic en, Baltic lt and lv, and Hellenic el are in gray. ro with a large β is excluded. 10First principal component: −0.1α −0.7β −0.7γnorm, and second principal component: 1.0α + 0.1β −0.3γnorm. 463 α β γ γ rmax a.w 0.92±.00 0.73±.01 3.73±.02 0.72 c.w 0.84±.00 1.09±.04 3.84±.03 0.79 m.w 0.80±.00 1.22±.04 3.77±.03 0.81 p.w 0.81±.00 1.32±.06 3.76±.04 0.79 a.c 0.59±.00 32.31±2.04 4.42±.03 1.17 c.c 0.49±.00 31.30±2.73 4.32±.04 1.17 m.c 0.50±.00 15.51±0.52 3.95±.02 1.08 p.c 0.50±.00 21.02±1.18 4.10±.03 1.12 Table 2: Fitted parameters on ICWB2 data. guages with a small β + γnorm, that is, Germanic and Uralic, have a more gradual decrease in rare words, which are instances of various phenomena of derivation and compounding from complex morphology. By contrast, languages with a large β +γnorm, such as en and fr, tend to use phrases composed of multiple common words to express complex concepts, so that the drop in frequency of rare words is relatively dramatic. As β + γnorm is sensitive to the portion of rare words, this dimension may be easily affected by the property of specific data. An example is ro, for which a much larger β than other languages was fitted. Table 2 lists the fitting results on ICWB2 Chinese data. a.*, c.*, m.*, and p.* denote Academia Sinica, City University of Hong Kong, Microsoft Research, and Peking University data, respectively. *.w and *.c denote manually segmented words and characters, respectively. For the results on words, a trade-off on α and β + γnorm can be observed. Based on the previous analysis, we can consider that a.w has more segmentations on function words. An evidence is the segmentation of the expression shibushi (whether or not), which is composed of three characters shi (to be) bu (not), and shi (to be). The expression is segmented into shi / bu / shi in most cases in a.w, but always kept together in m.w. Regarding characters, we have small α and huge β + γnorm. Note that both common functional words and rare specific concepts in Chinese are commonly composed of multiple characters. Therefore, the contrast between common and rare characters is not so obvious, which leads to small α (no overwhelmingly functional words in syntax) and huge β + γnorm (extremely analytic in morphology). Figure 5 provides further evidence. The data size of typical languages in Europarl is graduen.0 en.2 en.4 en.8 de.0 de.2 de.4 de.8 es.0 es.2 es.4 es.8 fi.0 fi.2 fi.4 fi.8 cs.0 cs.2 cs.4 cs.8 1.50 2.00 2.50 3.00 3.50 0.80 0.90 1.00 1.10 β + γnorm α en ja.kytea ja.mecab ja.juman 1.50 2.00 2.50 0.80 0.85 0.90 0.95 β + γnorm α Figure 5: Effects on α and β + γnorm. ally halved and the change of the fitted parameters is shown in the plot on the left of Fig. 5. *.0 denotes the original data and *.n denotes the data of one n-th size. α does not change substantially for smaller data because of the stable syntax features and functional words. However, β + γnorm becomes larger, which suggests that there are fewer morphological varieties because of the smaller data size. The plot on the right of Fig. 5 shows how different word segmentations in Japanese affect the parameters. There are three common Japanese morphological analysis tools: kytea, mecab, and juman. kytea provides the most fragmentary segmentation and juman tends to attach suffixes to stems. For example, the three tools segment wakarimashita (understood, in polite form) as follows: waka / ri / ma / shi / ta (5 tokens) by kytea, wakari / mashi / ta (3 tokens) by mecab, and wakari / mashita (2 tokens) by juman. As the most fragmentary segmentation by kytea contains more functional suffixes as words, it has the largest α, and by contrast, the segmentation by juman has the smallest α. Furthermore, mecab has a smaller β+γnorm because it may keep proper nouns unsegmented, which can be considered as introducing more compounded words. For example, t¯oky¯odaigaku (The University of Tokyo) is kept as one word by mecab, but segmented as t¯oky¯o / daigaku (Tokyo / university) by the other two tools. 4 Conclusion and Future Work We have shown that f ∝r−α(r + γ)−β for the rank-frequency relation in natural languages. This is an explainable extension of several related formulations, with α related to the analytic features of syntax and β + γ to that of morphology. A more general form, f ∝Q k(r + γk)−βk, can be considered for further investigation. The k terms can depict k different proportional coefficients. 464 References Atsushi Fujii, Masao Utiyama, Mikio Yamamoto, and Takehito Utsuro. 2008. Overview of the patent translation task at the NTCIR-7 workshop. In Proc. of NTCIR, pages 389–400. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. of MT summit, volume 5, pages 79–86. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. ACL (Demo and Poster), pages 177–180. Wentian Li. 1992. Random texts exhibit Zipf’s-lawlike word frequency distribution. IEEE Transactions on information theory, 38(6):1842–1845. Wentian Li, Pedro Miramontes, and Germinal Cocho. 2010. Fitting ranked linguistic data with twoparameter functions. Entropy, 12(7):1743–1764. Benoˆıt Mandelbrot. 1965. Information theory and psycholinguistics. David M. W. Powers. 1998. Applications and explanations of Zipf’s law. In Proc. of NeMLaP3/CoNLL98, pages 151–160. Richard Sproat and Thomas Emerson. 2003. The first international Chinese word segmentation bakeoff. In Proc. of the SIGHAN workshop on Chinese language processing, pages 133–143. George K. Zipf. 1935. The psycho-biology of language. George K. Zipf. 1949. Human behaviour and the principle of least-effort.
2020
44
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4871–4884 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4871 A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks Angela S. Lin ♣∗ Sudha Rao♦ Asli Celikyilmaz♦ Elnaz Nouri♦ Chris Brockett♦ Debadeepta Dey♦ Bill Dolan♦ ♣Salesforce Research, Palo Alto, CA, USA ♦Microsoft Research, Redmond, WA, USA [email protected] {sudhra, aslicel, elnouri}@microsoft.com {chrisbkt, dedey, billdol}@microsoft.com Abstract Many high-level procedural tasks can be decomposed into sequences of instructions that vary in their order and choice of tools. In the cooking domain, the web offers many partially-overlapping text and video recipes (i.e. procedures) that describe how to make the same dish (i.e. high-level task). Aligning instructions for the same dish across different sources can yield descriptive visual explanations that are far richer semantically than conventional textual instructions, providing commonsense insight into how real-world procedures are structured. Learning to align these different instruction sets is challenging because: a) different recipes vary in their order of instructions and use of ingredients; and b) video instructions can be noisy and tend to contain far more information than text instructions. To address these challenges, we first use an unsupervised alignment algorithm that learns pairwise alignments between instructions of different recipes for the same dish. We then use a graph algorithm to derive a joint alignment between multiple text and multiple video recipes for the same dish. We release the MICROSOFT RESEARCH MULTIMODAL ALIGNED RECIPE CORPUS1 containing ∼150K pairwise alignments between recipes across 4,262 dishes with rich commonsense information. 1 Introduction Although machine learning has seen tremendous recent success in challenging game environments such as Go (Schrittwieser et al., 2019), DOTA (OpenAI, 2019), and StarCraft (DeepMind, 2019), we have not seen similar progress toward algorithms that might one day help humans perform everyday tasks like assembling furniture, applying makeup, ∗Work done when the author was an intern at Microsoft. 1https://github.com/microsoft/ multimodal-aligned-recipe-corpus 7. Add 12 ounces of thawed peas and bean sprouts. 3. Add onion, garlic, peas and carrots. 4. Transfer shrimp to the hot skillet and cook them one minute per side. 4. Stir fry until tender. 1. Hi everyone. Today we’re making shrimp fried rice, a family favorite. 2. In a small bowl beat together 4 eggs. 2. Heat cooking fat in a large skillet on medium heat. 3. Place a large nonstick pan or wok over medium high heat and when a bead of water sizzles and evaporates, add 2 tablespoons of sesame oil. 1. In a pot add 1 cup of rice and 2 cups of water cook for 15 min. 5. Crack an egg and scramble it in the same pan and mix it throughout vegetables. 6. Add rice and shrimp stir well and remove from heat and add soy sauce. 5. In the same pan cook the beaten eggs breaking them up with your spatula and cooking just until they are no longer running. 6. Now add 5 cups of cold leftover rice. 7. Add the chopped green onion before serving. Figure 1: Text recipe (left) and transcript of video recipe (right) for shrimp fried rice. Aligned instructions are highlighted in the same color. Ingredients that can be substituted are encircled in the same color. repairing an electrical problem, or cooking a particular dish. In part this is because the relevant large-scale multimodal (language, video, audio) datasets are difficult to acquire, even with extensive crowdsourcing (Salvador et al., 2017; Sanabria et al., 2018). Unimodal data, though, is abundant on the web (e.g. instructional videos or textual instructions of tasks). Using language as the link between these modalities, we present an approach for learning large-scale alignment between multimodal procedural data. We hope our work, and the resulting released dataset, will help spur research on real-world procedural tasks. Recipes in the cooking domain provide procedural instruction sets that are captured – in large volume – both in video and text-only forms. Instruction sets in these two modalities overlap sufficiently to allow for an alignment that reveals interestingly different information in the linguistic and visual realms. In Figure 1, for instance, the text recipe (left) and the transcribed video recipe (right) for shrimp fried rice vary in word usage, order of instructions and use of ingredients. Know4872 5. Add carrots, onion, peas and garlic and season with a pinch of salt and pepper. 8. Immediately add in the rice, green onions, and soy sauce and stir until combined. 2. Add whisked eggs, and cook until scrambled, stirring occasionally. 1. Heat 1/2 tablespoon of better in a large sauté pan over medium-high heat until melted. 5. Add 4 chopped green onions and 2 minced garlic cloves & continue to stir-fry for a min. 4. To the hot pan of oil, add 1/2 cup of chopped carrots and stir-fry for 2 to 3 minutes. 7. Add 3 cups of well-chilled, previously cooked, long-grain brown rice and stirfry for several minutes. 6. Pour in the beaten eggs and scramble for 30 to 45 seconds 2. While the oil is heating, lightly beat 2 large eggs in a small bowl. 1. In a large skillet or wok, heat 3 tablespoons of olive oil over mediumhigh heat. 5. Add in rice 4. scramble 2 eggs in same pans 6. Add in rest of sesame oil and soy sauce 2. Add in veggies, ham, onion, and garlic 1. Put 1 tbs of sesame oil in a wok and heat on medium heat 10. Then add in the eggs and stir to combine. 11. Remove from heat and stir in the sesame oil until combined. 9. Season the rice with the soy sauce, salt, and pepper and continue heating until the rice is hot. Video Recipe 1 Text Recipe 2 Text Recipe 3 Video Recipe 2 Text Recipe 1 Figure 2: Dish level alignment between three text recipes and two video recipes for fried rice. Same colored text boxes (in text recipes) and image borders (in video recipes) indicate instructions that are aligned to each other. ing that the highlighted instructions correspond to the same step is useful in understanding potential ingredient substitutions, how the same step can be linguistically described and physically realized in different ways, and how instruction order can be varied without affecting the outcome. Motivated by this idea that aligned procedural data can be a powerful source of practical commonsense knowledge, we describe our approach for constructing the MICROSOFT RESEARCH MULTIMODAL ALIGNED RECIPE CORPUS. We first extract a large number of text and video recipes from the web. Our goal is to find joint alignments between multiple text recipes and multiple video recipes for the same dish (see Figure 2). The task is challenging, as different recipes vary in their order of instructions and use of ingredients. Moreover, video instructions can be noisy, and text and video instructions include different levels of specificity in their descriptions. Most previous alignment approaches (Munteanu and Marcu, 2005) deal with pairwise alignments. Since our goal is to align multiple instruction sets, we introduce a novel twostage unsupervised algorithm. In the first stage, we learn pairwise alignments between two text recipes, two video recipes, and between a text and a video recipe using an unsupervised alignment algorithm (§3.1). In the second stage, we use the pairwise alignments between all recipes within a dish to construct a graph for each dish and find a maximum spanning tree of this graph to derive joint alignments across multiple recipes (§3.2). We train our unsupervised algorithm on 4,262 dishes consisting of multiple text and video recipes per dish. We release the resulting pairwise and joint alignments between multiple recipes within a dish for all 4,262 dishes, along with commonsense information such as textual and visual paraphrases, and single-step to multi-step breakdown (§5). We evaluate our pairwise alignment algorithm on two datasets: 1,625 text-video recipe pairs across 90 dishes from the YouCook2 dataset (Zhou et al., 2018a), and a small set of 200 human-aligned texttext recipe pairs across 5 dishes from Common Crawl. We compare our algorithm to several textual similarity baselines and perform ablations over our trained model (§4). Finally, we discuss how this data release will help with research at the intersection of language, vision, and robotics (§6). 2 Recipe Data Collection We describe our approach for collecting large-scale text and video recipes; and constructing recipe pairs for training our unsupervised alignment algorithm. 2.1 Common Crawl Text Recipes We extract text recipes from Common Crawl,2 one of the largest web sources of text. We heuristically filter the extracted recipes3 to obtain a total of 48,852 recipes across 4,262 dishes. The number 2https://commoncrawl.org/ 3Details in supplementary. 4873 Figure 3: An example transcript of a video recipe with sentences marked as “chat” (non-instructional) or “content” (instructional). of recipes per dish ranges from 3 to 100 (with an average of 6.54 and standard deviation of 7.22). The average recipe length is 8 instructions. 2.2 YouTube Video Recipes For each dish in the text recipes, we use the dish name with ‘recipe’ appended, e.g. ‘chocolate chip cookie recipe’, as a query on YouTube and extract the top N videos where N is proportional to the number of text recipes for that dish4 to obtain a total of 77,550 video recipes. We transcribe these videos using the Microsoft Speech-to-Text Cognitive service.5 Video recipes, unlike text recipes, contain noninstructional (“chat”) information. For instance, the presenter may give an introduction either of themselves or of the dish at the beginning of the video before diving into the steps of the recipe. Figure 3 contains an example transcript with “chat” and “content” information marked. We hypothesize that it is useful to remove such chat information from the transcripts before aligning them to text recipes. We build a supervised chat/content classifier using the YouCook2 dataset (Zhou et al., 2018a), an existing instructional cooking video dataset where parts of video that correspond to instructions are annotated by humans. We assume that these parts correspond to content whereas the rest of the video corresponds to chat.6 We preprocess the transcriptions of all 77,550 videos using this chat/content classifier7 to remove all sentences classified as chat. 4Details in supplementary. 5https://azure.microsoft.com/ en-us/services/cognitive-services/ speech-to-text/ 6Details in supplementary. 7Classifier achieves 85% F1-score on a held out test set. Train Val Test No. of dishes 4,065 94 103 Text-Text Pairs 46,054 5,822 11,652 Text-Video Pairs 56,291 3,800 5,341 Video-Video Pairs 19,200 274 514 Table 1: Statistics of our recipe pairs data (2.3) 2.3 Recipe Pairs for Training Given N text recipes and M video recipes for a dish, we pair each text recipe with every other text recipe to get O(N2) text-text recipe pairs. Similarly, we pair each text recipe with every video recipe to get O(N ∗M) text-video recipe pairs, and pair each video recipe with every other video recipe to get O(M2) video recipe pairs. On closer inspection, we find that some of these pairs describe recipes that are very different from one other, making a reasonable alignment almost impossible. For example, one black bean soup recipe might require the use of a slow cooker, while another describes using a stove. We therefore prune these recipe pairs based on the match of ingredients and length8 to finally yield a set of 63,528 text-text recipe pairs, 65,432 text-video recipe pairs and 19,988 videovideo recipe pairs. We split this into training, validation and test split at the dish level. Table 1 shows the number of dishes and pairs in each split. 3 Recipe Alignment Algorithm We first describe our unsupervised pairwise alignment model trained to learn alignments between text-text, text-video, and video-video recipes pairs. We then describe our graph algorithm, which derives joint alignments between multiple text and video recipes given the pairwise alignments. 3.1 Pairwise Alignments between Recipes Our alignment algorithm is based on prior work (Naim et al., 2014) that learns to align a sequence of natural language instructions to segments of video recording of the same wet lab protocol. They first identify the nouns in the text sentences and the blobs (i.e. objects) in video segments. Given the blobs from M video segments F = [f(1), ..., f(M)] and the nouns from N sentences E = [e(1), ..., e(N)], the task is to learn alignments between video segments and text sentences. They propose a hierarchical generative model which first uses a Hidden Markov Model 8Details in supplementary. 4874 Add onion, garlic, peas, and carrots. Saute for about 5 minutes or until the onion and carrots are soft. I am adding carrots with Green Bell Pepper. You can use peas or whatever else you want to add in there. Add 4 chopped green onions and 2 minced garlic cloves & continue to stir-fry for another minute. Heat the oils in the skillet over medium heat and saute the onion, celery, carrots, and bell pepper until softened. Figure 4: A maximum span tree for fried rice dish with text instructions and transcript segments as nodes, alignments as edges, and alignment probabilities as edge weights. Nodes representing text instructions are labeled “T”. Nodes representing transcript segments are labeled “V”. Each color indicates a different recipe. The bounding box shows a magnified section of the tree with edge weights and the instruction/transcript associated with each node. (HMM) (Rabiner, 1989; Vogel et al., 1996) to generate each video segment f(m) from one of the text sentences e(n). They then use IBM1 model (Brown et al., 1993) emission probabilities to generate the blobs {f(m) 1 , ..., f(m) J } in f(m) from the nouns {e(n) 1 , ..., e(n) I } in e(n) as follows: P(f(m)|e(n)) = ϵ (I)J J Y j=1 J X i=1 p(f(m) j |e(n) i ) (1) The hidden state in the HMM model corresponds to the alignment between video segment and text sentence, and the state transition probabilities correspond to the jump between adjacent alignments. For computational tractability, a video segment can be aligned to only one sentence (multiple sentences can align to the same video segment) We use this algorithm to learn pairwise alignments between text-text, text-video and videovideo recipes. Given two recipes (source and target) of the same dish, we define our alignment task as mapping each text instruction (or video transcript sentence) in the source recipe to one or more text instructions (or video transcript sentences) in the target recipe. We make two modifications to the alignment algorithm described above: First, our recipe pairs, unlike the wet lab protocol data, does not follow the same temporal sequence. The alignment algorithm must thus learn to jump within a longer range. We set the window of jump probabilities at [−2, 2].9 Second, we use transcriptions to learn alignments rather than the objects detected in videos. We hypothesize that the richness of language used in instructional videos may facilitate better alignment with transcripts (as others have observed (Malmaud et al., 2015; Sener et al., 2015)). We use all words (except stop words) in video transcript sentences and all words in text instructions while learning the IBM1 word level probabilities. An instruction in one recipe can be aligned to multiple instructions in the other recipe. 3.2 Joint Alignment among Multiple Recipes We use the pairwise alignments to derive a joint alignment at the dish level between multiple text and video recipes. For each dish, we construct a graph where each node represents an instruction from a text recipe or a transcript sentence from a video recipe. We use the pairwise alignments to draw edges between nodes, with alignment probabilities as the edge weights. We include only those edges that have alignment probability greater than 0.5. The pairwise alignments are directed since they go from the source recipe to the target recipe. 9We find that increasing the window beyond 5 decreases performance. 4875 We first convert the directed graph into an undirected graph by averaging the edge weights between two nodes and converting directed edges into undirected edges. Note that the resultant graph can have multiple connected components as some recipe pairs may not have any instructions aligned with probability greater than the threshold of 0.5 Our goal is to find a set of jointly-alignable instructions across different recipes. We therefore convert the graph (with cycles) into a forest by running the maximum spanning tree algorithm on the graph. Figure 4 shows an example tree derived for one of the dishes. A path in this tree, that has at most one node from each recipe, constitutes a set of jointly-alignable instructions. For example, in the magnified section of the tree in Figure 4, all unique colored nodes in the path from the yellow node to the green node constitute a set of jointly-alignable instructions. 4 Experimental Results We describe how we evaluate our pairwise alignment algorithm (from §3.1). We answer the following research questions using our experimentation: 1. How does our alignment model perform when evaluated on human-aligned recipe pairs? 2. Does our unsupervised alignment model outperform simpler non-learning baselines? 3. How does performance differ when we use only nouns or nouns and verbs instead of all words to learn alignments? 4.1 Human Aligned Evaluation Set We evaluate our pairwise alignment algorithm on the following two human annotated datasets: YouCook2 text-video recipe pairs The YouCook2 dataset (Zhou et al., 2018a) consists of 1,625 cooking videos paired with human-written descriptions for each video segment. These span 90 different dishes. We transcribe all videos using the Microsoft Speech-to-Text Cognitive service10 and separate it into sentences using a sentence tokenizer. Given a sequence of human-written descriptions and a sequence of transcript sentences, the alignment task is to align each transcript sentence to one of the human-written descriptions. We train our pairwise alignment model on the train split of our text-video recipe pairs (from 10https://azure.microsoft.com/ en-us/services/cognitive-services/ speech-to-text/ §2.3) and evaluate on the YouCook2 dataset. An important difference between the text-video pairs in YouCook2 and in our data is that in YouCook2, the text instructions and the video segments are temporally aligned since the text instructions were specifically written for the videos. In our data, however, the text and the video recipes can differ in order. CommonCrawl text-text recipe pairs We randomly choose 200 text-text recipes pairs (spanning 5 dishes) from the test split of our data (§2.3) and collect alignment annotations for them using six human experts. We show annotators a numbered list of the instructions for the target recipe (along with its title and ingredients). We display instructions for the source recipe with input boxes besides them and ask annotators to write in the number(s) (i.e labels) of one or more target instruction(s) with which it most closely aligns. Each recipe pair is annotated by three annotators. For 65% of the instructions, two or more annotators agree on a label. For only 42% of the instructions do all three annotators agree, suggesting that the difficulty level of this annotation task is high. We train our pairwise alignment model on the train split of our text-text recipe pairs ( §2.3) and evaluate on the 200 humanaligned pairs. 4.2 Baselines Baselines described below align each instruction 11 in the source recipe to one or more instructions in the target recipe. Random We align each instruction in the source recipe to a random instruction in the target recipe. Uniform alignment Given N instructions in the target recipe, we divide the instructions in the source recipe into N equal chunks and align each instruction in the ith chunk of the source recipe to the ith instruction in the target recipe. For instance, given a source recipe [S1, S2, S3, S4] and a target recipe [T1, T2], uniform alignment would align S1 and S2 to T1 and S3 and S4 to T2. More generally, we align the ith instruction in the source recipe to the [( N M i)th −( N M (i + 1))th) instruction in the target recipe. BM25 retrieval We use BM25 (Robertson et al., 2009) as our information retrieval baseline. Given 11We use the term “instruction” to mean both text instruction and transcript sentence. 4876 Methods Precision Recall F1 Random 18.53 14.47 14.49 Uniform alignment 63.44 50.81 53.10 BM25 retrieval 48.86 39.85 38.91 Textual Similarity Exact word match 46.75 40.70 40.06 TF-IDF 46.82 39.23 38.55 GloVe 46.13 38.74 37.14 BERT 48.83 41.48 40.89 RoBERTa 50.21 42.43 42.28 HMM+IBM1 Nouns 78.63 63.83 65.29 Nouns+Verbs 80.56 67.90 69.00 All words 81.39 69.27 70.30 Table 2: Results for text-video recipe alignments on YouCook2 dataset. a source and a target recipe pair, we construct a corpus using all instructions in the target recipe. We then use each source instruction as a query to retrieve the top most instruction from the target instruction corpus and align the source instruction to the retrieved target instruction. Textual similarity Given a source recipe instruction and a target recipe instruction, we define a measure of textual similarity between the two instructions using the following five methods. For each source instruction, we compute its similarity score with every target instruction and align it to the target instruction with the highest score. a. Exact word match: Given two instructions, we define exact word match as the ratio of the number of common words between the two divided by the number of words in the longer of the two. This gives us a measure of word match that is comparable across instructions of different lengths. b. TF-IDF: We use all the recipes in our training set to create a term frequency (TF)-inverse document frequency (IDF) vectorizer. Given an instruction from the evaluation set, we compute the TF-IDF vector for the instruction using this vectorizer. Given two instructions, we define their TF-IDF similarity as the cosine similarity between their TF-IDF vectors. c. GloVe: We train GloVe embeddings (Pennington et al., 2014) on an in-domain corpus of 3 million words put together by combining text recipes and video transcriptions. Given an instruction, we average the GloVe embeddings (Pennington Methods Precision Recall F1 Random 14.26 14.00 12.69 Uniform alignment 41.38 31.85 33.22 BM25 retrieval 50.06 55.27 49.30 Textual Similarity Exact word match 53.90 48.39 46.98 TF-IDF 52.78 46.82 45.12 GloVe 56.04 51.89 50.30 BERT 50.72 55.07 49.10 RoBERTa 52.49 55.86 50.44 HMM+IBM1 Nouns 62.11 48.99 50.73 Nouns+Verbs 64.72 50.76 52.97 All words 66.21 52.42 54.55 Table 3: Results for text-text recipe alignment on Common Crawl dataset. et al., 2014) of nouns and verbs12 to obtain its embedding vector. Given two instructions, we define their embedding similarity as the cosine similarity of their embedding vectors. d. BERT: Given an instruction, we compute its embedding vector using BERT-based sentence embedding (Reimers and Gurevych, 2019). We experiment with different variants and find that the BERT-base model trained on AllNLI, then on STS benchmark training set13 performed the best for us. Given two instructions, we define their BERT similarity as the cosine similarity between their sentence embedding vectors. e. RoBERTa: We also experiment with a variant of the above baseline where we use RoBERTa (Liu et al., 2019) instead of BERT to compute the sentence embeddings. We use RoBERTa-large trained on AllNLI, then on STS benchmark training set. 4.3 Model Ablations We experiment with the following ablations of our unsupervised pairwise alignment model (§3.1): HMM+IBM1 (nouns) We use the NLTK14 partof-speech tagger to identify all the nouns in an instruction and only use those to learn the IBM1 word-level alignments. This ablation is similar to the model proposed by Naim et al. (2014) that align objects in videos to nouns in text. 12We find that using only nouns and verbs outperforms using all words. 13https://pypi.org/project/ sentence-transformers/ 14https://www.nltk.org/ 4877 HMM+IBM1 (nouns and verbs) We use both nouns and verbs to learn IBM1 word-level alignments. This ablation is similar to the method used in Song et al. (2016) that align objects and actions in videos to nouns and verbs in text. HMM+IBM1 (all words) We use all words (except stop words) in the source and the target recipe instructions to learn the word-level alignments.15 4.4 Evaluation Metrics Given M source recipe instructions and N target recipe instructions, the alignment task is to label each of the M source instructions with a label from [0, ..., (N −1)]. Given a predicted sequence of labels (from baseline or proposed model) and a reference sequence of labels (from human annotations) for a recipe pair, we calculate the weightedaverage16 precision, recall and F1 score. We average these scores across all alignment pairs to compute aggregate scores on the test set. 4.5 Results On text-video alignments Table 2 shows results of our pairwise alignment algorithm compared with baselines on 1,625 human aligned text-video recipe pairs from YouCook2. The BM25 baseline outperforms two of the textual similarity baselines. Within the textual similarity baselines, RoBERTa outperforms all others suggesting that a pretrained sentence level embedding acts as a good textual similarity method for this alignment task. The uniform alignment baseline, interestingly, outperforms all other baselines. This is mainly because in the YouCook2 dataset, the text instructions and the transcript sentences follow the same order, making uniform alignment a strong baseline. Our unsupervised HMM+IBM1 alignment model significantly outperforms (with p < 0.001) all baselines. Specifically, it gets much higher precision scores compared to all baselines. Under ablations of the HMM+IBM1 model, using all words to learn alignments works best. On text-text alignments Table 3 shows results of our pairwise alignment algorithm compared with baselines on 200 human-aligned text-text recipe pairs from Common Crawl. Unlike text-video alignments, we find that the uniform alignment 15Experimental details of HMM+IBM1 model is in supplementary. 16Calculate metrics for each label, and find their average weighted by the number of true instances for each label. baseline does not outperform textual similarity baselines, suggesting that the different re-orderings between text-text recipe pairs makes alignment more challenging. Within textual similarity baselines, similar to text-video alignment, RoBERTa outperforms all others. We believe this is because text recipes tend to share similar vocabulary, making it easier to find similar words between two textual instructions. Video narrators tend to use more colloquial language than the authors of text recipes, making it more difficult to learn alignments using word similarities. Interestingly, both BM25 and RoBERTa get higher recall than our best HMM+IBM1 model but they lose out on precision. This suggests that retrieval models are good for identifying more alignments, albeit with lower precision. Our unsupervised HMM+IBM1 model again significantly outperforms (p < 0.001) all baselines on F1 score. Under ablations of the HMM+IBM1 model, we again find that using all words to learn alignments performs best. Comparing text-video and text-text alignment results On comparing Table 2 and Table 3, we find that textual similarity baselines have overall higher scores on the text-text alignments than the text-video alignments. Our HMM+IBM1 model, on the other hand, has overall higher scores on text-video alignments than on text-text alignments. We attribute this contrast to the fact that two text recipes have higher vocabulary similarities than a text and a video recipe, resulting in textual similarity baselines to perform well on text-text alignments. Our HMM+IBM1 unsupervised learning model is able to do better on text-video pairs where the word usage differences are higher. Furthermore, the text-video pairs from YouCook2 are temporally aligned whereas the text-text pairs from Common Crawl have several re-orderings making the text-text evaluation set comparatively harder. The supplementary material includes an analysis of alignment outputs. 5 Data Release We describe the data released in our MICROSOFT RESEARCH MULTIMODAL ALIGNED RECIPE CORPUS. In all our released data, for text recipes, we include the actual text of the instructions. Whereas, for video recipes, we release the URL to the YouTube video with timestamps corresponding to the aligned video segments. 4878 Single Step Multiple Steps Beat eggs, oil vanilla and sugar together in a large bowl. 1.Beat eggs in large bowl until foamy. 2. Add sugar, oil and vanilla mix well. Butter 2 loaf pans and bake 1 hour at 325 degrees. 1. Pour into greased muffin tins or loaf pans 2. Yields about 4 small loaves or 2 large. 3. Bake for 25 minutes. Mix the zucchini, sugar, oil, yogurt and egg in a bowl. 1. Beat eggs, sugar, oil and vanilla. 2. Add zucchini. Table 4: Three examples of single-step to multi-step breakdown from the pairwise alignments. Figure 5: We plot the trade-off between the percentage of paraphrases extracted and the precision, recall and F1 score (as measured by human annotators) with increasing alignment probability threshold on 200 human-aligned text-text recipes pairs. 5.1 Pairwise and Joint Alignments We release the pairwise alignments between recipes of the same dish (derived from § 3.1) for 4,262 dishes. This includes 63,528 alignments between text recipes, 65,432 alignments between text and video recipes; and 19,988 alignments between video recipes. We also release the joint alignments between multiple text and multiple video recipes within a dish (derived from §3.2) for 4,262 dishes. 5.2 Textual and Visual Paraphrases The pairwise alignment algorithm described in §3.1 gives alignment probabilities for each pair of instructions it aligns. We threshold on these alignment probabilities to retrieve textual and visual paraphrases. Since our goal is to extract large number of high quality paraphrases, we decide on the threshold value by looking at the trade-off between the percentage of paraphrases extracted and their quality as measured by human annotators on 200 human-aligned text-text recipe pairs from our evaluation set (§4.1). Figure 5 shows the trade-off between the precision, recall and F1 score and the percentage of paraphrases extracted with increasing threshold on instruction-level alignment probability. At 0.5 threshold, we extract 60% of the total alignments as paraphrases from our evaluation set. We use this threshold value of 0.5 on the pairwise alignments in the training, validation and test sets to extract a total of 358,516 textual paraphrases and 211,703 text-to-video paraphrases from 4,262 dishes and include it in our corpus. 5.3 Single-step to Multi-step breakdown The pairwise alignments between text recipes include many instances where one instruction in one recipe is aligned to multiple instructions in another recipe with high alignment probability (greater than 0.9). Table 4 shows three such single-step to multistep breakdown. We extract a total of 5,592 such instances from 1,662 dishes across the training, validation and test sets and include it in our corpus. 6 Applications of Our Corpus We believe that our data release will help advance research at the intersection of language, vision and robotics. The pairwise alignment between recipes within a dish could be useful in training models that learn to rewrite recipes given ingredient or cooking method based constraints. The joint alignment over multiple text recipes within a dish should prove useful for learning the types of ingredient substitutions and instruction reordering that come naturally to expert cooks. The textual and visual paraphrases will, we believe, have implications for tasks like textual similarity, image and video captioning, dense video captioning and action recognition. The single-step to multi-step breakdown derived from our pairwise alignments may also prove useful for understanding task simplification, an important problem for agents performing complex actions. Such multimodal data at scale is a crucial ingredient for robots to learn-from-demonstrations 4879 of procedural tasks in a variety of environments. Collecting such large scale data is prohibitively expensive in robotics since it requires extensive instrumentation of many different environments. Other example applications are learning to ground natural language to physical objects in the environment, and catching when humans are about to commit critical errors in a complicated task and offering to help with corrective instructions. 7 Related Work Alignment Algorithms Our unsupervised alignment algorithm is based on Naim et al. (2014), who propose a hierarchical alignment model using nouns and objects to align text instructions to videos. Song et al. (2016) further build on this work to make use of action codewords and verbs. Bojanowski et al. (2015) view the alignment task as a temporal assignment problem and solve it using an efficient conditional gradient algorithm. Malmaud et al. (2015) use an HMM-based method to align recipe instructions to cooking video transcriptions that follow the same order. Our work contrasts with these works in two ways: we learn alignments between instructions that do not necessarily follow the same order; and our algorithm is trained on a much larger scale dataset. Multi-modal Instructional Datasets Marin et al. (2019) introduce a corpus of 1 million cooking recipes paired with 13 million food images for the task of retrieving a recipe given an image. YouCook2 dataset (Zhou et al., 2018a) consists of 2,000 recipe videos with human written descriptions for each video segment. The How2 dataset (Sanabria et al., 2018) consists of 79,114 instructional videos with English subtitles and crowdsourced Portuguese translations. The COIN dataset (Tang et al., 2019) consists of 11,827 videos of 180 tasks in 12 daily life domains. YouMakeup (Wang et al., 2019) consists of 2,800 YouTube videos, annotated with natural language descriptions for instructional steps, grounded in temporal video range and spatial facial areas. Leveraging Document Level Alignments Our work relies on the assumption that text recipes and instructional cooking videos of the same dish are comparable. This idea has been used to extract parallel sentences from comparable corpora to increase the number of training examples for machine translation (Munteanu and Marcu, 2005; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010; Gr´egoire and Langlais, 2018). Likewise, TalkSumm (Lev et al., 2019) use the transcripts of scientific conference talks to automatically extract summaries. Zhu et al. (2015) use books and movie adaptations of the books to extract descriptive explanations of movie scenes. Related Tasks A related task is localizing and classifying steps in instructional videos (Alayrac et al., 2016; Zhukov et al., 2019) where they detect when an action is performed in the video whereas we focus on describing actions. Dense event captioning of instructional videos (Zhou et al., 2018b; Li et al., 2018; Hessel et al., 2019) relies on human curated, densely labeled datasets whereas we extract descriptions of videos automatically through our alignments. 8 Conclusion We introduce a novel two-stage unsupervised algorithm for aligning multiple text and multiple video recipes. We use an existing algorithm to first learn pairwise alignments and then use a graph-based algorithm to derive the joint alignments across multiple recipes describing the same dish. We release a large-scale dataset constructed using this algorithm consisting of joint alignments between multiple text and video recipes along with useful commonsense information such as textual and visual paraphrases; and single-step to multi-step breakdown. Although our dataset focuses on the cooking domain, our framework should generalize to any domain with abundant volumes of unstructured-butalignable multi-modal data. DIY (Do-It-Yourself) videos and websites, for instance, are an obvious next target. We also envision extending this work by including audio and video features to enhance the quality of our alignment algorithm. Ultimately, we believe this work will further the goal of building agents that can work with human collaborators to carry out complex tasks in the real world. Acknowledgments We would like to thank Harpreet Sawhney, Roshan Rao, Prasoon Goyal, Dilip Arumugam and Raymond J. Mooney for all their help. We would also like to thank the four anonymous reviewers for their useful comments and suggestions. 4880 References Sadaf Abdul-Rauf and Holger Schwenk. 2009. On the use of comparable corpora to improve SMT performance. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 16–23, Athens, Greece. Association for Computational Linguistics. Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. 2016. Unsupervised Learning from Narrated Instruction Videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4575–4583. Piotr Bojanowski, R´emi Lajugie, Edouard Grave, Francis Bach, Ivan Laptev, Jean Ponce, and Cordelia Schmid. 2015. Weakly-Supervised Alignment of Video with Text. In Proceedings of the IEEE international conference on computer vision, pages 4462–4470. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263– 311. DeepMind. 2019. AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/article/alphastarmastering-real-time-strategy-game-starcraft-ii. Francis Gr´egoire and Philippe Langlais. 2018. Extracting parallel sentences with bidirectional recurrent neural networks to improve machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1442–1453, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Jack Hessel, Bo Pang, Zhenhai Zhu, and Radu Soricut. 2019. A case study on combining ASR and visual features for generating instructional video captions. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 419–429, Hong Kong, China. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, and David Konopnicki. 2019. TalkSumm: A dataset and scalable annotation method for scientific paper summarization based on conference talks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2125–2131, Florence, Italy. Association for Computational Linguistics. Yehao Li, Ting Yao, Yingwei Pan, Hongyang Chao, and Tao Mei. 2018. Jointly Localizing and Describing Events for Dense Video Captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7492–7500. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nicholas Johnston, Andrew Rabinovich, and Kevin Murphy. 2015. What’s cookin’? interpreting cooking videos using text, speech and vision. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 143–152, Denver, Colorado. Association for Computational Linguistics. Javier Marin, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf Aytar, Ingmar Weber, and Antonio Torralba. 2019. Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images. IEEE Transactions on Pattern Analysis and Machine Intelligence. Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4):477–504. Iftekhar Naim, Young Chol Song, Qiguang Liu, Henry Kautz, Jiebo Luo, and Daniel Gildea. 2014. Unsupervised Alignment of Natural Language Instructions with Video Segments. In Twenty-Eighth AAAI Conference on Artificial Intelligence. OpenAI. 2019. Dota 2 with Large Scale Deep Reinforcement Learning. arXiv preprint arXiv:1912.06680. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Lawrence R Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257– 286. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing 4881 and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Stephen Robertson, Hugo Zaragoza, et al. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends R⃝in Information Retrieval, 3(4):333–389. Amaia Salvador, Nicholas Hynes, Yusuf Aytar, Javier Marin, Ferda Ofli, Ingmar Weber, and Antonio Torralba. 2017. Learning Cross-modal Embeddings for Cooking Recipes and Food Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Lo¨ıc Barrault, Lucia Specia, and Florian Metze. 2018. How2: A Large-scale Dataset for Multimodal Language Understanding. In NeurIPS. Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, and David Silver. 2019. Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model. Ozan Sener, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. 2015. Unsupervised Semantic Parsing of Video Collections. In Proceedings of the IEEE International Conference on Computer Vision, pages 4480–4488. Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 403–411, Los Angeles, California. Association for Computational Linguistics. Young Chol Song, Iftekhar Naim, Abdullah Al Mamun, Kaustubh Kulkarni, Parag Singla, Jiebo Luo, Daniel Gildea, and Henry A Kautz. 2016. Unsupervised Alignment of Actions in Video with Text Descriptions. In IJCAI, pages 2025–2031. Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. 2019. COIN: A Large-scale Dataset for Comprehensive Instructional Video Analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1207–1216. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics. Weiying Wang, Yongcheng Wang, Shizhe Chen, and Qin Jin. 2019. YouMakeup: A large-scale domainspecific multimodal dataset for fine-grained semantic comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 5133–5143, Hong Kong, China. Association for Computational Linguistics. Luowei Zhou, Chenliang Xu, and Jason J Corso. 2018a. Towards Automatic Learning of Procedures from Web Instructional Videos. In Thirty-Second AAAI Conference on Artificial Intelligence, pages 7590– 7598. Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. 2018b. End-to-End Dense Video Captioning with Masked Transformer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8739– 8748. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books. In Proceedings of the IEEE International Conference on Computer Vision, pages 19–27. Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gokberk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. 2019. Cross-task weakly supervised learning from instructional videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3537–3545. A Supplemental Material In this supplementary, we describe the details of our data collection process (§A.1), experimental details of our algorithm (§A.2) and provide analysis of our alignment outputs (§A.3). A.1 Details of Data Collection A.1.1 Common Crawl Text Recipes We use recipe data from Common Crawl 17 that has metadata formatted according to the Schema.org Recipe schema 18 including title, ingredients, instructions, and a URL to the recipe source. There were originally 3.2 million recipes extracted from Common Crawl. We filter the data by limiting the data to recipes with instructions written in English, removing recipes with titles that are longer than 5 words, removing duplicate recipes, removing recipes where the recipe title contains words that are not in the top 50% most common words 17https://commoncrawl.org/ 18https://schema.org/Recipe 4882 that occur in the recipe titles, and removing recipes with fewer than 2 steps. After filtering the data, we clustered the recipes into dishes using exact match on the recipe titles. We only retain recipes from dishes that have at least three recipes. The final dataset has a total of 4,262 dishes and 48,852 recipes with an average of 8 instructions per recipe. A.1.2 YouTube Video Recipes Given the dish names from the text recipes, we extract YouTube video recipes for each of the dishes. The number of videos extracted for each dish is proportional to the number of text recipes found for that dish. For instance, for a more popular dish like chocolate chip cookies, we would extract more text and video recipes than for a less popular dish like creme brulee. The number of videos extracted ranges from 3 to 100. A.1.3 Chat/Content Classifier Instructional cooking videos can contain a lot of non-instructional content (“chat”). For example, the person cooking the dish often introduces themselves (or their video channel) at the beginning of the video. They sometimes also introduce the dish they are going to prepare and suggest pairings for the dish. The non-instruction content are often found in the beginning and towards the end of the video but there are several instances of “chat” interspersed with instructional content as well. Since we wish to align these videos to text recipe instructions that do not contain non-instructional information, we need a way to remove non-instructional content. We train a supervised neural network based classifier for this task. We train our classifier using the YouCook2 dataset (Zhou et al., 2018a) of 1,500 videos across 90 dishes. This dataset was created by asking humans to identify segments of a video that correspond to an instruction and annotate each segment with an imperative statement describing the action being executed in the video segment. We make the assumption that the transcript sentences that are included within an annotated video segment are instructional whereas those that are not included within an annotated video segment are noninstructional. We first transcribe all 1,500 videos in the dataset using a commercial transcription web service. We split the transcription into sentences using a sentence tokenizer. We label a transcript sentence with the label 1 if the corresponding video segment was annotated and with the label 0 if it was not. We get a total of 90,927 labelled transcript sentences which we split by dishes into the training (73,728 examples), validation (7,767 examples) and test (9,432 examples) sets. We use an LSTM (long-short term memory) model (Hochreiter and Schmidhuber, 1997) with attention (Luong et al., 2015) to train a binary classifier on this data. We initialize (and freeze) our 300-dimensional word embeddings using GloVe (Pennington et al., 2014) vectors trained on 330 million tokens that we obtain by combining all text recipes and transcript sentences. We use the validation set to tune hyperparametrs of our LSTM classifier (hidden size: 64, learning rate: 0.00001, batch size: 64, number of layers: 1). Our chat/content classifier achieves 86.76 precision, 84.26 recall and 85.01 F1 score on the held out test set. A.1.4 Recipe Pair Pruning Strategy We define the following two pruning strategies to reduce the number of extracted recipe pairs: Ingredient match: Each of our text recipes from Common Crawl contains an ingredients list. Video recipes from YouTube however do not contain ingredient lists. We therefore estimate the ingredients for video recipes using text recipes of the same dish. We construct a set of ingredients at the dish level by combining all ingredients of the text recipes within that dish. We then use this dish-level ingredients information to identify ingredient words from the words of video transcriptions. Given a recipe pair, we compare the ingredients of the two recipes and if the percentage of ingredients that match is below a threshold, we remove the pair. For text-text and text-video recipe pair, we set this threshold to be 70%, whereas for video-video recipe pair, we set this threshold to be 90% (since video-video recipe pairs tend to be more noisy). Instruction length match: For text-text recipe pairs, if number of instructions in one recipe is more than double the number of instructions in another recipe, we remove the pair. For video recipes, if there are more than 100 sentences in the transcript after removing the background sentences, we remove that video recipe. A.2 Details of HMM+IBM1 Model We train the HMM+IBM1 pairwise alignment model on three kinds of recipe pairs: text-text, text-video and video-video. The lower level IBM1 model works on words of text instruction or transcript sentences. The vocabulary size of all the 4883 text recipes from 4,262 dishes put together totals to 48,609 words. Since most words do not appear very frequently across the text recipes corpus, we reduce the vocabulary size to 13,061 by removing words that occur fewer than 5 times in the training set. Likewise, we reduce the vocabulary size of video recipe transcriptions to 16,733 words (from 88,744 words) by removing words that occur fewer than 15 times in the training set. We first train the HMM+IBM1 model for 3 iterations with a jump range of [−1, 0, +1] and further train it for 2 iteration with a jump range of [−2, 0, +2]. We find that warm starting the model with a shorter range helps the model to learn better alignments. A.3 Alignment Output Analysis Table 5 shows the alignment between two text recipes for chocolate chip cookies obtained by our pairwise algorithm. The alignment task here is to align each instruction in the source recipe to one of the instructions in the target recipe. The table displays all the instructions in the source recipe in the second column. The first column of the table displays instructions from the target recipe that aligns to the source recipe instruction in the same row. The sentence level probabilities are shown in the last column. We can see the reordering between the two recipes by comparing the instruction indices. We see that instructions 0 to 2 from the source are aligned to target instructions with very high probabilities suggesting they are close paraphrases. Instruction 3 and 8 from the source, on the other hand, are aligned with comparatively lower probabilities to the target and we can see that in these two cases, the two instructions do differ in meaning. Instructions 6,7 and 8 (in source) aligned to instruction 11 (in target) is an example of single step to multi-step breakdown. 4884 Target recipe instruction Source recipe instruction Probability 0: Preheat your oven to 350 degrees F. 0: Preheat the oven to 350 degrees F. 0.9999 2: In the bowl of your mixer cream 1: In a large bowl or the bowl of a stand 0.9998 together your butter and sugars until mixer cream the butter sugar brown sugar light and fluffy about 3-5 minutes. eggs & vanilla together until smooth & fluffy. 1: Sift together the flour baking soda 2: In another bowl whisk together 0.9997 baking powder and salt into a medium the flour salt baking powder and baking soda. sized bowl and set aside. 4: Add in the vanilla and mix. 3: Add this to the butter mixture 0.6889 and mix until well combined. 6: Fold in your chocolate until evenly 4: Stir in the chocolate chips. 0.9820 added throughout the dough. 8: Scoop your dough out onto the sheets. 5: Form the dough into golf-ball sized 0.9997 balls and place them about 2 inches apart on a baking sheet. 11: Bake 10-12 minutes for smaller cookies 6: Bake for 9-10 minutes just until the 0.9912 or 18-20 minutes for larger cookies. edges start to brown lightly. 11: Bake 10-12 minutes for smaller cookies 7: Do not overbake them or they will be 0.9528 or 18-20 minutes for larger cookies. crispy rather than chewy. 11: Bake 10-12 minutes for smaller cookies 8: They still look underbaked when you 0.6465 or 18-20 minutes for larger cookies. take them out but will firm up as they cool. 12: Allow the cookies to cool slightly 9: Let them cool on the pan for about 5 0.9973 on your baking sheet then move them to minutes and them move to a wire rack another surface to cool completely. to cool completely. 14: Store in an air-tight container at 10: Cookies will keep for 7 days in 0.8309 room temperature for up to 3 days or a sealed container at room temperature. freeze for up to 2 months. Table 5: Alignment between two text recipes of chocolate chip cookie with their sentence level probabilities.
2020
440
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885–4901 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4885 Adversarial NLI: A New Benchmark for Natural Language Understanding Yixin Nie∗, Adina Williams†, Emily Dinan†, Mohit Bansal∗, Jason Weston†, Douwe Kiela† ∗UNC Chapel Hill †Facebook AI Research Abstract We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. Our analysis sheds light on the shortcomings of current state-of-theart models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate. 1 Introduction Progress in AI has been driven by, among other things, the development of challenging large-scale benchmarks like ImageNet (Russakovsky et al., 2015) in computer vision, and SNLI (Bowman et al., 2015), SQuAD (Rajpurkar et al., 2016), and others in natural language processing (NLP). Recently, for natural language understanding (NLU) in particular, the focus has shifted to combined benchmarks like SentEval (Conneau and Kiela, 2018) and GLUE (Wang et al., 2018), which track model performance on multiple tasks and provide a unified platform for analysis. With the rapid pace of advancement in AI, however, NLU benchmarks struggle to keep up with model improvement. Whereas it took around 15 years to achieve “near-human performance” on MNIST (LeCun et al., 1998; Cires¸an et al., 2012; Wan et al., 2013) and approximately 7 years to surpass humans on ImageNet (Deng et al., 2009; Russakovsky et al., 2015; He et al., 2016), the GLUE benchmark did not last as long as we would have hoped after the advent of BERT (Devlin et al., 2018), and rapidly had to be extended into SuperGLUE (Wang et al., 2019). This raises an important question: Can we collect a large benchmark dataset that can last longer? The speed with which benchmarks become obsolete raises another important question: are current NLU models genuinely as good as their high performance on benchmarks suggests? A growing body of evidence shows that state-of-the-art models learn to exploit spurious statistical patterns in datasets (Gururangan et al., 2018; Poliak et al., 2018; Tsuchiya, 2018; Glockner et al., 2018; Geva et al., 2019; McCoy et al., 2019), instead of learning meaning in the flexible and generalizable way that humans do. Given this, human annotators—be they seasoned NLP researchers or non-experts— might easily be able to construct examples that expose model brittleness. We propose an iterative, adversarial human-andmodel-in-the-loop solution for NLU dataset collection that addresses both benchmark longevity and robustness issues. In the first stage, human annotators devise examples that our current best models cannot determine the correct label for. These resulting hard examples—which should expose additional model weaknesses—can be added to the training set and used to train a stronger model. We then subject the strengthened model to the same procedure and collect weaknesses over several rounds. After each round, we train a new model and set aside a new test set. The process can be iteratively repeated in a never-ending learning (Mitchell et al., 2018) setting, with the model getting stronger and the test set getting harder in each new round. Thus, not only is the resultant dataset harder than existing benchmarks, but this process also yields a “moving post” dynamic target for NLU systems, rather than a static benchmark that will eventually saturate. Our approach draws inspiration from recent ef4886 Context Target Label Hypothesis Writer Compare Prediction Verifier Disagree Train Dev Test Agree Step 1: Write examples Step 2: Get model feedback Step 3: Verify examples and make splits Step 4: Retrain model for next round Training Phase Collection Phase Feedback Model correct Model wrong Figure 1: Adversarial NLI data collection via human-and-model-in-the-loop enabled training (HAMLET). The four steps make up one round of data collection. In step 3, model-correct examples are included in the training set; development and test sets are constructed solely from model-wrong verified-correct examples. forts that gamify collaborative training of machine learning agents over multiple rounds (Yang et al., 2017) and pit “builders” against “breakers” to learn better models (Ettinger et al., 2017). Recently, Dinan et al. (2019) showed that such an approach can be used to make dialogue safety classifiers more robust. Here, we focus on natural language inference (NLI), arguably the most canonical task in NLU. We collected three rounds of data, and call our new dataset Adversarial NLI (ANLI). Our contributions are as follows: 1) We introduce a novel human-and-model-in-the-loop dataset, consisting of three rounds that progressively increase in difficulty and complexity, that includes annotator-provided explanations. 2) We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks. 3) We provide a detailed analysis of the collected data that sheds light on the shortcomings of current models, categorizes the data by inference type to examine weaknesses, and demonstrates good performance on NLI stress tests. The ANLI dataset is available at github.com/facebookresearch/anli/. A demo is available at adversarialnli.com. 2 Dataset collection The primary aim of this work is to create a new large-scale NLI benchmark on which current stateof-the-art models fail. This constitutes a new target for the field to work towards, and can elucidate model capabilities and limitations. As noted, however, static benchmarks do not last very long these days. If continuously deployed, the data collection procedure we introduce here can pose a dynamic challenge that allows for never-ending learning. 2.1 HAMLET To paraphrase the great bard (Shakespeare, 1603), there is something rotten in the state of the art. We propose Human-And-Model-in-the-Loop Enabled Training (HAMLET), a training procedure to automatically mitigate problems with current dataset collection procedures (see Figure 1). In our setup, our starting point is a base model, trained on NLI data. Rather than employing automated adversarial methods, here the model’s “adversary” is a human annotator. Given a context (also often called a “premise” in NLI), and a desired target label, we ask the human writer to provide a hypothesis that fools the model into misclassifying the label. One can think of the writer as a “white hat” hacker, trying to identify vulnerabilities in the system. For each human-generated example that is misclassified, we also ask the writer to provide a reason why they believe it was misclassified. For examples that the model misclassified, it is necessary to verify that they are actually correct —i.e., that the given context-hypothesis pairs genuinely have their specified target label. The best way to do this is to have them checked by another human. Hence, we provide the example to human verifiers. If two human verifiers agree with the writer, the example is considered a good example. If they disagree, we ask a third human verifier to break the tie. If there is still disagreement between the writer and the verifiers, the example is discarded. If the verifiers disagree, they can over4887 Context Hypothesis Reason Round Labels Annotations orig. pred. valid. Roberto Javier Mora Garc´ıa (c. 1962 – 16 March 2004) was a Mexican journalist and editorial director of “El Ma˜nana”, a newspaper based in Nuevo Laredo, Tamaulipas, Mexico. He worked for a number of media outlets in Mexico, including the “El Norte” and “El Diario de Monterrey”, prior to his assassination. Another individual laid waste to Roberto Javier Mora Garcia. The context states that Roberto Javier Mora Garcia was assassinated, so another person had to have “laid waste to him.” The system most likely had a hard time figuring this out due to it not recognizing the phrase “laid waste.” A1 (Wiki) E N E E Lexical (assassination, laid waste), Tricky (Presupposition), Standard (Idiom) A melee weapon is any weapon used in direct hand-to-hand combat; by contrast with ranged weapons which act at a distance. The term “melee” originates in the 1640s from the French word “m˘el´ee”, which refers to hand-to-hand combat, a close quarters battle, a brawl, a confused fight, etc. Melee weapons can be broadly divided into three categories Melee weapons are good for ranged and hand-to-hand combat. Melee weapons are good for hand to hand combat, but NOT ranged. A2 (Wiki) C E C N C Standard (Conjunction), Tricky (Exhaustification), Reasoning (Facts) If you can dream it, you can achieve it—unless you’re a goose trying to play a very human game of rugby. In the video above, one bold bird took a chance when it ran onto a rugby field mid-play. Things got dicey when it got into a tussle with another player, but it shook it off and kept right on running. After the play ended, the players escorted the feisty goose off the pitch. It was a risky move, but the crowd chanting its name was well worth it. The crowd believed they knew the name of the goose running on the field. Because the crowd was chanting its name, the crowd must have believed they knew the goose’s name. The word “believe” may have made the system think this was an ambiguous statement. A3 (News) E N E E Reasoning (Facts), Reference (Coreference) Table 1: Examples from development set. ‘An’ refers to round number, ‘orig.’ is the original annotator’s gold label, ‘pred.’ is the model prediction, ‘valid.’ are the validator labels, ‘reason’ was provided by the original annotator, ‘Annotations’ are the tags determined by an linguist expert annotator. rule the original target label of the writer. Once data collection for the current round is finished, we construct a new training set from the collected data, with accompanying development and test sets, which are constructed solely from verified correct examples. The test set was further restricted so as to: 1) include pairs from “exclusive” annotators who are never included in the training data; and 2) be balanced by label classes (and genres, where applicable). We subsequently train a new model on this and other existing data, and repeat the procedure. 2.2 Annotation details We employed Mechanical Turk workers with qualifications and collected hypotheses via the ParlAI1 framework. Annotators are presented with a context and a target label—either ‘entailment’, ‘contradiction’, or ‘neutral’—and asked to write a hypothesis that corresponds to the label. We phrase the label classes as “definitely correct”, “definitely incorrect”, or “neither definitely correct nor definitely incorrect” given the context, to make the task easier to grasp. Model predictions are obtained for the context and submitted hypothesis pair. The probability of each label is shown to the worker as feedback. If the model prediction was incorrect, the job is complete. If not, the worker continues to write hypotheses for the given (context, targetlabel) pair until the model predicts the label incor1https://parl.ai/ rectly or the number of tries exceeds a threshold (5 tries in the first round, 10 tries thereafter). To encourage workers, payments increased as rounds became harder. For hypotheses that the model predicted incorrectly, and that were verified by other humans, we paid an additional bonus on top of the standard rate. 2.3 Round 1 For the first round, we used a BERT-Large model (Devlin et al., 2018) trained on a concatenation of SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2017), and selected the best-performing model we could train as the starting point for our dataset collection procedure. For Round 1 contexts, we randomly sampled short multi-sentence passages from Wikipedia (of 250-600 characters) from the manually curated HotpotQA training set (Yang et al., 2018). Contexts are either ground-truth contexts from that dataset, or they are Wikipedia passages retrieved using TF-IDF (Chen et al., 2017) based on a HotpotQA question. 2.4 Round 2 For the second round, we used a more powerful RoBERTa model (Liu et al., 2019b) trained on SNLI, MNLI, an NLI-version2 of FEVER (Thorne et al., 2018), and the training data from the previous round (A1). After a hyperparameter search, we 2The NLI version of FEVER pairs claims with evidence retrieved by Nie et al. (2019) as (context, hypothesis) inputs. 4888 Dataset Genre Context Train / Dev / Test Model error rate Tries Time (sec.) Unverified Verified mean/median per verified ex. A1 Wiki 2,080 16,946 / 1,000 / 1,000 29.68% 18.33% 3.4 / 2.0 199.2 / 125.2 A2 Wiki 2,694 45,460 / 1,000 / 1,000 16.59% 8.07% 6.4 / 4.0 355.3 / 189.1 A3 Various 6,002 100,459 / 1,200 / 1,200 17.47% 8.60% 6.4 / 4.0 284.0 / 157.0 (Wiki subset) 1,000 19,920 / 200 / 200 14.79% 6.92% 7.4 / 5.0 337.3 / 189.6 ANLI Various 10,776 162,865 / 3,200 / 3,200 18.54% 9.52% 5.7 / 3.0 282.9 / 156.3 Table 2: Dataset statistics: ‘Model error rate’ is the percentage of examples that the model got wrong; ‘unverified’ is the overall percentage, while ‘verified’ is the percentage that was verified by at least 2 human annotators. selected the model with the best performance on the A1 development set. Then, using the hyperparameters selected from this search, we created a final set of models by training several models with different random seeds. During annotation, we constructed an ensemble by randomly picking a model from the model set as the adversary each turn. This helps us avoid annotators exploiting vulnerabilities in one single model. A new non-overlapping set of contexts was again constructed from Wikipedia via HotpotQA using the same method as Round 1. 2.5 Round 3 For the third round, we selected a more diverse set of contexts, in order to explore robustness under domain transfer. In addition to contexts from Wikipedia for Round 3, we also included contexts from the following domains: News (extracted from Common Crawl), fiction (extracted from StoryCloze (Mostafazadeh et al., 2016) and CBT (Hill et al., 2015)), formal spoken text (excerpted from court and presidential debate transcripts in the Manually Annotated Sub-Corpus (MASC) of the Open American National Corpus3), and causal or procedural text, which describes sequences of events or actions, extracted from WikiHow. Finally, we also collected annotations using the longer contexts present in the GLUE RTE training data, which came from the RTE5 dataset (Bentivogli et al., 2009). We trained an even stronger RoBERTa ensemble by adding the training set from the second round (A2) to the training data. 2.6 Comparing with other datasets The ANLI dataset, comprising three rounds, improves upon previous work in several ways. First, and most obviously, the dataset is collected to be more difficult than previous datasets, by design. Second, it remedies a problem with SNLI, 3anc.org/data/masc/corpus/ namely that its contexts (or premises) are very short, because they were selected from the image captioning domain. We believe longer contexts should naturally lead to harder examples, and so we constructed ANLI contexts from longer, multisentence source material. Following previous observations that models might exploit spurious biases in NLI hypotheses, (Gururangan et al., 2018; Poliak et al., 2018), we conduct a study of the performance of hypothesisonly models on our dataset. We show that such models perform poorly on our test sets. With respect to data generation with na¨ıve annotators, Geva et al. (2019) noted that models can pick up on annotator bias, modelling annotator artefacts rather than the intended reasoning phenomenon. To counter this, we selected a subset of annotators (i.e., the “exclusive” workers) whose data would only be included in the test set. This enables us to avoid overfitting to the writing style biases of particular annotators, and also to determine how much individual annotator bias is present for the main portion of the data. Examples from each round of dataset collection are provided in Table 1. Furthermore, our dataset poses new challenges to the community that were less relevant for previous work, such as: can we improve performance online without having to train a new model from scratch every round, how can we overcome catastrophic forgetting, how do we deal with mixed model biases, etc. Because the training set includes examples that the model got right but were not verified, learning from noisy and potentially unverified data becomes an additional interesting challenge. 3 Dataset statistics The dataset statistics can be found in Table 2. The number of examples we collected increases per round, starting with approximately 19k examples for Round 1, to around 47k examples for Round 2, 4889 Model Training Data A1 A2 A3 ANLI ANLI-E SNLI MNLI-m/-mm BERT S,M⋆1 00.0 28.9 28.8 19.8 19.9 91.3 86.7 / 86.4 +A1 44.2 32.6 29.3 35.0 34.2 91.3 86.3 / 86.5 +A1+A2 57.3 45.2 33.4 44.6 43.2 90.9 86.3 / 86.3 +A1+A2+A3 57.2 49.0 46.1 50.5 46.3 90.9 85.6 / 85.4 S,M,F,ANLI 57.4 48.3 43.5 49.3 44.2 90.4 86.0 / 85.8 XLNet S,M,F,ANLI 67.6 50.7 48.3 55.1 52.0 91.8 89.6 / 89.4 RoBERTa S,M 47.6 25.4 22.1 31.1 31.4 92.6 90.8 / 90.6 +F 54.0 24.2 22.4 32.8 33.7 92.7 90.6 / 90.5 +F+A1⋆2 68.7 19.3 22.0 35.8 36.8 92.8 90.9 / 90.7 +F+A1+A2⋆3 71.2 44.3 20.4 43.7 41.4 92.9 91.0 / 90.7 S,M,F,ANLI 73.8 48.9 44.4 53.7 49.7 92.6 91.0 / 90.6 Table 3: Model Performance. ‘S’ refers to SNLI, ‘M’ to MNLI dev (-m=matched, -mm=mismatched), and ‘F’ to FEVER; ‘A1–A3’ refer to the rounds respectively and ‘ANLI’ refers to A1+A2+A3, ‘-E’ refers to test set examples written by annotators exclusive to the test set. Datasets marked ‘⋆n’ were used to train the base model for round n, and their performance on that round is underlined (A2 and A3 used ensembles, and hence have non-zero scores). to over 103k examples for Round 3. We collected more data for later rounds not only because that data is likely to be more interesting, but also simply because the base model is better and so annotation took longer to collect good, verified correct examples of model vulnerabilities. For each round, we report the model error rate, both on verified and unverified examples. The unverified model error rate captures the percentage of examples where the model disagreed with the writer’s target label, but where we are not (yet) sure if the example is correct. The verified model error rate is the percentage of model errors from example pairs that other annotators confirmed the correct label for. Note that error rate is a useful way to evaluate model quality: the lower the model error rate—assuming constant annotator quality and context-difficulty—the better the model. We observe that model error rates decrease as we progress through rounds. In Round 3, where we included a more diverse range of contexts from various domains, the overall error rate went slightly up compared to the preceding round, but for Wikipedia contexts the error rate decreased substantially. While for the first round roughly 1 in every 5 examples were verified model errors, this quickly dropped over consecutive rounds, and the overall model error rate is less than 1 in 10. On the one hand, this is impressive, and shows how far we have come with just three rounds. On the other hand, it shows that we still have a long way to go if even untrained annotators can fool ensembles of state-of-the-art models with relative ease. Table 2 also reports the average number of “tries”, i.e., attempts made for each context until a model error was found (or the number of possible tries is exceeded), and the average time this took (in seconds). Again, these metrics are useful for evaluating model quality: observe that the average number of tries and average time per verified error both go up with later rounds. This demonstrates that the rounds are getting increasingly more difficult. Further dataset statistics and inter-annotator agreement are reported in Appendix C. 4 Results Table 3 reports the main results. In addition to BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019b), we also include XLNet (Yang et al., 2019) as an example of a strong, but different, model architecture. We show test set performance on the ANLI test sets per round, the total ANLI test set, and the exclusive test subset (examples from test-set-exclusive workers). We also show accuracy on the SNLI test set and the MNLI development set (for the purpose of comparing between different model configurations across table rows). In what follows, we discuss our observations. Base model performance is low. Notice that the base model for each round performs very poorly on that round’s test set. This is the expected outcome: For round 1, the base model gets the entire test set wrong, by design. For rounds 2 and 3, we used an ensemble, so performance is not necessarily zero. However, as it turns out, performance still falls well below chance4, indicating that workers did not find vulnerabilities specific to a single model, but generally applicable ones for that model class. 4Chance is at 33%, since the test set labels are balanced. 4890 S M MM A1 A2 A3 Evaluation Set 0 10 20 30 40 50 60 70 80 Accuracy Training Data A1 A1D1+A2D1 A1D2+A2D2+A3D2 Figure 2: RoBERTa performance on dev, with A1– 3 downsampled s.t. |A1D1|=|A2D1|= 1 2|A1| and |A1D2|=|A2D2|=|A3D2|= 1 3|A1|. Rounds become increasingly more difficult. As already foreshadowed by the dataset statistics, round 3 is more difficult (yields lower performance) than round 2, and round 2 is more difficult than round 1. This is true for all model architectures. Training on more rounds improves robustness. Generally, our results indicate that training on more rounds improves model performance. This is true for all model architectures. Simply training on more “normal NLI” data would not help a model be robust to adversarial attacks, but our data actively helps mitigate these. RoBERTa achieves state-of-the-art performance... We obtain state of the art performance on both SNLI and MNLI with the RoBERTa model finetuned on our new data. The RoBERTa paper (Liu et al., 2019b) reports a score of 90.2 for both MNLI-matched and -mismatched dev, while we obtain 91.0 and 90.7. The state of the art on SNLI is currently held by MT-DNN (Liu et al., 2019a), which reports 91.6 compared to our 92.9. ...but is outperformed when it is base model. However, the base (RoBERTa) models for rounds 2 and 3 are outperformed by both BERT and XLNet (rows 5, 6 and 10). This shows that annotators found examples that RoBERTa generally struggles with, which cannot be mitigated by more examples alone. It also implies that BERT, XLNet, and RoBERTa all have different weaknesses, possibly as a function of their training data (BERT, XLNet and RoBERTa were trained on different data sets, which might or might not have contained information relevant to the weaknesses). A1 A2 A3 Evaluation Set 0 10 20 30 40 50 60 Accuracy Training Data Verified Verified+Unverified Unverified Figure 3: Comparison of verified, unverified and combined data, where data sets are downsampled to ensure equal training sizes. Continuously augmenting training data does not downgrade performance. Even though ANLI training data is different from SNLI and MNLI, adding it to the training set does not harm performance on those tasks. Our results (see also rows 2-3 of Table 6) suggest the method could successfully be applied for multiple additional rounds. Exclusive test subset difference is small. We included an exclusive test subset (ANLI-E) with examples from annotators never seen in training, and find negligible differences, indicating that our models do not over-rely on annotator’s writing styles. 4.1 The effectiveness of adversarial training We examine the effectiveness of the adversarial training data in two ways. First, we sample from respective datasets to ensure exactly equal amounts of training data. Table 5 shows that the adversarial data improves performance, including on SNLI and MNLI when we replace part of those datasets with the adversarial data. This suggests that the adversarial data is more data efficient than “normally collected” data. Figure 2 shows that adversarial data collected in later rounds is of higher quality and more data-efficient. Second, we compared verified correct examples of model vulnerabilities (examples that the model got wrong and were verified to be correct) to unverified ones. Figure 3 shows that the verified correct examples are much more valuable than the unverified examples, especially in the later rounds (where the latter drops to random). 4.2 Stress Test Results We also test models on two recent hard NLI test sets: SNLI-Hard (Gururangan et al., 2018) and 4891 Model SNLI-Hard NLI Stress Tests AT (m/mm) NR LN (m/mm) NG (m/mm) WO (m/mm) SE (m/mm) Previous models 72.7 14.4 / 10.2 28.8 58.7 / 59.4 48.8 / 46.6 50.0 / 50.2 58.3 / 59.4 BERT (All) 82.3 75.0 / 72.9 65.8 84.2 / 84.6 64.9 / 64.4 61.6 / 60.6 78.3 / 78.3 XLNet (All) 83.5 88.2 / 87.1 85.4 87.5 / 87.5 59.9 / 60.0 68.7 / 66.1 84.3 / 84.4 RoBERTa (S+M+F) 84.5 81.6 / 77.2 62.1 88.0 / 88.5 61.9 / 61.9 67.9 / 66.2 86.2 / 86.5 RoBERTa (All) 84.7 85.9 / 82.1 80.6 88.4 / 88.5 62.2 / 61.9 67.4 / 65.6 86.3 / 86.7 Table 4: Model Performance on NLI stress tests (tuned on their respective dev. sets). All=S+M+F+ANLI. AT=‘Antonym’; ‘NR’=Numerical Reasoning; ‘LN’=Length; ‘NG’=Negation; ‘WO’=Word Overlap; ‘SE’=Spell Error. Previous models refers to the Naik et al. (2018) implementation of Conneau et al. (2017, InferSent) for the Stress Tests, and to the Gururangan et al. (2018) implementation of Gong et al. (2018, DIIN) for SNLI-Hard. Train Data A1 A2 A3 S M-m/mm SMD1+SMD2 45.1 26.1 27.1 92.5 89.8/89.7 SMD1+A 72.6 42.9 42.0 92.3 90.3/89.6 SM 48.0 24.8 31.1 93.2 90.8/90.6 SMD3+A 73.3 42.4 40.5 93.3 90.8/90.7 Table 5: RoBERTa performance on dev set with different training data. S=SNLI, M=MNLI, A=A1+A2+A3. ‘SM’ refers to combined S and M training set. D1, D2, D3 means down-sampling SM s.t. |SMD2|=|A| and |SMD3|+|A|=|SM|. Therefore, training sizes are identical in every pair of rows. the NLI stress tests (Naik et al., 2018) (see Appendix A for details). The results are in Table 4. We observe that all our models outperform the models presented in original papers for these common stress tests. The RoBERTa models perform best on SNLI-Hard and achieve accuracy levels in the high 80s on the ‘antonym’ (AT), ‘numerical reasoning’ (NR), ‘length’ (LN), ‘spelling error’(SE) sub-datasets, and show marked improvement on both ‘negation’ (NG), and ‘word overlap’ (WO). Training on ANLI appears to be particularly useful for the AT, NR, NG and WO stress tests. 4.3 Hypothesis-only results For SNLI and MNLI, concerns have been raised about the propensity of models to pick up on spurious artifacts that are present just in the hypotheses (Gururangan et al., 2018; Poliak et al., 2018). Here, we compare full models to models trained only on the hypothesis (marked H). Table 6 reports results on ANLI, as well as on SNLI and MNLI. The table shows that hypothesis-only models perform poorly on ANLI5, and obtain good performance on SNLI and MNLI. Hypothesis-only performance 5Obviously, without manual intervention, some bias remains in how people phrase hypotheses—e.g., contradiction might have more negation—which explains why hypothesisonly performs slightly above chance when trained on ANLI. Train Data A1 A2 A3 S M-m/mm ALL 73.8 48.9 44.4 92.6 91.0/90.6 S+M 47.6 25.4 22.1 92.6 90.8/90.6 ANLI-Only 71.3 43.3 43.0 83.5 86.3/86.5 ALLH 49.7 46.3 42.8 71.4 60.2/59.8 S+MH 33.1 29.4 32.2 71.8 62.0/62.0 ANLI-OnlyH 51.0 42.6 41.5 47.0 51.9/54.5 Table 6: Performance of RoBERTa with different data combinations. ALL=S,M,F,ANLI. Hypothesisonly models are marked H where they are trained and tested with only hypothesis texts. decreases over rounds for ANLI. We observe that in rounds 2 and 3, RoBERTa is not much better than hypothesis-only. This could mean two things: either the test data is very difficult, or the training data is not good. To rule out the latter, we trained only on ANLI (∼163k training examples): RoBERTa matches BERT when trained on the much larger, fully in-domain SNLI+MNLI combined dataset (943k training examples) on MNLI, with both getting ∼86 (the third row in Table 6). Hence, this shows that the test sets are so difficult that state-of-the-art models cannot outperform a hypothesis-only prior. 5 Linguistic analysis We explore the types of inferences that fooled models by manually annotating 500 examples from each round’s development set. A dynamically evolving dataset offers the unique opportunity to track how model error rates change over time. Since each round’s development set contains only verified examples, we can investigate two interesting questions: which types of inference do writers employ to fool the models, and are base models differentially sensitive to different types of reasoning? The results are summarized in Table 7. We devised an inference ontology containing six types of inference: Numerical & Quantitative (i.e., reason4892 Round Numerical & Quant. Reference & Names Standard Lexical Tricky Reasoning & Facts Quality A1 38% 13% 18% 13% 22% 53% 4% A2 32% 20% 21% 21% 20% 59% 3% A3 10% 18% 27% 27% 27% 63% 3% Average 27% 17% 22% 22% 23% 58% 3% Table 7: Analysis of 500 development set examples per round and on average. ing about cardinal and ordinal numbers, inferring dates and ages from numbers, etc.), Reference & Names (coreferences between pronouns and forms of proper names, knowing facts about name gender, etc.), Standard Inferences (conjunctions, negations, cause-and-effect, comparatives and superlatives etc.), Lexical Inference (inferences made possible by lexical information about synonyms, antonyms, etc.), Tricky Inferences (wordplay, linguistic strategies such as syntactic transformations/reorderings, or inferring writer intentions from contexts), and reasoning from outside knowledge or additional facts (e.g., “You can’t reach the sea directly from Rwanda”). The quality of annotations was also tracked; if a pair was ambiguous or a label debatable (from the expert annotator’s perspective), it was flagged. Quality issues were rare at 3-4% per round. Any one example can have multiple types, and every example had at least one tag. We observe that both round 1 and 2 writers rely heavily on numerical and quantitative reasoning in over 30% of the development set—the percentage in A2 (32%) dropped roughly 6% from A1 (38%)—while round 3 writers use numerical or quantitative reasoning for only 17%. The majority of numerical reasoning types were references to cardinal numbers that referred to dates and ages. Inferences predicated on references and names were present in about 10% of rounds 1 & 3 development sets, and reached a high of 20% in round 2, with coreference featuring prominently. Standard inference types increased in prevalence as the rounds increased, ranging from 18%–27%, as did ‘Lexical’ inferences (increasing from 13%–31%). The percentage of sentences relying on reasoning and outside facts remains roughly the same, in the mid50s, perhaps slightly increasing over the rounds. For round 3, we observe that the model used to collect it appears to be more susceptible to Standard, Lexical, and Tricky inference types. This finding is compatible with the idea that models trained on adversarial data perform better, since annotators seem to have been encouraged to devise more creative examples containing harder types of inference in order to stump them. Further analysis is provided in Appendix B. 6 Related work Bias in datasets Machine learning methods are well-known to pick up on spurious statistical patterns. For instance, in the first visual question answering dataset (Antol et al., 2015), biases like “2” being the correct answer to 39% of the questions starting with “how many” allowed learning algorithms to perform well while ignoring the visual modality altogether (Jabri et al., 2016; Goyal et al., 2017). In NLI, Gururangan et al. (2018), Poliak et al. (2018) and Tsuchiya (2018) showed that hypothesis-only baselines often perform far better than chance. NLI systems can often be broken merely by performing simple lexical substitutions (Glockner et al., 2018), and struggle with quantifiers (Geiger et al., 2018) and certain superficial syntactic properties (McCoy et al., 2019). In question answering, Kaushik and Lipton (2018) showed that question- and passage-only models can perform surprisingly well, while Jia and Liang (2017) added adversarially constructed sentences to passages to cause a drastic drop in performance. Many tasks do not actually require sophisticated linguistic reasoning, as shown by the surprisingly good performance of random encoders (Wieting and Kiela, 2019). Similar observations were made in machine translation (Belinkov and Bisk, 2017) and dialogue (Sankar et al., 2019). Machine learning also has a tendency to overfit on static targets, even if that does not happen deliberately (Recht et al., 2018). In short, the field is rife with dataset bias and papers trying to address this important problem. This work presents a potential solution: if such biases exist, they will allow humans to fool the models, resulting in valuable training examples until the bias is mitigated. Dynamic datasets. Bras et al. (2020) proposed AFLite, an approach for avoiding spurious biases through adversarial filtering, which is a modelin-the-loop approach that iteratively probes and improves models. Kaushik et al. (2019) offer a 4893 causal account of spurious patterns, and counterfactually augment NLI datasets by editing examples to break the model. That approach is human-inthe-loop, using humans to find problems with one single model. In this work, we employ both human and model-based strategies iteratively, in a form of human-and-model-in-the-loop training, to create completely new examples, in a potentially never-ending loop (Mitchell et al., 2018). Human-and-model-in-the-loop training is not a new idea. Mechanical Turker Descent proposes a gamified environment for the collaborative training of grounded language learning agents over multiple rounds (Yang et al., 2017). The “Build it Break it Fix it” strategy in the security domain (Ruef et al., 2016) has been adapted to NLP (Ettinger et al., 2017) as well as dialogue safety (Dinan et al., 2019). The QApedia framework (Kratzwald and Feuerriegel, 2019) continuously refines and updates its content repository using humans in the loop, while human feedback loops have been used to improve image captioning systems (Ling and Fidler, 2017). Wallace et al. (2019) leverage trivia experts to create a model-driven adversarial question writing procedure and generate a small set of challenge questions that QA-models fail on. Relatedly, Lan et al. (2017) propose a method for continuously growing a dataset of paraphrases. There has been a flurry of work in constructing datasets with an adversarial component, such as Swag (Zellers et al., 2018) and HellaSwag (Zellers et al., 2019), CODAH (Chen et al., 2019), Adversarial SQuAD (Jia and Liang, 2017), Lambada (Paperno et al., 2016) and others. Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART. 7 Discussion & Conclusion In this work, we used a human-and-model-in-theloop training method to collect a new benchmark for natural language understanding. The benchmark is designed to be challenging to current stateof-the-art models. Annotators were employed to act as adversaries, and encouraged to find vulnerabilities that fool the model into misclassifying, but that another person would correctly classify. We found that non-expert annotators, in this gamified setting and with appropriate incentives, are remarkably creative at finding and exploiting weaknesses. We collected three rounds, and as the rounds progressed, the models became more robust and the test sets for each round became more difficult. Training on this new data yielded the state of the art on existing NLI benchmarks. The ANLI benchmark presents a new challenge to the community. It was carefully constructed to mitigate issues with previous datasets, and was designed from first principles to last longer. The dataset also presents many opportunities for further study. For instance, we collected annotatorprovided explanations for each example that the model got wrong. We provided inference labels for the development set, opening up possibilities for interesting more fine-grained studies of NLI model performance. While we verified the development and test examples, we did not verify the correctness of each training example, which means there is probably some room for improvement there. A concern might be that the static approach is probably cheaper, since dynamic adversarial data collection requires a verification step to ensure examples are correct. However, verifying examples is probably also a good idea in the static case, and adversarially collected examples can still prove useful even if they didn’t fool the model and weren’t verified. Moreover, annotators were better incentivized to do a good job in the adversarial setting. Our finding that adversarial data is more data-efficient corroborates this theory. Future work could explore a detailed cost and time trade-off between adversarial and static collection. It is important to note that our approach is modelagnostic. HAMLET was applied against an ensemble of models in rounds 2 and 3, and it would be straightforward to put more diverse ensembles in the loop to examine what happens when annotators are confronted with a wider variety of architectures. The proposed procedure can be extended to other classification tasks, as well as to ranking with hard negatives either generated (by adversarial models) or retrieved and verified by humans. It is less clear how the method can be applied in generative cases. Adversarial NLI is meant to be a challenge for measuring NLU progress, even for as yet undiscovered models and architectures. Luckily, if the benchmark does turn out to saturate quickly, we will always be able to collect a new round. Acknowledgments YN interned at Facebook. YN and MB were sponsored by DARPA MCS Grant #N66001-19-2-4031, ONR Grant #N00014-18-1-2871, and DARPA YFA17-D17AP00022. Special thanks to Sam Bowman for comments on an earlier draft. 4894 References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433. Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. arXiv preprint arXiv:1711.02173. Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The Fifth PASCAL Recognizing Textual Entailment Challenge. TAC. Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases. arXiv preprint arXiv:2002.04108. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Association for Computational Linguistics (ACL). Michael Chen, Mike D’Arcy, Alisa Liu, Jared Fernandez, and Doug Downey. 2019. CODAH: an adversarially authored question-answer dataset for common sense. CoRR, abs/1904.04365. Dan Cires¸an, Ueli Meier, and J¨urgen Schmidhuber. 2012. Multi-column deep neural networks for image classification. arXiv preprint arXiv:1202.2745. Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680. Association for Computational Linguistics. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack. In Proceedings of EMNLP. Allyson Ettinger, Sudha Rao, Hal Daum´e III, and Emily M Bender. 2017. Towards linguistically generalizable nlp systems: A workshop and shared task. arXiv preprint arXiv:1711.01505. Atticus Geiger, Ignacio Cases, Lauri Karttunen, and Christopher Potts. 2018. Stress-testing neural models of natural language inference with multiply-quantified sentences. arXiv preprint arXiv:1810.13033. Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. arXiv preprint arXiv:1908.07898. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In Proceedings of ACL. Yichen Gong, Heng Luo, and Jian Zhang. 2018. Natural language inference over interaction space. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904–6913. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of NAACL. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301. Allan Jabri, Armand Joulin, and Laurens Van Der Maaten. 2016. Revisiting visual question answering baselines. In European conference on computer vision, pages 727–739. Springer. 4895 Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of EMNLP. Divyansh Kaushik, Eduard Hovy, and Zachary C Lipton. 2019. Learning the difference that makes a difference with counterfactually-augmented data. arXiv preprint arXiv:1909.12434. Divyansh Kaushik and Zachary C Lipton. 2018. How much reading does reading comprehension require? a critical investigation of popular benchmarks. arXiv preprint arXiv:1808.04926. Bernhard Kratzwald and Stefan Feuerriegel. 2019. Learning from on-line user feedback in neural question answering on the web. In The World Wide Web Conference, pages 906–916. ACM. Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1224–1234, Copenhagen, Denmark. Association for Computational Linguistics. Yann LeCun, L´eon Bottou, Yoshua Bengio, Patrick Haffner, et al. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324. Huan Ling and Sanja Fidler. 2017. Teaching machines to describe images via natural language feedback. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 5075–5085. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Tom Mitchell, William Cohen, Estevam Hruschka, Partha Talukdar, Bo Yang, Justin Betteridge, Andrew Carlson, B Dalvi, Matt Gardner, Bryan Kisiel, et al. 2018. Never-ending learning. Communications of the ACM, 61(5):103–115. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340–2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Yixin Nie and Mohit Bansal. 2017. Shortcutstacked sentence encoders for multi-domain inference. arXiv preprint arXiv:1708.02312. Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. In Association for the Advancement of Artificial Intelligence (AAAI). Denis Paperno, Germ´an Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fern´andez. 2016. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2018. Do cifar-10 classifiers generalize to cifar-10? arXiv preprint arXiv:1806.00451. Andrew Ruef, Michael Hicks, James Parker, Dave Levin, Michelle L Mazurek, and Piotr Mardziel. 2016. Build it, break it, fix it: Contesting secure development. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 690–703. ACM. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252. Chinnadhurai Sankar, Sandeep Subramanian, Christopher Pal, Sarath Chandar, and Yoshua Bengio. 2019. Do neural dialog systems use the conversation history effectively? an empirical study. arXiv preprint arXiv:1906.01603. 4896 William Shakespeare. 1603. The Tragedy of Hamlet, Prince of Denmark. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355. Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recognizing textual entailment. In Proceedings of LREC. Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, and Jordan Boyd-Graber. 2019. Trick me if you can: Human-in-the-loop generation of adversarial question answering examples. In Transactions of the Association for Computational Linguistics. Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. 2013. Regularization of neural networks using dropconnect. In International conference on machine learning, pages 1058–1066. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. John Wieting and Douwe Kiela. 2019. No training required: Exploring random encoders for sentence classification. arXiv preprint arXiv:1901.10444. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600. Zhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander H Miller, Arthur Szlam, Douwe Kiela, and Jason Weston. 2017. Mastering the dungeon: Grounded language learning by mechanical turker descent. arXiv preprint arXiv:1711.07950. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of EMNLP. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? In Proceedings of ACL. 4897 A Performance on challenge datasets Recently, several hard test sets have been made available for revealing the biases NLI models learn from their training datasets (Nie and Bansal, 2017; McCoy et al., 2019; Gururangan et al., 2018; Naik et al., 2018). We examine model performance on two of these: the SNLI-Hard (Gururangan et al., 2018) test set, which consists of examples that hypothesis-only models label incorrectly, and the NLI stress tests (Naik et al., 2018), in which sentences containing antonyms pairs, negations, high word overlap, i.a., are heuristically constructed. We test our models on these stress tests after tuning on each test’s respective development set to account for potential domain mismatches. For comparison, we also report results from the original papers: for SNLI-Hard from Gururangan et al.’s implementation of the hierarchical tensor-based Densely Interactive Inference Network (Gong et al., 2018, DIIN) on MNLI, and for the NLI stress tests, Naik et al.’s implementation of InferSent (Conneau et al., 2017) trained on SNLI. B Further linguistic analysis We compare the incidence of linguistic phenomena in ANLI with extant popular NLI datasets to get an idea of what our dataset contains. We observe that FEVER and SNLI datasets generally contain many fewer hard linguistic phenomena than MultiNLI and ANLI (see Table 8). ANLI and MultiNLI have roughly the same percentage of hypotheses that exceeding twenty words in length, and/or contain negation (e.g., ‘never’, ’no’), tokens of ‘or’, and modals (e.g., ‘must’, ‘can’). MultiNLI hypotheses generally contains more pronouns, quantifiers (e.g., ‘many’, ‘every’), WH-words (e.g., ‘who’, ‘why’), and tokens of ‘and’ than do their ANLI counterparts—although A3 reaches nearly the same percentage as MultiNLI for negation, and modals. However, ANLI contains more cardinal numerals and time terms (such as ‘before’, ‘month’, and ‘tomorrow’) than MultiNLI. These differences might be due to the fact that the two datasets are constructed from different genres of text. Since A1 and A2 contexts are constructed from a single Wikipedia data source (i.e., HotPotQA data), and most Wikipedia articles include dates in the first line, annotators appear to prefer constructing hypotheses that highlight numerals and time terms, leading to their high incidence. Focusing on ANLI more specifically, A1 has roughly the same incidence of most tags as A2 (i.e., within 2% of each other), which, again, accords with the fact that we used the same Wikipedia data source for A1 and A2 contexts. A3, however, has the highest incidence of every tag (except for numbers and time) in the ANLI dataset. This could be due to our sampling of A3 contexts from a wider range of genres, which likely affected how annotators chose to construct A3 hypotheses; this idea is supported by the fact that A3 contexts differ in tag percentage from A1 and A2 contexts as well. The higher incidence of all tags in A3 is also interesting, because it could be taken as providing yet another piece of evidence that our HAMLET data collection procedure generates increasingly more difficult data as rounds progress. C Dataset properties Table 9 shows the label distribution. Figure 4 shows a histogram of the number of tries per good verified example across for the three different rounds. Figure 5 shows the time taken per good verified example. Figure 6 shows a histogram of the number of tokens for contexts and hypotheses across three rounds. Figure 7 shows the proportion of different types of collected examples across three rounds. Inter-annotator agreement Table 10 reports the inter-annotator agreement for verifiers on the dev and test sets. For reference, the Fleiss’ kappa of FEVER (Thorne et al., 2018) is 0.68 and of SNLI (Bowman et al., 2015) is 0.70. Table 11 shows the percentage of agreement of verifiers with the intended author label. D Examples We include more examples of collected data in Table 12. E User interface Examples of the user interface are shown in Figures 8, 9 and 10. 4898 5 10 15 20 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 Proportion (%) R1 5 10 15 20 R2 5 10 15 20 R3 Number of Tries for Good Verified Examples Figure 4: Histogram of the number of tries for each good verified example across three rounds. 0 200 400 600 800 0.000 0.001 0.002 0.003 0.004 0.005 0.006 Proportion (%) R1 0 200 400 600 800 R2 0 200 400 600 800 R3 Time Spent for Good Verified Examples (Seconds) Figure 5: Histogram of the time spent per good verified example across three rounds. 0 25 50 75 100 0.00 0.02 0.04 0.06 0.08 0.10 Proportion (%) R1 0 25 50 75 100 R2 0 25 50 75 100 R3 Number of Tokens (Byte Pair Encoding) Context Hypothesis Figure 6: Histogram of the number of tokens in contexts and hypotheses across three rounds. Figure 7: Proportion across three rounds. A=Examples that model got right, B1=Examples that model got wrong and the first two verifiers agreed with the writer, B2=Examples that model got wrong and only one of the first two verifiers agreed with the writer and a third verifier also agreed with the writer, C=Examples where two verifiers agreed with each other and overruled the writer, D=Examples for which there is no agreement among verifiers. A and C are added only to training set. B1 and B2 are added to training, dev, or test set. D was discarded. 4899 Figure 8: UI for Creation. (Provide the context to annotator) Figure 9: Collection UI for Creation. (Give the model feedback to annotator) Figure 10: UI for Verification Task. 4900 Other Datasets ANLI SNLI MNLIm MNLImm F A1 A2 A3 Tag % c % h % c % h % c % h % claim % c % h % c % h % c % h Negation < 1 1 14 16 12 16 3 2 6 3 10 22 14 ‘and’ 30 7 41 15 42 18 6 85 12 88 11 75 11 ‘or’ 1 < 1 7 2 8 2 < 1 6 0 6 < 1 15 1 Numbers 10 4 16 8 15 9 9 72 30 73 27 42 15 Time 12 4 15 7 16 9 6 57 22 56 19 49 11 WH-words 3 1 16 7 18 9 2 28 5 27 5 35 5 Pronouns 11 7 37 20 39 24 2 30 9 28 7 60 13 Quantifiers 5 3 21 16 22 17 3 14 10 17 12 38 12 Modals < 1 < 1 17 13 18 14 < 1 2 3 3 2 35 14 >20 words 14 < 1 37 2 39 3 < 1 100 5 100 4 98 4 # exs 10k 10k 10k 9999 1k 1k 1200 Table 8: Percentage of development set sentences with tags in several datasets: AdvNLI, SNLI, MuliNLI and FEVER. ‘%c’ refers to percentage in contexts, and‘%h’ refers to percentage in hypotheses. Bolded values label linguistic phenomena that have higher incidence in adversarially created hypotheses than in hypotheses from other NLI datasets, and italicized values have roughly the same (within 5%) incidence. Entailment / Neutral / Contradiction Round Train Dev Test A1 5,371 / 7,052 / 4,523 334 / 333 / 333 334 / 333 / 333 A2 14,448 / 20,959 / 10,053 334 / 333 / 333 334 / 333 / 333 A3 32,292 / 40,778 / 27,389 402 / 402 / 396 402 / 402 / 396 ANLI 52,111 / 68,789 / 41,965 1,070 / 1,068 / 1,062 1,070 / 1,068 /1,062 Table 9: Label distribution in splits across rounds. Round Dev + Test Dev Test A1 0.7210 0.7020 0.7400 A2 0.6910 0.7100 0.6720 A3 0.6786 0.6739 0.6832 Table 10: Inter-annotator agreement (Fleiss’ kappa) for writers and the first two verifiers. SNLI MNLI A1 A2 A3 85.8 85.2 86.1 84.6 83.9 Table 11: Percentage of agreement of verifiers (“validators” for SNLI and MNLI) with the author label. 4901 Context Hypothesis Reason Round Labels Annotations orig. pred. valid. Eduard Schulte (4 January 1891 in D¨usseldorf 6 January 1966 in Z¨urich) was a prominent German industrialist. He was one of the first to warn the Allies and tell the world of the Holocaust and systematic exterminations of Jews in Nazi Germany occupied Europe. Eduard Schulte is the only person to warn the Allies of the atrocities of the Nazis. The context states that he is not the only person to warn the Allies about the atrocities committed by the Nazis. A1 (Wiki) C N C C Tricky Presupposition, Numerical Ordinal Kota Ramakrishna Karanth (born May 1, 1894) was an Indian lawyer and politician who served as the Minister of Land Revenue for the Madras Presidency from March 1, 1946 to March 23, 1947. He was the elder brother of noted Kannada novelist K. Shivarama Karanth. Kota Ramakrishna Karanth has a brother who was a novelist and a politician Although Kota Ramakrishna Karanth’s brother is a novelist, we do not know if the brother is also a politician A1 (Wiki) N E N E N Standard Conjunction, Reasoning Plausibility Likely, Tricky Syntactic The Macquarie University Hospital (abbreviated MUH) is a private teaching hospital. Macquarie University Hospital, together with the Faculty of Medicine and Health Science, Macquarie University, formerly known as ASAM, Australian School of Advanced Medicine, will integrate the three essential components of an academic health science centre: clinical care, education and research. The Macquarie University Hospital have still not integrated the three essential components of an academic health science centre: clinical care, education and research the statement says that the universities are getting together but have not integrated the systems yet A1 (Wiki) E C E E Tricky Presupposition, Standard Negation Bernardo Provenzano (31 January 1933 – 13 July 2016) was a member of the Sicilian Mafia (“Cosa Nostra”) and was suspected of having been the head of the Corleonesi, a Mafia faction that originated in the town of Corleone, and de facto “capo di tutti capi” (boss of all bosses) of the entire Sicilian Mafia until his arrest in 2006. It was never confirmed that Bernardo Provenzano was the leader of the Corleonesi. Provenzano was only suspected as the leader of the mafia. It wasn’t confirmed. A2 (Wiki) E N E E Tricky Presupposition, Standard Negation HMAS “Lonsdale” is a former Royal Australian Navy (RAN) training base that was located at Beach Street, Port Melbourne , Victoria, Australia. Originally named “Cerberus III”, the Naval Reserve Base was commissioned as HMAS “Lonsdale” on 1 August 1940 during the Second World War. Prior to being renamed, Lonsdale was located in Perth, Australia. A naval base cannot be moved based on the information in the scenario, the base has always been located in Victoria. A2 C N C C Tricky Presupposition, Reasoning Facts Toolbox Murders is a 2004 horror film directed by Tobe Hooper, and written by Jace Anderson and Adam Gierasch. It is a remake of the 1978 film of the same name and was produced by the same people behind the original. The film centralizes on the occupants of an apartment who are stalked and murdered by a masked killer. Toolbox Murders is both 41 years old and 15 years old. Both films are named Toolbox Murders one was made in 1978, one in 2004. Since it is 2019 that would make the first 41 years old and the remake 15 years old. A2 (Wiki) E C E E Reasoning Facts, Numerical Cardinal Age, Tricky Wordplay A biker is critically ill in hospital after colliding with a lamppost in Pete The incident happened at 1.50pm yesterday in Thorpe Road. The 23-year-old was riding a Lexmoto Arrow 125 when, for an unknown reason, he left the road and collided with a lamppost. He was taken to James Cook University Hospital, in Middlesbrough, where he remains in a critical condition. Any witnesses to the collision are asked to call Durham Police on 101, quoting incident number 288 of July 9. The Lamppost was stationary. Lampposts don’t typically move. A3 (News) E N E E Reasoning Facts, Standard “We had to make a decision between making payroll or paying the debt,” Melton said Monday. “If we are unable to make payroll Oct. 19, we will definitely be able to make it next week Oct. 26 based on the nature of our sales taxes coming in at the end of the month. However we will have payroll the following week again on Nov. 2 and we are not sure we will be able to make that payroll because of the lack of revenue that is coming in.” The company will not be able to make payroll on October 19th and will instead dispense it on October 26th It’s not definitely correct nor definitely incorrect because the company said “if” they can’t make it on the 19th they will do it on the 26th, they didn’t definitely say they won’t make it on the 19th A3 (News) N E N C N Reasoning Plausibility Likely, Tricky Presupposition The Survey: Greg was answering questions. He had been asked to take a survey about his living arrangements. He gave all the information he felt comfortable sharing. Greg hoped the survey would improve things around his apartment. THe complex had really gone downhill lately. He gave some of the information he felt comfortable sharing. Greg gave all of the information he felt comfortable, not some. It was difficult for the system because it couldn’t tell a significant difference between to word “some” and “all.” A3 (Fiction) C E C C Tricky (Scalar Implicature) Table 12: Extra examples from development sets. ‘An’ refers to round number, ‘orig.’ is the original annotator’s gold label, ‘pred.’ is the model prediction, ‘valid.’ is the validator labels, ‘reason’ was provided by the original annotator, ‘Annotations’ is the tags determined by linguist expert annotator.
2020
441
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902–4912 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4902 Beyond Accuracy: Behavioral Testing of NLP Models with CheckList Marco Tulio Ribeiro1 Tongshuang Wu2 Carlos Guestrin2 Sameer Singh3 1Microsoft Research 2University of Washington 3University of California, Irvine [email protected] {wtshuang,guestrin}@cs.uw.edu [email protected] Abstract Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a taskagnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs as users without it. 1 Introduction One of the primary goals of training NLP models is generalization. Since testing “in the wild” is expensive and does not allow for fast iterations, the standard paradigm for evaluation is using trainvalidation-test splits to estimate the accuracy of the model, including the use of leader boards to track progress on a task (Rajpurkar et al., 2016). While performance on held-out data is a useful indicator, held-out datasets are often not comprehensive, and contain the same biases as the training data (Rajpurkar et al., 2018), such that real-world performance may be overestimated (Patel et al., 2008; Recht et al., 2019). Further, by summarizing the performance as a single aggregate statistic, it becomes difficult to figure out where the model is failing, and how to fix it (Wu et al., 2019). A number of additional evaluation approaches have been proposed, such as evaluating robustness to noise (Belinkov and Bisk, 2018; Rychalska et al., 2019) or adversarial changes (Ribeiro et al., 2018; Iyyer et al., 2018), fairness (Prabhakaran et al., 2019), logical consistency (Ribeiro et al., 2019), explanations (Ribeiro et al., 2016), diagnostic datasets (Wang et al., 2019b), and interactive error analysis (Wu et al., 2019). However, these approaches focus either on individual tasks such as Question Answering or Natural Language Inference, or on a few capabilities (e.g. robustness), and thus do not provide comprehensive guidance on how to evaluate models. Software engineering research, on the other hand, has proposed a variety of paradigms and tools for testing complex software systems. In particular, “behavioral testing” (also known as black-box testing) is concerned with testing different capabilities of a system by validating the input-output behavior, without any knowledge of the internal structure (Beizer, 1995). While there are clear similarities, many insights from software engineering are yet to be applied to NLP models. In this work, we propose CheckList, a new evaluation methodology and accompanying tool1 for comprehensive behavioral testing of NLP models. CheckList guides users in what to test, by providing a list of linguistic capabilities, which are applicable to most tasks. To break down potential capability failures into specific behaviors, CheckList introduces different test types, such as prediction invariance in the presence of certain perturbations, or performance on a set of “sanity checks.” Finally, our implementation of CheckList includes multiple abstractions that help users generate large numbers of test cases easily, such as templates, lexicons, general-purpose perturbations, visualizations, and context-aware suggestions. 1https://github.com/marcotcr/checklist 4903 Test case Expected Predicted Pass? Testing Negation with MFT Template: I {NEGATION} {POS_VERB} the {THING}. I can’t say I recommend the food. neg pos ︎X I didn’t love the flight. neg neutral ︎X Failure rate = 76.4% Testing NER with INV @AmericanAir thank you we got on a different flight to [ Chicago → Dallas ]. inv ︎X @VirginAmerica I can’t lose my luggage, moving to [ Brazil → Turkey ] soon, ugh. inv ︎X Failure rate = 20.8% Testing Vocabulary with DIR @AmericanAir service wasn't great. You are lame. ↓ ︎X @JetBlue why won't YOU help them?! Ugh. I dread you. ↓ ︎X Failure rate = 34.6% Capability Min Func Test INVariance DIRectional Vocabulary Fail. rate=15.0% 16.2% 34.6% NER 0.0% 20.8% % N/A Negation 76.4% N/A N/A B A C A B C pos neutral Same pred. (inv) after removals / additions 
 Labels: negative, positive, neutral Sentiment monotonic decreasing (↓) … … … … neutral neg neg neutral neg neutral Figure 1: CheckListing a commercial sentiment analysis model (). Tests are structured as a conceptual matrix with capabilities as rows and test types as columns (examples of each type in A, B and C). As an example, we CheckList a commercial sentiment analysis model in Figure 1. Potential tests are structured as a conceptual matrix, with capabilities as rows and test types as columns. As a test of the model’s Negation capability, we use a Minimum Functionality test (MFT), i.e. simple test cases designed to target a specific behavior (Figure 1A). We generate a large number of simple examples filling in a template (“I {NEGATION} {POS_VERB} the {THING}.”) with pre-built lexicons, and compute the model’s failure rate on such examples. Named entity recognition (NER) is another capability, tested in Figure 1B with an Invariance test (INV) – perturbations that should not change the output of the model. In this case, changing location names should not change sentiment. In Figure 1C, we test the model’s Vocabulary with a Directional Expectation test (DIR) – perturbations to the input with known expected results – adding negative phrases and checking that sentiment does not become more positive. As these examples indicate, the matrix works as a guide, prompting users to test each capability with different test types. We demonstrate the usefulness and generality of CheckList via instantiation on three NLP tasks: sentiment analysis (Sentiment), duplicate question detection (QQP; Wang et al., 2019b), and machine comprehension (MC; Rajpurkar et al., 2016). While traditional benchmarks indicate that models on these tasks are as accurate as humans, CheckList reveals a variety of severe bugs, where commercial and research models do not effectively handle basic linguistic phenomena such as negation, named entities, coreferences, semantic role labeling, etc, as they pertain to each task. Further, CheckList is easy to use and provides immediate value – in a user study, the team responsible for a commercial sentiment analysis model discovered many new and actionable bugs in their own model, even though it had been extensively tested and used by customers. In an additional user study, we found that NLP practitioners with CheckList generated more than twice as many tests (each test containing an order of magnitude more examples), and uncovered almost three times as many bugs, compared to users without CheckList. 2 CheckList Conceptually, users “CheckList” a model by filling out cells in a matrix (Figure 1), each cell potentially containing multiple tests. In this section, we go into more detail on the rows (capabilities), columns (test types), and how to fill the cells (tests). CheckList applies the behavioral testing principle of “decoupling testing from implementation” by treating the model as a black box, which allows for comparison of different models trained on different data, or third-party models where access to training data or model structure is not granted. 2.1 Capabilities While testing individual components is a common practice in software engineering, modern NLP models are rarely built one component at a time. Instead, CheckList encourages users to consider how different natural language capabilities are manifested on the task at hand, and to create tests to evaluate the model on each of these capabilities. For example, the Vocabulary+POS capability pertains to whether a model has the necessary vocabulary, and whether it can appropriately handle the impact of words with different parts of speech on the task. For Sentiment, we may want to check if the model is able to identify words that carry positive, negative, or neutral sentiment, by verifying how it behaves on examples like “This was a good flight.” For QQP, we might want the model to 4904 understand when modifiers differentiate questions, e.g. accredited in (“Is John a teacher?”, “Is John an accredited teacher?”). For MC, the model should be able to relate comparatives and superlatives, e.g. (Context: “Mary is smarter than John.”, Q: “Who is the smartest kid?”, A: “Mary”). We suggest that users consider at least the following capabilities: Vocabulary+POS (important words or word types for the task), Taxonomy (synonyms, antonyms, etc), Robustness (to typos, irrelevant changes, etc), NER (appropriately understanding named entities), Fairness, Temporal (understanding order of events), Negation, Coreference, Semantic Role Labeling (understanding roles such as agent, object, etc), and Logic (ability to handle symmetry, consistency, and conjunctions). We will provide examples of how these capabilities can be tested in Section 3 (Tables 1, 2, and 3). This listing of capabilities is not exhaustive, but a starting point for users, who should also come up with additional capabilities that are specific to their task or domain. 2.2 Test Types We prompt users to evaluate each capability with three different test types (when possible): Minimum Functionality tests, Invariance, and Directional Expectation tests (the columns in the matrix). A Minimum Functionality test (MFT), inspired by unit tests in software engineering, is a collection of simple examples (and labels) to check a behavior within a capability. MFTs are similar to creating small and focused testing datasets, and are particularly useful for detecting when models use shortcuts to handle complex inputs without actually mastering the capability. The Vocabulary+POS examples in the previous section are all MFTs. We also introduce two additional test types inspired by software metamorphic tests (Segura et al., 2016). An Invariance test (INV) is when we apply label-preserving perturbations to inputs and expect the model prediction to remain the same. Different perturbation functions are needed for different capabilities, e.g. changing location names for the NER capability for Sentiment (Figure 1B), or introducing typos to test the Robustness capability. A Directional Expectation test (DIR) is similar, except that the label is expected to change in a certain way. For example, we expect that sentiment will not become more positive if we add “You are lame.” to the end of tweets directed at an airline (Figure 1C). The expectation may also be a target label, e.g. replacing locations in only one of the questions in QQP, such as (“How many people are there in England?”, “What is the population of England ) Turkey?”), ensures that the questions are not duplicates. INVs and DIRs allow us to test models on unlabeled data – they test behaviors that do not rely on ground truth labels, but rather on relationships between predictions after perturbations are applied (invariance, monotonicity, etc). 2.3 Generating Test Cases at Scale Users can create test cases from scratch, or by perturbing an existing dataset. Starting from scratch makes it easier to create a small number of highquality test cases for specific phenomena that may be underrepresented or confounded in the original dataset. Writing from scratch, however, requires significant creativity and effort, often leading to tests that have low coverage or are expensive and time-consuming to produce. Perturbation functions are harder to craft, but generate many test cases at once. To support both these cases, we provide a variety of abstractions that scale up test creation from scratch and make perturbations easier to craft. Templates Test cases and perturbations can often be generalized into a template, to test the model on a more diverse set of inputs. In Figure 1 we generalized “I didn’t love the food.” with the template “I {NEGATION} {POS_VERB} the {THING}.”, where {NEGATION} = {didn’t, can’t say I, ...}, {POS_VERB} = {love, like, ...}, {THING} = {food, flight, service, ...}, and generated all test cases with a Cartesian product. A more diverse set of inputs is particularly helpful when a small set of test cases could miss a failure, e.g. if a model works for some forms of negation but not others. Expanding Templates While templates help scale up test case generation, they still rely on the user’s creativity to create fill-in values for each Figure 2: Templating with masked language models. “I really {mask} the flight.” yields verbs that the user can interactively filter into positive, negative, and neutral fill-in lists. 4905 Labels: positive, negative, or neutral; INV: same pred. (INV) after removals/ additions; DIR: sentiment should not decrease ( Ò ) or increase ( Ó ) Test TYPE and Description Failure Rate (%) Example test cases & expected behavior q  À RoB Vocab.+POS MFT: Short sentences with neutral adjectives and nouns 0.0 7.6 4.8 94.6 81.8 The company is Australian. neutral That is a private aircraft. neutral MFT: Short sentences with sentiment-laden adjectives 4.0 15.0 2.8 0.0 0.2 That cabin crew is extraordinary. pos I despised that aircraft. neg INV: Replace neutral words with other neutral words 9.4 16.2 12.4 10.2 10.2 @Virgin should I be concerned that ) when I’m about to fly ... INV @united the ) our nightmare continues... INV DIR: Add positive phrases, fails if sent. goes down by ą 0.1 12.6 12.4 1.4 0.2 10.2 @SouthwestAir Great trip on 2672 yesterday... You are extraordinary. Ò @AmericanAir AA45 ... JFK to LAS. You are brilliant. Ò DIR: Add negative phrases, fails if sent. goes up by ą 0.1 0.8 34.6 5.0 0.0 13.2 @USAirways your service sucks. You are lame. Ó @JetBlue all day. I abhor you. Ó Robust. INV: Add randomly generated URLs and handles to tweets 9.6 13.4 24.8 11.4 7.4 @JetBlue that selfie was extreme. @pi9QDK INV @united stuck because stafftook a break? Not happy 1K.... https://t.co/PWK1jb INV INV: Swap one character with its neighbor (typo) 5.6 10.2 10.4 5.2 3.8 @JetBlue ) @JeBtlue I cri INV @SouthwestAir no thanks ) thakns INV NER INV: Switching locations should not change predictions 7.0 20.8 14.8 7.6 6.4 @JetBlue I want you guys to be the first to fly to # Cuba ) Canada... INV @VirginAmerica I miss the #nerdbird in San Jose ) Denver INV INV: Switching person names should not change predictions 2.4 15.1 9.1 6.6 2.4 ...Airport agents were horrendous. Sharon ) Erin was your saviour INV @united 8602947, Jon ) Sean at http://t.co/58tuTgli0D, thanks. INV Temporal MFT: Sentiment change over time, present should prevail 41.0 36.6 42.2 18.8 11.0 I used to hate this airline, although now I like it. pos In the past I thought this airline was perfect, now I think it is creepy. neg Negation MFT: Negated negative should be positive or neutral 18.8 54.2 29.4 13.2 2.6 The food is not poor. pos or neutral It isn’t a lousy customer service. pos or neutral MFT: Negated neutral should still be neutral 40.4 39.6 74.2 98.4 95.4 This aircraft is not private. neutral This is not an international flight. neutral MFT: Negation of negative at the end, should be pos. or neut. 100.0 90.4 100.0 84.8 7.2 I thought the plane would be awful, but it wasn’t. pos or neutral I thought I would dislike that plane, but I didn’t. pos or neutral MFT: Negated positive with neutral content in the middle 98.4 100.0 100.0 74.0 30.2 I wouldn’t say, given it’s a Tuesday, that this pilot was great. neg I don’t think, given my history with airplanes, that this is an amazing staff. neg SRL MFT: Author sentiment is more important than of others 45.4 62.4 68.0 38.8 30.0 Some people think you are excellent, but I think you are nasty. neg Some people hate you, but I think you are exceptional. pos MFT: Parsing sentiment in (question, “yes”) form 9.0 57.6 20.8 3.6 3.0 Do I think that airline was exceptional? Yes. neg Do I think that is an awkward customer service? Yes. neg MFT: Parsing sentiment in (question, “no”) form 96.8 90.8 81.6 55.4 54.8 Do I think the pilot was fantastic? No. neg Do I think this company is bad? No. pos or neutral Table 1: A selection of tests for sentiment analysis. All examples (right) are failures of at least one model. placeholder (e.g. positive verbs for {POS_VERB}). We provide users with an abstraction where they mask part of a template and get masked language model (RoBERTa (Liu et al., 2019) in our case) suggestions for fill-ins, e.g. “I really {mask} the flight.” yields {enjoyed, liked, loved, regret, ...}, which the user can filter into positive, negative, and neutral fill-in lists and later reuse across multiple tests (Figure 2). Sometimes RoBERTa suggestions can be used without filtering, e.g. “This is a good {mask}” yields multiple nouns that don’t need filtering. They can also be used in perturbations, e.g. replacing neutral words like that or the for other words in context (Vocabulary+POS INV examples in Table 1). RoBERTa suggestions can be combined with WordNet categories (synonyms, antonyms, etc), e.g. such that only contextappropriate synonyms get selected in a perturbation. We also provide additional common fill-ins for general-purpose categories, such as Named Entities (common male and female first/last names, cities, countries) and protected group adjectives (nationalities, religions, gender and sexuality, etc). Open source We release an implementation of CheckList at https://github.com/marcotcr/ checklist. In addition to templating features and mask language model suggestions, it contains various visualizations, abstractions for writing test expectations (e.g. monotonicity) and perturbations, saving/sharing tests and test suites such that tests can be reused with different models and by different teams, and general-purpose perturbations such as char swaps (simulating typos), contractions, name and location changes (for NER tests), etc. 3 Testing SOTA models with CheckList We CheckList the following commercial Sentiment analysis models via their paid APIs2: Microsoft’s Text Analytics (q), Google Cloud’s Natural Language (), and Amazon’s Comprehend (À). We also CheckList BERT-base ( ) and RoBERTabase (RoB) (Liu et al., 2019) finetuned on SST-23 (acc: 92.7% and 94.8%) and on the QQP dataset 2From 11/2019, but obtained similar results from 04/2020. 3Predictions with probability of positive sentiment in the p1{3, 2{3q range are considered neutral. 4906 Label: duplicate =, or non-duplicate ,; INV: same pred. (INV) after removals/ additions Test TYPE and Description Failure Rate Example Test cases & expected behavior RoB Vocab. MFT: Modifiers changes question intent 78.4 78.0 { Is Mark Wright a photographer? | Is Mark Wright an accredited photographer? } , Taxonomy MFT: Synonyms in simple templates 22.8 39.2 { How can I become more vocal? | How can I become more outspoken? } = INV: Replace words with synonyms in real pairs 13.1 12.7 Is it necessary to follow a religion? Is it necessary to follow an organized ) organised religion? * INV MFT: More X = Less antonym(X) 69.4 100.0 { How can I become more optimistic? | How can I become less pessimistic? } = Robust. INV: Swap one character with its neighbor (typo) 18.2 12.0 { Why am I getting ) gettnig lazy? | Why are we so lazy? } INV DIR: Paraphrase of question should be duplicate 69.0 25.0 Can I gain weight from not eating enough? Can I ) Do you think I can gain weight from not eating enough? * = NER INV: Change the same name in both questions 11.8 9.4 Why isn’t Hillary Clinton ) Nicole Perez in jail? Is Hillary Clinton ) Nicole Perez going to go to jail? * INV DIR: Change names in one question, expect , 35.1 30.1 What does India think of Donald Trump? What India thinks about Donald Trump ) John Green? * , DIR: Keep first word and entities of a question, fill in the gaps with RoBERTa; expect , 30.0 32.8 Will it be difficult to get a US Visa if Donald Trump gets elected? Will the US accept Donald Trump? * , Temporal MFT: Is , used to be, non-duplicate 61.8 96.8 { Is Jordan Perry an advisor? | Did Jordan Perry use to be an advisor? } , MFT: before , after, non-duplicate 98.0 34.4 { Is it unhealthy to eat after 10pm? | Is it unhealthy to eat before 10pm? } , MFT: before becoming , after becoming 100.0 0.0 What was Danielle Bennett’s life before becoming an agent? What was Danielle Bennett’s life after becoming an agent? * , Negation MFT: simple negation, non-duplicate 18.6 0.0 { How can I become a person who is not biased? | How can I become a biased person? } , MFT: negation of antonym, should be duplicate 81.6 88.6 { How can I become a positive person? | How can I become a person who is not negative } , Coref MFT: Simple coreference: he , she 79.0 96.6 If Joshua and Chloe were alone, do you think he would reject her? If Joshua and Chloe were alone, do you think she would reject him? * , MFT: Simple resolved coreference, his and her 99.6 100.0 If Jack and Lindsey were married, do you think Lindsey’s family would be happy? If Jack and Lindsey were married, do you think his family would be happy? * , SRL MFT: Order is irrelevant for comparisons 99.6 100.0 { Are tigers heavier than insects? | What is heavier, insects or tigers? } = MFT: Orders is irrelevant in symmetric relations 81.8 100.0 { Is Nicole related to Heather? | Is Heather related to Nicole? } = MFT: Order is relevant for asymmetric relations 71.4 100.0 { Is Sean hurting Ethan? | Is Ethan hurting Sean? } , MFT: Active / passive swap, same semantics 65.8 98.6 { Does Anna love Benjamin? | Is Benjamin loved by Anna? } = MFT: Active / passive swap, different semantics 97.4 100.0 { Does Danielle support Alyssa? | Is Danielle supported by Alyssa? } , Logic INV: Symmetry: pred(a, b) = pred(b, a) 4.4 2.2 { (q1, q2) | (q2, q1) } INV DIR: Implications, eg. (a=b)^(a=c)ñ(b=c) 9.7 8.5 no example Table 2: A selection of tests for Quora Question Pair. All examples (right) are failures of at least one model. (acc: 91.1% and 91.3%). For MC, we use a pretrained BERT-large finetuned on SQuAD (Wolf et al., 2019), achieving 93.2 F1. All the tests presented here are part of the open-source release, and can be easily replicated and applied to new models. Sentiment Analysis Since social media is listed as a use case for these commercial models, we test on that domain and use a dataset of unlabeled airline tweets for INV4 and DIR perturbation tests. We create tests for a broad range of capabilities, and present subset with high failure rates in Table 1. The Vocab.+POS MFTs are sanity checks, where we expect models to appropriately handle common neutral or sentiment-laden words. and RoB do poorly on neutral predictions (they were trained on binary labels only). Surprisingly,  and À fail (7.6% and 4.8%) on sentences that are clearly neutral, with also failing (15%) on nonneutral sanity checks (e.g. “I like this seat.”). In the DIR tests, the sentiment scores predicted by q and frequently (12.6% and 12.4%) go down con4For all the INV tests, models fail whenever their prediction changes and the probability changes by more than 0.1. siderably when clearly positive phrases (e.g. “You are extraordinary.”) are added, or up (: 34.6%) for negative phrases (e.g. “You are lame.”). All models are sensitive to addition of random (not adversarial) shortened URLs or Twitter handles (e.g. 24.8% of À predictions change), and to name changes, such as locations (: 20.8%, À: 14.8%) or person names (: 15.1%, À: 9.1%). None of the models do well in tests for the Temporal, Negation, and SRL capabilities. Failures on negations as simple as “The food is not poor.” are particularly notable, e.g. (54.2%) and À (29.4%). The failure rate is near 100% for all commercial models when the negation comes at the end of the sentence (e.g “I thought the plane would be awful, but it wasn’t.”), or with neutral content between the negation and the sentiment-laden word. Commercial models do not fail simple Fairness sanity checks such as “I am a black woman.” (template: “I am a {PROTECTED} {NOUN}.”), always predicting them as neutral. Similar to software engineering, absence of test failure does not imply that these models are fair – just that they are not unfair enough to fail these simple tests. On 4907 Test TYPE Failure Example Test cases (with expected behavior and prediction) and Description Rate ( ) Vocab MFT: comparisons 20.0 C: Victoria is younger than Dylan. Q: Who is less young? A: Dylan : Victoria MFT: intensifiers to superlative: most/least 91.3 C: Anna is worried about the project. Matthew is extremely worried about the project. Q: Who is least worried about the project? A: Anna : Matthew Taxonomy MFT: match properties to categories 82.4 C: There is a tiny purple box in the room. Q: What size is the box? A: tiny : purple MFT: nationality vs job 49.4 C: Stephanie is an Indian accountant. Q: What is Stephanie’s job? A: accountant : Indian accountant MFT: animal vs vehicles 26.2 C: Jonathan bought a truck. Isabella bought a hamster. Q: Who bought an animal? A: Isabella : Jonathan MFT: comparison to antonym 67.3 C: Jacob is shorter than Kimberly. Q: Who is taller? A: Kimberly : Jacob MFT: more/less in context, more/less antonym in question 100.0 C: Jeremy is more optimistic than Taylor. Q: Who is more pessimistic? A: Taylor : Jeremy Robust. INV: Swap adjacent characters in Q (typo) 11.6 C: ...Newcomen designs had a duty of about 7 million, but most were closer to 5 million.... Q: What was the ideal duty ) udty of a Newcomen engine? A: INV : 7 million ) 5 million INV: add irrelevant sentence to C 9.8 (no example) Temporal MFT: change in one person only 41.5 C: Both Luke and Abigail were writers, but there was a change in Abigail, who is now a model. Q: Who is a model? A: Abigail : Abigail were writers, but there was a change in Abigail MFT: Understanding before/after, last/first 82.9 C: Logan became a farmer before Danielle did. Q: Who became a farmer last? A: Danielle : Logan Neg. MFT: Context has negation 67.5 C: Aaron is not a writer. Rebecca is. Q: Who is a writer? A: Rebecca : Aaron MFT: Q has negation, C does not 100.0 C: Aaron is an editor. Mark is an actor. Q: Who is not an actor? A: Aaron : Mark Coref. MFT: Simple coreference, he/she. 100.0 C: Melissa and Antonio are friends. He is a journalist, and she is an adviser. Q: Who is a journalist? A: Antonio : Melissa MFT: Simple coreference, his/her. 100.0 C: Victoria and Alex are friends. Her mom is an agent Q: Whose mom is an agent? A: Victoria : Alex MFT: former/latter 100.0 C: Kimberly and Jennifer are friends. The former is a teacher Q: Who is a teacher? A: Kimberly : Jennifer SRL MFT: subject/object distinction 60.8 C: Richard bothers Elizabeth. Q: Who is bothered? A: Elizabeth : Richard MFT: subj/obj distinction with 3 agents 95.7 C: Jose hates Lisa. Kevin is hated by Lisa. Q: Who hates Kevin? A: Lisa : Jose Table 3: A selection of tests for Machine Comprehension. the other hand, always predicts negative when {PROTECTED} is black, atheist, gay, and lesbian, while predicting positive for Asian, straight, etc. With the exception of tests that depend on predicting “neutral”, and RoB did better than all commercial models on almost every other test. This is a surprising result, since the commercial models list social media as a use case, and are under regular testing and improvement with customer feedback, while and RoB are research models trained on the SST-2 dataset (movie reviews). Finally, and RoB fail simple negation MFTs, even though they are fairly accurate (91.5%, 93.9%, respectively) on the subset of the SST-2 validation set that contains negation in some form (18% of instances). By isolating behaviors like this, our tests are thus able to evaluate capabilities more precisely, whereas performance on the original dataset can be misleading. Quora Question Pair While and RoB surpass human accuracy on QQP in benchmarks (Wang et al., 2019a), the subset of tests in Table 2 indicate that these models are far from solving the question paraphrase problem, and are likely relying on shortcuts for their high accuracy. Both models lack what seems to be crucial skills for the task: ignoring important modifiers on the Vocab. test, and lacking basic Taxonomy understanding, e.g. synonyms and antonyms of common words. Further, neither is robust to typos or simple paraphrases. The failure rates for the NER tests indicate that these models are relying on shortcuts such as anchoring on named entities too strongly instead of understanding named entities and their impact on whether questions are duplicates. Surprisingly, the models often fail to make simple Temporal distinctions (e.g. is,used to be and before,after), and to distinguish between simple Coreferences (he,she). In SRL tests, neither model is able to handle agent/predicate changes, or active/passive swaps. Finally, and RoB change predictions 4.4% and 2.2% of the time when the question order is flipped, failing a basic task requirement (if q1 is a duplicate of q2, so is q2 of q1). They are also not consistent with Logical implications of their predictions, such as transitivity. 4908 Machine Comprehension Vocab+POS tests in Table 3 show that often fails to properly grasp intensity modifiers and comparisons/superlatives. It also fails on simple Taxonomy tests, such as matching properties (size, color, shape) to adjectives, distinguishing between animals-vehicles or jobsnationalities, or comparisons involving antonyms. The model does not seem capable of handling short instances with Temporal concepts such as before, after, last, and first, or with simple examples of Negation, either in the question or in the context. It also does not seem to resolve basic Coreferences, and grasp simple subject/object or active/passive distinctions (SRL), all of which are critical to true comprehension. Finally, the model seems to have certain biases, e.g. for the simple negation template “{P1} is not a {PROF}, {P2} is.” as context, and “Who is a {PROF}?” as question, if we set {PROF} = doctor, {P1} to male names and {P2} to female names (e.g. “John is not a doctor, Mary is.”; “Who is a doctor?”), the model fails (picks the man as the doctor) 89.1% of the time. If the situation is reversed, the failure rate is only 3.2% (woman predicted as doctor). If {PROF} = secretary, it wrongly picks the man only 4.0% of the time, and the woman 60.5% of the time. Discussion We applied the same process to very different tasks, and found that tests reveal interesting failures on a variety of task-relevant linguistic capabilities. While some tests are task specific (e.g. positive adjectives), the capabilities and test types are general; many can be applied across tasks, as is (e.g. testing Robustness with typos) or with minor variation (changing named entities yields different expectations depending on the task). This small selection of tests illustrates the benefits of systematic testing in addition to standard evaluation. These tasks may be considered “solved” based on benchmark accuracy results, but the tests highlight various areas of improvement – in particular, failure to demonstrate basic skills that are de facto needs for the task at hand (e.g. basic negation, agent/object distinction, etc). Even though some of these failures have been observed by others, such as typos (Belinkov and Bisk, 2018; Rychalska et al., 2019) and sensitivity to name changes (Prabhakaran et al., 2019), we believe the majority are not known to the community, and that comprehensive and structured testing will lead to avenues of improvement in these and other tasks. 4 User Evaluation The failures discovered in the previous section demonstrate the usefulness and flexibility of CheckList. In this section, we further verify that CheckList leads to insights both for users who already test their models carefully and for users with little or no experience in a task. 4.1 CheckListing a Commercial System We approached the team responsible for the general purpose sentiment analysis model sold as a service by Microsoft (q on Table 1). Since it is a public-facing system, the model’s evaluation procedure is more comprehensive than research systems, including publicly available benchmark datasets as well as focused benchmarks built in-house (e.g. negations, emojis). Further, since the service is mature with a wide customer base, it has gone through many cycles of bug discovery (either internally or through customers) and subsequent fixes, after which new examples are added to the benchmarks. Our goal was to verify if CheckList would add value even in a situation like this, where models are already tested extensively with current practices. We invited the team for a CheckList session lasting approximately 5 hours. We presented CheckList (without presenting the tests we had already created), and asked them to use the methodology to test their own model. We helped them implement their tests, to reduce the additional cognitive burden of having to learn the software components of CheckList. The team brainstormed roughly 30 tests covering all capabilities, half of which were MFTs and the rest divided roughly equally between INVs and DIRs. Due to time constraints, we implemented about 20 of those tests. The tests covered many of the same functionalities we had tested ourselves (Section 3), often with different templates, but also ones we had not thought of. For example, they tested if the model handled sentiment coming from camel-cased twitter hashtags correctly (e.g. “#IHateYou”, “#ILoveYou”), implicit negation (e.g. “I wish it was good”), and others. Further, they proposed new capabilities for testing, e.g. handling different lengths (sentences vs paragraphs) and sentiment that depends on implicit expectations (e.g. “There was no {AC}” when {AC} is expected). Qualitatively, the team stated that CheckList was very helpful: (1) they tested capabilities they had not considered, (2) they tested capabilities that they had considered but are not in the benchmarks, 4909 and (3) even capabilities for which they had benchmarks (e.g. negation) were tested much more thoroughly and systematically with CheckList. They discovered many previously unknown bugs, which they plan to fix in the next model iteration. Finally, they indicated that they would definitely incorporate CheckList into their development cycle, and requested access to our implementation. This session, coupled with the variety of bugs we found for three separate commercial models in Table 1, indicates that CheckList is useful even in pipelines that are stress-tested and used in production. 4.2 User Study: CheckList MFTs We conduct a user study to further evaluate different subsets of CheckList in a more controlled environment, and to verify if even users with no previous experience in a task can gain insights and find bugs in a model. We recruit 18 participants (8 from industry, 10 from academia) who have at least intermediate NLP experience5, and task them with testing finetuned on QQP for a period of two hours (including instructions), using Jupyter notebooks. Participants had access to the QQP validation dataset, and are instructed to create tests that explore different capabilities of the model. We separate participants equally into three conditions: In Unaided, we give them no further instructions, simulating the current status-quo for commercial systems (even the practice of writing additional tests beyond benchmark datasets is not common for research models). In Cap. only, we provide short descriptions of the capabilities listed in Section 2.1 as suggestions to test, while in Cap.+templ. we further provide them with the template and fill-in tools described in Section 2.3. Only one participant (in Unaided) had prior experience with QQP. Due to the short study duration, we only asked users to write MFTs in all conditions; thus, even Cap.+templ. is a subset of CheckList. We present the results in Table 4. Even though users had to parse more instructions and learn a new tool when using CheckList, they created many more tests for the model in the same time. Further, templates and masked language model suggestions helped users generate many more test cases per test in Cap.+templ. than in the other two conditions – although users could use arbitrary Python code rather than write examples by hand, only one user in Unaided did (and only for one test). 5i.e. have taken a graduate NLP course or equivalent. Unaided CheckList Cap. only Cap.+templ. #Tests 5.8 ˘ 1.1 10.2 ˘ 1.8 13.5 ˘ 3.4 #Cases/test 7.3 ˘ 5.6 5.0 ˘ 1.2 198.0 ˘ 96 #Capabilities tested 3.2 ˘ 0.7 7.5 ˘ 1.9 7.8 ˘ 1.1 Total severity 10.8 ˘ 3.8 21.7 ˘ 5.7 23.7 ˘ 4.2 #Bugs (sev ě 3q 2.2 ˘ 1.2 5.5 ˘ 1.7 6.2 ˘ 0.9 Table 4: User Study Results: first three rows indicate number of tests created, number of test cases per test and number of capabilities tested. Users report the severity of their findings (last two rows). Users explored many more capabilities on Cap. only and Cap.+templ. (we annotate tests with capabilities post-hoc); participants in Unaided only tested Robustness, Vocabulary+POS, Taxonomy, and few instances of SRL, while participants in the other conditions covered all capabilities. Users in Cap. only and Cap.+templ. collectively came up with tests equivalent to almost all MFTs in Table 2, and more that we had not contemplated. Users in Unaided and Cap. only often did not find more bugs because they lacked test case variety even when testing the right concepts (e.g. negation). At the end of the experiment, we ask users to evaluate the severity of the failures they observe on each particular test, on a 5 point scale6. While there is no “ground truth”, these severity ratings provide each user’s perception on the magnitude of the discovered bugs. We report the severity sum of discovered bugs (for tests with severity at least 2), in Table 4, as well as the number of tests for which severity was greater or equal to 3 (which filters out minor bugs). We note that users with CheckList (Cap. only and Cap.+templ.) discovered much more severe problems in the model (measured by total severity or # bugs) than users in the control condition (Unaided). We ran a separate round of severity evaluation of these bugs with a new user (who did not create any tests), and obtain nearly identical aggregate results to self-reported severity. The study results are encouraging: with a subset of CheckList, users without prior experience are able to find significant bugs in a SOTA model in only 2 hours. Further, when asked to rate different aspects of CheckList (on a scale of 1-5), users indicated the testing session helped them learn more about the model (4.7 ˘ 0.5), capabilities helped them test the model more thoroughly (4.5 ˘ 0.4), and so did templates (4.3 ˘ 1.1). 61 (not a bug), 2 (minor bug), 3 (bug worth investigating and fixing), 4 (severe bug, model may not be fit for production), and 5 (no model with this bug should be in production). 4910 5 Related Work One approach to evaluate specific linguistic capabilities is to create challenge datasets. Belinkov and Glass (2019) note benefits of this approach, such as systematic control over data, as well as drawbacks, such as small scale and lack of resemblance to “real” data. Further, they note that the majority of challenge sets are for Natural Language Inference. We do not aim for CheckList to replace challenge or benchmark datasets, but to complement them. We believe CheckList maintains many of the benefits of challenge sets while mitigating their drawbacks: authoring examples from scratch with templates provides systematic control, while perturbation-based INV and DIR tests allow for testing behavior in unlabeled, naturally-occurring data. While many challenge sets focus on extreme or difficult cases (Naik et al., 2018), MFTs also focus on what should be easy cases given a capability, uncovering severe bugs. Finally, the user study demonstrates that CheckList can be used effectively for a variety of tasks with low effort: users created a complete test suite for sentiment analysis in a day, and MFTs for QQP in two hours, both revealing previously unknown, severe bugs. With the increase in popularity of end-toend deep models, the community has turned to “probes”, where a probing model for linguistic phenomena of interest (e.g. NER) is trained on intermediate representations of the encoder (Tenney et al., 2019; Kim et al., 2019). Along similar lines, previous work on word embeddings looked for correlations between properties of the embeddings and downstream task performance (Tsvetkov et al., 2016; Rogers et al., 2018). While interesting as analysis methods, these do not give users an understanding of how a fine-tuned (or end-to-end) model can handle linguistic phenomena for the end-task. For example, while Tenney et al. (2019) found that very accurate NER models can be trained using BERT (96.7%), we show BERT finetuned on QQP or SST-2 displays severe NER issues. There are existing perturbation techniques meant to evaluate specific behavioral capabilities of NLP models such as logical consistency (Ribeiro et al., 2019) and robustness to noise (Belinkov and Bisk, 2018), name changes (Prabhakaran et al., 2019), or adversaries (Ribeiro et al., 2018). CheckList provides a framework for such techniques to systematically evaluate these alongside a variety of other capabilities. However, CheckList cannot be directly used for non-behavioral issues such as data versioning problems (Amershi et al., 2019), labeling errors, annotator biases (Geva et al., 2019), worst-case security issues (Wallace et al., 2019), or lack of interpretability (Ribeiro et al., 2016). 6 Conclusion While useful, accuracy on benchmarks is not sufficient for evaluating NLP models. Adopting principles from behavioral testing in software engineering, we propose CheckList, a model-agnostic and task-agnostic testing methodology that tests individual capabilities of the model using three different test types. To illustrate its utility, we highlight significant problems at multiple levels in the conceptual NLP pipeline for models that have “solved” existing benchmarks on three different tasks. Further, CheckList reveals critical bugs in commercial systems developed by large software companies, indicating that it complements current practices well. Tests created with CheckList can be applied to any model, making it easy to incorporate in current benchmarks or evaluation pipelines. Our user studies indicate that CheckList is easy to learn and use, and helpful both for expert users who have tested their models at length as well as for practitioners with little experience in a task. The tests presented in this paper are part of CheckList’s open source release, and can easily be incorporated into existing benchmarks. More importantly, the abstractions and tools in CheckList can be used to collectively create more exhaustive test suites for a variety of tasks. Since many tests can be applied across tasks as is (e.g. typos) or with minor variations (e.g. changing names), we expect that collaborative test creation will result in evaluation of NLP models that is much more robust and detailed, beyond just accuracy on held-out data. CheckList is open source, and available at https://github.com/marcotcr/checklist. Acknowledgments We would like to thank Sara Ribeiro, Scott Lundberg, Matt Gardner, Julian Michael, and Ece Kamar for helpful discussions and feedback. Sameer was funded in part by the NSF award #IIS-1756023, and in part by the DARPA MCS program under Contract No. N660011924033 with the United States Office of Naval Research. 4911 References Saleema Amershi, Andrew Begel, Christian Bird, Rob DeLine, Harald Gall, Ece Kamar, Nachi Nagappan, Besmira Nushi, and Tom Zimmermann. 2019. Software engineering for machine learning: A case study. In International Conference on Software Engineering (ICSE 2019) - Software Engineering in Practice track. IEEE Computer Society. Boris Beizer. 1995. Black-box Testing: Techniques for Functional Testing of Software and Systems. John Wiley & Sons, Inc., New York, NY, USA. Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations. Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72. Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In Empirical Methods in Natural Language Processing (EMNLP), pages 1161–1166. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of NAACL-HLT, pages 1875–1885. Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, et al. 2019. Probing what different nlp tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (* SEM 2019), pages 235–249. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress Test Evaluation for Natural Language Inference. In International Conference on Computational Linguistics (COLING). Kayur Patel, James Fogarty, James A Landay, and Beverly Harrison. 2008. Investigating statistical machine learning as a tool for software development. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 667–676. ACM. Vinodkumar Prabhakaran, Ben Hutchinson, and Margaret Mitchell. 2019. Perturbation sensitivity analysis to detect unintended model biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5740–5745, Hong Kong, China. Association for Computational Linguistics. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784– 789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do imagenet classifiers generalize to imagenet? In International Conference on Machine Learning, pages 5389–5400. Marco Tulio Ribeiro, Carlos Guestrin, and Sameer Singh. 2019. Are red roses red? evaluating consistency of question-answering models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6174–6184. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. ACM. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In Association for Computational Linguistics (ACL). Anna Rogers, Shashwath Hosur Ananthakrishna, and Anna Rumshisky. 2018. What’s in your embedding, and how it predicts task performance. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2690–2703, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Barbara Rychalska, Dominika Basaj, Alicja Gosiewska, and Przemysław Biecek. 2019. Models in the wild: On corruption robustness of neural nlp systems. In International Conference on Neural Information Processing, pages 235–247. Springer. Sergio Segura, Gordon Fraser, Ana B Sanchez, and Antonio Ruiz-Cortés. 2016. A survey on metamorphic testing. IEEE Transactions on software engineering, 42(9):805–824. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In 4912 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Yulia Tsvetkov, Manaal Faruqui, and Chris Dyer. 2016. Correlation-based intrinsic evaluation of word vector representations. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 111–115, Berlin, Germany. Association for Computational Linguistics. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing nlp. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, pages 3261–3275. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel S Weld. 2019. Errudite: Scalable, reproducible, and testable error analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 747–763.
2020
442
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4913–4926 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4913 Code and Named Entity Recognition in StackOverflow Jeniya Tabassum, Mounica Maddela, Wei Xu, Alan Ritter Department of Computer Science and Engineering The Ohio State University {bintejafar.1, maddela.4, xu.1265, ritter.1492}@osu.edu Abstract There is an increasing interest in studying natural language and computer code together, as large corpora of programming texts become readily available on the Internet. For example, StackOverflow currently has over 15 million programming related questions written by 8.5 million users. Meanwhile, there is still a lack of fundamental NLP techniques for identifying code tokens or software-related named entities that appear within natural language sentences. In this paper, we introduce a new named entity recognition (NER) corpus for the computer programming domain, consisting of 15,372 sentences annotated with 20 fine-grained entity types. We trained indomain BERT representations (BERTOverflow) on 152 million sentences from StackOverflow, which lead to an absolute increase of +10 F1 score over off-the-shelf BERT. We also present the SoftNER model which achieves an overall 79.10 F1 score for code and named entity recognition on StackOverflow data. Our SoftNER model incorporates a context-independent code token classifier with corpus-level features to improve the BERTbased tagging model.1 1 Introduction Recently there has been significant interest in modeling human language together with computer code (Quirk et al., 2015; Iyer et al., 2016; Yin and Neubig, 2018), as more data becomes available on websites such as StackOverflow and GitHub. This is an ambitious yet promising direction for scaling up language understanding to richer domains. Access to domain-specific NLP tools could help a wide range of downstream applications. For example, extracting software knowledge bases from 1Our code and data are available at: https:// github.com/jeniyat/StackOverflowNER/ Figure 1: Examples of software-related named entities in a StackOverflow post. text (Movshovitz-Attias and Cohen, 2015), developing better quality measurements of StackOverflow posts (Ravi et al., 2014), finding similar questions (Amirreza Shirani, 2019) and more. However, there is a lack of NLP resources and techniques for identifying software-related named entities (e.g., variable names or application names) within natural language texts. In this paper, we present a comprehensive study that investigates the unique challenges of named entity recognition in the social computer programming domain. These named entities are often ambiguous and have implicit reliance on the accompanied code snippets. For example, the word ‘list’ commonly refers to a data structure, but can also be used as a variable name (Figure 1). In order to recognize these entities, we propose a software-related named entity recognizer (SoftNER) that utilizes an attention network to combine the local sentence-level context with corpuslevel information extracted from the code snippets. Using our newly annotated corpus of 15,372 sentences in StackOverflow, we rigorously test our proposed SoftNER model, which outperforms BiLSTM-CRF model and fine-tuned BERT model for identifying 20 types of software-related named entities. Our key contributions are the following: • A new StackOverflow NER corpus manually annotated with 20 types of named en4914 tities, including all in-line code within natural language sentences (§2). We demonstrate that NER in the software domain is an ideal benchmark task for testing effectiveness of contextual word representations, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), due to its inherent polysemy and salient reliance on context. • An in-domain trained neural SoftNER tagger for StackOveflow (§3) that can recognize 20 fine-grained named entity types related to software developing. We also tested its performance on GitHub data of readme files and issue reports. • A code token recognizer (§3.1) that utilizes StackOveflow code snippets to capture the spelling patterns of code-related tokens, and consistently improves the NER tagger. • In-domain pretrained ELMo and BERT representations (§3.3) on 152 million sentences from StackOverflow that significantly outperforms off-the-shelf ELMo and leads to more than 21 points increase in F1 score over offthe-shelf BERT. Overall, our named entity tagger (SoftNER) achieves a 79.10% F1 score on StackOverflow and 61.08% F1 score on GitHub data for extracting the 20 software related named entity types. We believe this performance is sufficiently strong to be practically useful. We have released our data and code, including the named entity tagger, our annotated corpus, annotation guideline, a specially designed tokenizer, and pre-trained StackOverflow BERT and ELMo embeddings. 2 Annotated StackOverflow Corpus In this section, we describe the construction of our StackOverflow NER corpus. We randomly selected 1,237 question-answer threads from StackOverflow 10-year archive (from September 2008 to March 2018) and manually annotated them with 20 types of entities. For each question, four answers were annotated, including the accepted answer, the most upvoted answer, as well as two randomly selected answers (if they exist). Table 1 shows the statistics of our corpus. 40% of the question-answer threads were double-annotated, which are used as the development and test sets in our experiments (§4). We also annotated 6,501 sentences from GitHub readme files and issue reports as additional evaluation data. Train Dev Test Total #questions 741 247 249 1,237 #answers 897 289 315 1,501 #sentences 9,315 2,942 3,115 15,372 #tokens 136,996 43,296 45,541 225,833 #entities 11,440 3,949 3,733 19,122 per Question per Answer avg. #sentences 6.84 4.60 avg. #tokens 98.46 69.37 avg. #entities 7.62 5.11 avg. #tokens per sent. 14.38 15.08 Table 1: Statistics of our StackOverflow NER corpus. These counts exclude all the code blocks and output blocks (i.e., lines that appear within ⟨code⟩and ⟨blockquote⟩tags). 2.1 Annotation Schema We defined and annotated 20 types of fine-grained entities, including 8 code-related entities and 12 natural language entities. The code entities include mentions of CLASS, VARIABLE, IN LINE CODE, FUNCTION, LIBRARY, VALUE, DATA TYPE, and HTML XML TAG. Whereas the natural language entities include mentions of APPLICATION, UI ELEMENT, LANGUAGE, DATA STRUCTURE, ALGORITHM, FILE TYPE, FILE NAME, VERSION, DEVICE, OS, WEBSITE, and USER NAME. Our annotation guideline was developed through several pilots and further updated with notes to resolve difficult cases as the annotation progressed.2 Each entity type was defined to encourage maximum span length (e.g., ‘SGML parser’ instead of ‘SGML’). We annotated noun phrases without including modifiers (e.g., ‘C’ instead of ‘Plain C’), except a few special cases (e.g., ‘rich text’ as a common FILE TYPE). On average, an entity contains about 1.5 tokens. While VARIABLE, FUNCTION and CLASS names mostly consist of only a single token, our annotators found that some are written as multiple tokens when mentioned in natural language text (e.g., ‘array list’ for ‘ArrayList’ in Figure 1). The annotators were asked to read relevant code blocks or software repositories to make a decision, if needed. Annotators also searched Google or Wikipedia to categorize unfamiliar cases. The annotators were asked to update, correct, or add annotations from the user provided ⟨code⟩ markdown tags. StackOverflow users can utilize ⟨code⟩markdowns to highlight the code entities 2Our annotation guideline is available at: https:// github.com/jeniyat/StackOverflowNER/. 4915 within the natural language sentences. However, in reality, many users do not enclose the code snippets within the ⟨code⟩tags; and sometimes use them to highlight non-code elements, such as email addresses, user names, or natural language words. While creating the StackOverflow NER corpurs, we found that 59.73% of code-related entities are not marked by the StackOverflow users. Moreover, only 75.54% of the ⟨code⟩enclosed texts are actually code-related, while 10.12% used to are highlighting natural language texts. The rest of cases are referring to non-code entities, such as SOFTWARE NAMES and VERSIONS. While markdown tag could be a useful feature for entity segmentation (§3.1.3), we emphasize the importance of having a human annotated corpus for training and evaluating NLP tools in the software domain. 2.2 Annotation Agreement Our corpus was annotated by four annotators who are college students majored in computer science. We used a web-based annotation tool, BRAT (Stenetorp et al., 2012), and provided annotators with links to the original post on StackOverflow. For every iteration, each annotator was given 50 question-answer threads to annotate, 20 of which were double-annotated. An adjudicator then discussed disagreements with annotators, who also cross-checked the 30 single-annotated questions in each batch. The inter-annotator agreement is 0.62 before adjudication, measured by span-level Cohen’s Kappa (Cohen, 1960). 2.3 Additional GitHub Data To better understand the domain adaptability of our work, we further annotated the readme files and issue reports from 143 randomly sampled repositories in the GitHub dump (Gousios and Spinellis, 2012) (from October 29, 2007 to December 31, 2017). We removed all the code blocks from the issue reports and readme files collected from these 143 repositories. The resulting GitHub NER dataset consists of 6,510 sentences and 10,963 entities of 20 types labeled by two inhouse annotators. The inter-annotator agreement of this dataset is 0.68, measured by span-level Cohen’s Kappa. 2.4 StackOverflow/GitHub Tokenization We designed a new tokenizer, SOTOKENIZER, specifically for the social computer programming domain. StackOverflow and GitHub posts exhibit common features of web texts, including abbreviations, emoticons, URLs, ungrammatical sentences and spelling errors. We found that tokenization is non-trivial as many code-related tokens are mistakenly split by the existing web-text tokenizers, including the CMU Twokenizer (Gimpel et al., 2011), Stanford TweetTokenizer (Manning et al., 2014), and NLTK Twitter Tokenizer (Bird et al., 2009): txScope.Complete() [ ‘txScope’ ‘.’ ‘Complete’ ‘(’ ‘)’ ] std::condition variable [ ‘std’ ‘:’ ‘:’ ‘condition variable’] math.h [ ‘math’ ‘.’ ‘h’] ⟨span⟩ [‘⟨’ ‘span’ ‘⟩’] a==b [‘a’ ‘=’ ‘=’ ‘b’] Therefore, we implemented a new tokenizer, using Twokenizer3 as the starting point and added additional regular expression rules to avoid splitting code-related tokens. 3 Named Entity Recognition Models The extraction of software-related named entities imposes significant challenges as it requires resolving a significant amount of unseen tokens, inherent polysemy, and salient reliance on context. Unlike news or biomedical data, spelling patterns and long-distance dependencies are more crucial in the software domain to resolve ambiguities and categorize unseen words. Taken in isolation, many tokens are highly ambiguous and can refer to either programming concepts or common English words, such as: ‘go’, ‘react’, ‘spring’, ‘while’, ‘if’, ‘select’. To address these challenges, we design the SoftNER model that leverages sentential context to disambiguate and domain-specific character representations to handle rare words. Figure 2 shows the architecture of our model, which consists of primarily three components: • An input embedding layer (§3.1) that extracts contextualized embeddings from the BERTbase model and two new domainspecific embeddings for each word in the input sentence. • A embedding attention layer (§3.2) that combines the three word embeddings using an attention network. • A linear-CRF layer that predicts the entity type of each word using the attentive word representations from the previous layer. 3https://github.com/myleott/ ark-twokenize-py 4916 Figure 2: Our SoftNER model. It utilizes an attention network to combine the contextual word embeddings (BERTbase) with the domain-specific embeddings (Code Recognizer and Entity Segmenter). The detailed structure of the attention network is depicted on the right. 3.1 Input Embeddings For each word in the input sentence, we extract in-domain BERT (Devlin et al., 2019) representations and two new domain-specific embeddings produced by (i) a Code Recognizer, which represents if a word can be part of a code entity regardless of context; and (ii) an Entity Segmenter, that predicts whether a word is part of any named entity in the given sentence. Each domain-specific embedding is created by passing a binary value, predicted by a network independent from the SoftNER model. We describe the two standalone auxiliary models that generate these domain-based vectors below. 3.1.1 In-domain Word Embeddings Texts in the software engineering domain contain programming language tokens, such as variable names or code segments, interspersed with natural language words. This makes input representations pre-trained on general book or Wikipedia texts unsuitable for software domain. We pre-trained different in-domain word embeddings, including BERT (BERTOverflow), ELMo (ELMoVerflow), and GloVe (GloVerflow) vectors on the StackOverflow 10-year archive4 of 152 million sentences and 2.3 billion tokens (§3.3). 3.1.2 Context-independent Code Recognition Humans with prior programming knowledge can easily recognize that ‘list()’ is code, ‘list’ can be either code or a common English word, whereas ‘listing’ is more likely a non-code natural language token. We thus introduce a code recognition module to capture such prior probability of how 4https://archive.org/details/ stackexchange likely a word can be a code token without considering any contextual information. It is worth noting that this standalone code recognition model is also useful for language-and-code research, such as retrieving code snippets based on natural language queries (Iyer et al., 2016; Giorgi and Bader, 2018; Yao et al., 2019) Our code recognition model (Code Recognizer) is a binary classifier. It utilizes language model features and spelling patterns to predict whether a word is a code entity. The input features include unigram word and 6-gram character probabilities from two language models (LMs) that are trained on the Gigaword corpus (Napoles et al., 2012) and all the code-snippets in the StackOverflow 10-year archive respectively. We also pre-trained FastText (Joulin et al., 2016) word embeddings using these code-snippets, where a word vector is represented as a sum of its character ngrams. We first transform each ngram probability into a k-dimensional vector using Gaussian binning (Maddela and Xu, 2018), which has shown to improve the performance of neural models using numeric features (Sil et al., 2017; Liu et al., 2016; Maddela and Xu, 2018). We then feed the vectorized features into a linear layer, concatenate the output with FastText character-level embeddings, and pass them through another hidden layer with sigmoid activation. We predict the token as a codeentity if the output probability is greater than 0.5. This binary prediction is then converted into a vector and used as an input to the SoftNER model. 3.1.3 Entity Segmentation The segmentation task refers to identifying entity spans without assigning entity category. Entity segmentation is simpler and less error-prone 4917 than entity recognition as it does not require a fine-grained classification of the entity types. In fact, a segmentation model (Entity Segmenter) trained on our annotated StackOverflow corpus can achieve 90.41% precision on the dev set (details in §4.5), predicting whether each token is a part of entity in the given sentence. Our segmentation model fine-tunes the in-domain BERT after concatenating it with two hand-crafted features: • Word Frequency represents the word occurrence count in the training set. As many code tokens are defined by individual users, they occur much less frequently than normal English words. In fact, code and non-code tokens have an average frequency of 1.47 and 7.41 respectively in our corpus. Moreover, ambiguous token that can be either code or non-code entities, such as ‘windows’, have a much higher average frequency of 92.57. To leverage this observation, we include word frequency as a feature, converting the scalar value into a k-dimensional vector by Gaussian binning (Maddela and Xu, 2018). • Code Markdown indicates whether the given token appears inside a ⟨code⟩markdown tag in the StackOverflow post. It is worth noting that ⟨code⟩tags are noisy as users do not always enclose inline code in a ⟨code⟩tag or sometimes use the tag to highlight non-code texts (details in §2.1). Nevertheless, we find it helpful to include the markdown information as a feature as it improves the performance of our segmentation model. The inclusion of hand-crafted features is influenced by Wu et al. (2018), where word-shapes and POS tags were shown to improve the performance of sequence tagging models. 3.2 Embedding-Level Attention For each input word wi in the input sentence, we have three embeddings: BERT (wi1), Code Recognizer (wi2), and Entity Segmenter (wi3). We introduce the embedding-level attention αit (t ∈ {1, 2, 3}), which captures each embedding’s contribution towards the meaning of the word, to combine them together. To compute αit, we pass the input embeddings through a bidirectional GRU and generate their corresponding hidden representations hit = ←−−→ GRU(wit). These vectors are then passed through a non-linear layer, which outputs uit = tanh(Wehit + be). We introduce an embedding-level context vector ue, which is randomly initialized and updated during the training process. This context vector is combined with the hidden embedding representation using a softmax function to extract weight of the embeddings: αit = exp(uitT ue) P texp(uitT ue). Finally, we create the word vector by a weighted sum of all the information from different embeddings as wordi = P tαithit. The aggregated word vector wordi is then fed into a linear-CRF layer, which predicts the entity category for each word based the BIO tagging schema. 3.3 Implementation Details We use PyTorch framework to implement our proposed SoftNER model and its two auxiliary components, namely code recognition and entity segmentation. The input to the SoftNER model include 850-dimensional vectors extracted from both the code recognizer and the entity segmenter. We pre-trained BERTbase, ELMo and GloVe vectors on 152 million sentences from the StackOverflow, excluding sentences from the 1,237 posts in our annotated corpus. The pretraining of the 768-dimensional BERTbase model with 64,000 WordPiece vocabulary took 7 days on a Google TPU. The pre-training of 1024dimensional ELMo vectors took 46 days on 3 NVIDIA Titan X Pascal GPUs. The pre-training of 300-dimensional GloVe embeddings (Pennington et al., 2014) with a frequency cut-off of 5 took 8 hours on a server with 32 CPU cores and 386 GB memory. We train the SoftNER model and the two auxiliary models separately. Our segmentation model follows the simple BERT fine-tuning architecture except for the input, where BERT embeddings are concatenated with 100-dimensional code markdown and 10-dimensional word frequency features. We set the number of bins k to 10 for Gaussian vectorization. Our code recognition model is a feedforward network with two hidden layers and a single output node with sigmoid activation. 4 Evaluation In this section, we show that our SoftNER model outperforms all the previous NER approaches on the StackOverflow and GitHub data. We also discuss the factors pivotal to the performance of our model, namely pre-trained in-domain BERT embeddings and two domain-specific auxiliary tasks. 4918 P R F1 Test set Feature-based CRF 71.77 39.70 51.12 BiLSTM-CRF (ELMoVerflow) 73.03 64.82 68.68 Attentive BiLSTM-CRF (ELMoVerflow) 78.22 78.59 78.41 Fine-tuned BERT 77.02 45.92 57.54 Fine-tuned BERTOverflow 68.77 67.47 68.12 SoftNER (BERTOverflow) 78.42 79.79 79.10 Dev set Feature-based CRF 66.85 46.19 54.64 BiLSTM-CRF (ELMoVerflow) 74.44 68.71 71.46 Attentive BiLSTM-CRF (ELMoVerflow) 79.43 80.00 79.72 Fine-tuned BERT 79.57 46.42 58.64 Fine-tuned BERTOverflow 72.11 70.51 71.30 SoftNER (BERTOverflow) 78.81 81.72 80.24 Table 2: Evaluation on the dev and test sets of the StackOverflow NER corpus. Our SoftNER model outperforms the existing approaches. 4.1 Data We train and evaluate our SoftNER model on the StackOverflow NER corpus of 9,352 train, 2,942 development and 3,115 test sentences we constructed in §2. We use the same data for our segmentation model but replace all the entity tags with an I-ENTITY tag. For the code recognition model, we created a new lexicon of 6,000 unique tokens randomly sampled from the training set of the StackOverflow NER corpus. Each token was labelled independently without context as CODE, AMBIGUOUS or NON-CODE by two annotators with computer science background. The inter-annotator agreement was 0.89, measured by Cohen’s Kappa. After discarding disagreements, we divided the remaining 5,312 tokens into 4,312 train and 1,000 test instances. Then, we merged AMBIGUOUS and NON-CODE categories to facilitate binary classification. We name this dataset of 5312 individual tokens as SOLEXICON. 4.2 Baselines We compare our model with the following baseline and state-of-the-art approaches: • A Feature-based Linear CRF model which uses the standard orthographic, context and gazetteer features, along with the code markdown tags and handcrafted regular expressions to recognize code entities (details in Appendix A). • A BiLSTM-CRF model with in-domain ELMo embeddings (ELMoVerflow; details in §3.3). This architecture is used as the stateof-the-art baseline named-entity recognition models in various domains (Lample et al., 2016; Kulkarni et al., 2018; Dai et al., 2019). • An Attentive BiLSTM-CRF model with in-domain ELMo embeddings as well as domain-specific embeddings from the code recognizer and the entity segmenter. This model combines these three word embeddings using an attention network and then utilizes a BiLSTM-CRF layer to predict the entity type of each input word (details in Appendix B). • A Fine-tuned out-of-domain BERT model where we fine-tune the original BERTbase cased checkpoint5 on our annotated corpus. • A Fine-tuned in-domain BERT model where we fine-tune the in-domain pre-trained BERTbase (BERTOverflow; details in §3.3) cased checkpoint6 on our annotated corpus. 4.3 Results Table 2 shows the precision (P), recall (R) and F1 score comparison of different models evaluated on the StackOverflow NER corpus. Our SoftNER model outperforms the existing NER approaches in all the three metrics. Fine-tuning over in-domain trained BERT (BERTOverflow), in particular, improves F1 score by more than 10 points in comparison to using the original BERT. 4.4 In-domain vs. out-of-domain Word Embeddings Table 3 shows the performance comparison between in-domain and out-of-domain word embeddings. We consider off-the-shelf BERT (Devlin et al., 2019), ELMo (Peters et al., 2018) and GloVe (Pennington et al., 2014) vectors trained on newswire and web texts as out-of-domain embeddings. When using the BiLSTM-CRF model (Lample et al., 2016; Kulkarni et al., 2018; Dai et al., 2019), we observe a large increase of 13.64 F1 score when employing in-domain ELMo (ELMoVerflow) representations over indomain GloVe (GloVeOverflow), and an increase of 15.71 F1 score over out-of-domain ELMo. We found that fine-tuning out-of-domain BERT (Devlin et al., 2019) outperforms the out-of-domain 5https://github.com/google-research/ BERT 6https://github.com/lanwuwei/ BERTOverflow/ 4919 P R F1 out-of-domain Word Embeddings GloVe (newswire+Wiki+Web) 61.71 49.08 54.67 ELMo (newswire+Wiki) 67.66 47.41 55.75 Fine-tuned BERT (book+Wiki) 45.92 77.02 57.54 In-Domain Word Embeddings GloVeOverflow 66.28 51.28 57.82 ELMoVerflow 74.44 68.71 71.46 Fine-tuned BERTOverflow 72.11 70.51 71.30 Table 3: Performance of fine-tuned BERT model, BiLSTM-CRF model with GloVe and ELMo embeddings on the dev set of our StackOverflow NER corpus. Contextualized word representations show a clear benefit when trained on the in-domain StackOverflow data. ELMo (Table 3), although it underperforms indomain ELMo (ELMoVerflow) by 12.92 F1 score and in-domain BERt (BERTOverflow) by 12.76 F1 score (Table 2). Similarly, in-domain ELMo outperforms the out-of-domain fine-tuned BERT by 10.67 F1 score on Github data (Table 8; more details in §5). It is worth noting that, the performance improvements from contextual word embeddings are more pronounced on our software domain than on newswire and biomedical domains. Original ELMo and BERT outperform GloVe by 2.06 and 2.12 points in F1 respectively on CoNLL 2003 NER task of newswire data (Peters et al., 2018; Devlin et al., 2019). For biomedical domain, indomain ELMo outperforms out-of-domain ELMo by only 1.33 points in F1 on the BC2GM dataset (Sheikhshabbafghi et al., 2018). We hypothesized that the performance gains from the in-domain contextual embeddings are largely aided by the model’s ability to handle ambiguous and unseen tokens. The increase in performance is especially notable (41% −→70% accuracy) for unseen tokens, which constitute 38% of the tokens inside gold entity spans in our dataset. This experiment also demonstrates that our annotated NER corpus provides an attractive test-bed for measuring the adaptability of different contextual word representations. 4.5 Evaluation of Auxiliary Systems The domain-specific vectors produced by the Code Recognizer and the Entity Segmenter are also crucial for the overall performance of our SoftNER model. Table 4 shows an ablation study. Removing code recognizer vectors and entity segmenter vectors results in a drop of 2.19 and 3.69 in F1 scores respectively. If we replace embeddinglevel attention with a simple concatenation of emP R F1 SoftNER 78.81 81.72 80.24 – Embedding Attention 75.83 79.09 77.43 – Code Recognizer 78.76 77.35 78.05 – Entity Segmenter 77.82 75.32 76.55 Table 4: Ablation study of SoftNER on the dev set of StackOverflow NER corpus. P R F1 Token Frequency 33.33 2.25 4.22 Most Frequent Label 82.21 58.59 68.42 Our Code Recognition Model 78.43 83.33 80.80 – Character ngram LMs 64.13 84.51 72.90 – Word ngram LMs 67.98 72.96 70.38 – FastText Embeddings 76.12 81.69 78.81 Table 5: Evaluation results and feature ablation of our code recognition model on SOLEXICON test set of 1000 manually labeled unique tokens, which are sampled from the train set of StackOverflow NER corpus. beddings, the performance also drop by 2.81 F1. In addition, we evaluate the effectiveness of our two domain-specific auxiliary systems on their respective tasks. Code Recognition: Table 5 compares the performance of our code recognition model with other baselines on the SLEXICON test set (§4.1), which consists of 1,000 random words from the train set of StackOverflow NER corpus classified as either a code or a non-code token. The baselines include: (i) a Most Frequent Label baseline, which assigns the most frequent label according to the human annotation in SOLEXICON train set; and (ii) a frequency baseline, which learns a threshold over token frequency in the train set of StackOverflow NER corpus using a decision tree classifier. Our model outperforms both baselines in terms of F1 score. Although the most frequent label baseline achieves better precision than our model, it performs poorly on unseen tokens resulting in a large drop in recall and F1 score. The ablation experiments show that the FastText word embeddings along with the character and word-level features are crucial for the code recognition model. Entity Segmentation: Table 6 shows the performance of our segmentation model on the dev set of our StackOverflow corpus, where the entity tags are replaced by an I-ENTITY tag. Our model achieves an F1 score of 88.09 and with 90.41% precision and 85.89% recall. Incorporating word frequency and code markdown feature increases the F1 score by 1.57 and 2.66 points respectively. The low 10.5 F1 score of Stanford NER 4920 P R F1 Stanford NER Tagger 63.02 5.74 10.52 Our Entity Segmentation Model 90.41 85.89 88.09 – Word Frequency 88.32 84.79 86.52 – Code Markdown 86.23 84.64 85.43 Table 6: Evaluation of our segmentation model on the dev set of the StackOverflow NER corpus. tagger (Manning et al., 2014), which is trained on newswire text, demonstrates the importance of domain-specific tools for the software engineering domain. 4.6 Error Analysis Based on our manual inspection, the incorrect predictions made by NER systems on StackOverflow data can be largely classified into the following two categories (see examples in Table 7): • Segmentation Mismatch refers to the cases where model predicts the boundary of entities incorrectly. Our SoftNER model reduces such segmentation errors by 89.36% compared to the fine-tuned BERTOverflow baseline. • Entity-Type Mismatch refers to the errors where a code entity (e.g., names of variables) is predicted as a non-code entity (e.g., names of devices), and vice-versa. Our SoftNER model reduces such entity type errors by 13.54% compared to the fine-tuned BERTOverflow baseline. As illustrated in Figure 3, our SoftNER model reduced the errors in both categories by incorporating the auxiliary outputs from segmenter and code recognizer model. 5 Domain Adaptation to GitHub data To understand the domain adaptability of our StackOverflow based SoftNER, we evaluate its performance on readme files and issue reports from 143 randomly sampled repositories in the GitHub dump (Gousios and Spinellis, 2012). We also trained ELMo embeddings (ELMoGithub) on 4 million sentences from randomly sampled 5,000 GitHub repositories. Table 8 shows that the performance of our SoftNER model using StackOverflow ELMo embeddings is similar to the top performing BiLSTMCRF model using GitHub ELMo embeddings with a difference of only 1.61 points in F1. We also did not observe any significant gain after adding Segmentation Mismatch Entity-Type Mismatch Table 7: Representative examples of system errors. Figure 3: Comparison of errors made by the fine-tuned BERTOverflow baseline and our SoftNER model on the dev set of the StackOverflow NER corpus. In the heatmap, darker cell color corresponds to higher error counts. Our SoftNER model reduces errors in all the categories. P R F1 Feature-Based CRF 43.16 35.71 39.09 BiLSTM-CRF (ELMoGitHub) 64.53 60.96 62.69 Attentive BiLSTM-CRF (ELMoVerflow) 62.05 59.20 60.59 Attentive BiLSTM-CRF (ELMoGitHub) 63.29 60.89 62.07 Fine-tuned out-of-domain BERT 56.59 48.13 52.02 Fine-tuned BERTOverflow 61.71 58.75 60.19 SoftNER (BERTOverflow) 61.92 60.26 61.08 Table 8: Evaluation on the GitHub NER dataset of readme files and issue posts. All the models are trained on our StackOverflow NER corpus. Our SoftNER model performs close to BiLSTM-CRF model trained on the GitHub ELMo embeddings. the code recognizer and segmenter vectors to the Github ELMo embeddings. We think one likely explanation is that GitHub data contains less coderelated tokens when compared to StackOverflow. The percentage of code-related entity tokens is 63.20% in GitHub and 77.21% in StackOverflow. Overall, we observe a drop of our SoftNER tagger from 79.10 F1 on StackOverflow (Table 2) to 61.08 F1 on GitHub data (Table 8) in F1 due to domain mismatch. However, we believe that our NER tagger still achieves sufficient performance to be useful for applications on GitHub.7 We leave investigation of semi-supervised learning and other domain adaptation approaches for future work. 7As a reference, the state-of-the-art performance for 10class Twitter NER is 70.69 F1(Zhang et al., 2018). 4921 6 Related Work The CoNLL 2003 dataset (Sang and De Meulder, 2003) is a widely used benchmark for named entity recognition, which contains annotated newswire text from the Reuters RCV1 corpus. State-of-the-art approaches on this dataset (Baevski et al., 2019) use a bidirectional LSTM (Lample et al., 2016; Ma and Hovy, 2016) with conditional random field (Collobert et al., 2011) and contextualized word representations (McCann et al., 2017; Peters et al., 2018; Devlin et al., 2019). Named entity recognition has been explored for new domains and languages, such as social media (Finin et al., 2010; Ritter et al., 2011; Plank et al., 2014; Derczynski et al., 2015; Limsopatham and Collier, 2016; Aguilar et al., 2017), biomedical texts (Collier and Kim, 2004; Greenberg et al., 2018; Kulkarni et al., 2018), multilingual texts (Benajiba et al., 2008; Xie et al., 2018) and codeswitched corpora (Aguilar et al., 2018; Ball and Garrette, 2018). Various methods have been investigated for handling rare entities, for example incorporating external context (Long et al., 2017) or approaches that make use of distant supervision (Choi et al., 2018; Yang et al., 2018; Onoe and Durrett, 2019). There has been relatively little prior work on named entity recognition in the software engineering domain. Ye et al. (2016) annotated 4,646 sentences from StackOverflow with five named entity types (Programming Language, Platform, API, Tool-Library-Framework and Software Standard). The authors used a traditional feature-based CRF to recognize these entities. In contrast, we present a much larger annotated corpus consisting of 15,372 sentences labeled with 20 fine-grained entity types. We also develop a novel attention based neural NER model to extract those finegrained entities. 7 Conclusion In this work, we investigated the task of named entity recognition in the social computer programming domain. We developed a new NER corpus of 15,372 sentences from StackOverflow and 6,510 sentences from GitHub annotated with 20 fine-grained named entities. We demonstrate that this new corpus is an ideal benchmark dataset for contextual word representations, as there are many challenging ambiguities that often require long-distance context to resolve. We also proposed a novel attention based model, named SoftNER, that outperforms the state-of-the-art NER models on this dataset. Furthermore, we investigated the important sub-task of code recognition. Our code recognition model captures additional spelling information beyond then contextual word representations and consistently helps to improve the NER performance. We believe our corpus, StackOverflow-specific BERT embeddings and named entity tagger will be useful for various language-and-code tasks, such as code retrieval, software knowledge base extraction and automated question-answering. Acknowledgement We thank anonymous reviewers for their thoughtful comments. We also thank NVIDIA, Google, and Ohio Supercomputer Center (Center, 2012) for providing GPU/TPU computing resources; Wuwei Lan for kindly helping to train in-domain BERT on StackOverflow data; Sydney Lee, Rita Tong, Lillian Chow, and Raleigh Potluri for help with data annotation. This research is supported in part by the NSF awards IIS-1822754 and IIS1845670, ODNI and IARPA via the BETTER program contract 19051600004, ARO and DARPA via the SocialSim program contract W911NF-17C-0095, Criteo Faculty Research Award to Wei Xu, and Amazon Faculty Research Award to Alan Ritter. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, ARO, DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. References Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018. Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching. Gustavo Aguilar, Suraj Maharjan, Adrian Pastor L´opez-Monroy, and Thamar Solorio. 2017. A Multi-task Approach for Named Entity Recognition in Social Media Data. In Proceedings of the 3rd Workshop on Noisy User-generated Text (WNUT). 4922 David Lo Thamar Solorio Amin Alipour Amirreza Shirani, Bowen Xu. 2019. Question Relatedness on Stack Overflow: The Task, Dataset, and Corpusinspired Models. In Proceedings of the AAAI Reasoning for Complex Question Answering Workshop. Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Cloze-driven pretraining of self-attention networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics. Kelsey Ball and Dan Garrette. 2018. Part-of-Speech Tagging for Code-Switched, Transliterated Texts without Explicit Language Identification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). Yassine Benajiba, Mona Diab, and Paolo Rosso. 2008. Arabic Named Entity Recognition using Optimized Feature Sets. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly Media Inc. Ohio Supercomputer Center. 2012. Oakley supercomputer. Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettlemoyer. 2018. Ultra-Fine Entity Typing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL). Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement. Nigel Collier and Jin-Dong Kim. 2004. Introduction to the Bio-entity Recognition Task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP). Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (almost) from Scratch. Journal of Machine Learning Research (JMLR). Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2019. Using Similarity Measures to Select Pretraining Data for NER. arXiv preprint arXiv:1904.00585. Leon Derczynski, Diana Maynard, Giuseppe Rizzo, Marieke Van Erp, Genevieve Gorrell, Rapha¨el Troncy, Johann Petrak, and Kalina Bontcheva. 2015. Analysis of Named Entity Recognition and Linking for Tweets. Information Processing & Management. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Tim Finin, Will Murnane, Anand Karandikar, Nicholas Keller, Justin Martineau, and Mark Dredze. 2010. Annotating Named Entities in Twitter Data with Crowdsourcing. In Proceedings of the Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech Tagging for Twitter: Annotation, Features, and Experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). John M Giorgi and Gary Bader. 2018. Transfer Learning for Biomedical Named Entity Recognition with Neural Networks. bioRxiv. G. Gousios and D. Spinellis. 2012. GHTorrent: Github’s Data from a Firehose. In Proceedings of the 9th IEEEConference on Mining Software Repositories (MSR). Nathan Greenberg, Trapit Bansal, Patrick Verga, and Andrew McCallum. 2018. Marginal Likelihood Training of BiLSTM-CRF for Biomedical Named Entity Recognition from Disjoint Label Sets. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing Source Code using a Neural Attention Model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H´erve J´egou, and Tomas Mikolov. 2016. FastText.zip: Compressing Text Classification Models. arXiv preprint arXiv:1612.03651. Chaitanya Kulkarni, Wei Xu, Alan Ritter, and Raghu Machiraju. 2018. An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In Proceedings of the 2016 Conference of the North 4923 American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Nut Limsopatham and Nigel Collier. 2016. Bidirectional LSTM for Named Entity Recognition in Twitter Messages. In Proceedings of 2016 the Workshop on Noisy User-generated Text (WNUT). Dan Liu, Wei Lin, Shiliang Zhang, Si Wei, and Hui Jiang. 2016. Neural Networks Models for Entity Discovery and Linking. arXiv preprint arXiv:1611.03558. Teng Long, Emmanuel Bengio, Ryan Lowe, Jackie Chi Kit Cheung, and Doina Precup. 2017. World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical LSTMs Using External Descriptions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP). Xuezhe Ma and Eduard Hovy. 2016. End-to-end Sequence Labeling via Bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Mounica Maddela and Wei Xu. 2018. A WordComplexity Lexicon and A Neural Readability Ranking Model for Lexical Simplification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 2014 Association for Computational Linguistics System Demonstrations (ACL). Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in Translation: Contextualized Word Vectors. In Proceedings of the 2017 Conference on Neural Information Processing Systems (NeurIPS). Dana Movshovitz-Attias and William W Cohen. 2015. KB-LDA: Jointly Learning a Knowledge Base of Hierarchy, Relations, and Facts. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP). Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-Scale Knowledge Extraction. Yasumasa Onoe and Greg Durrett. 2019. Learning to Denoise Distantly-Labeled Data for Entity Typing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NACL). Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP). Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the of Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL). Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP). Sujith Ravi, Bo Pang, Vibhor Rastogi, and Ravi Kumar. 2014. Great Question! Question Quality in Community Q&A. In Eighth International AAAI Conference on Weblogs and Social Media. Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named Entity Recognition in Tweets: An Experimental Study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP). Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL. Golnar Sheikhshabbafghi, Inanc Birol, and Anoop Sarkar. 2018. In-domain Context-aware Token Embeddings Improve Biomedical Named Entity Recognition. In Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis. Avirup Sil, Gourab Kundu, Radu Florian, and Wael Hamza. 2017. Neural Cross-Lingual Entity Linking. In Proceedings of the 30th AAAI Conference on Artificial Intelligence. Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. brat: a Web-based Tool for NLP-Assisted Text Annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL). 4924 Minghao Wu, Fei Liu, and Trevor Cohn. 2018. Evaluating the Utility of Hand-crafted Features in Sequence Labelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A Smith, and Jaime Carbonell. 2018. Neural CrossLingual Named Entity Recognition with Minimal Resources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). Yaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, and Min Zhang. 2018. Distantly Supervised NER with Partial Annotation Learning and Reinforcement Learning. In Proceedings of the 27th International Conference on Computational Linguistics (COLING). Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical Attention Networks for Document Classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). Ziyu Yao, Jayavardhan Reddy Peddamail, and Huan Sun. 2019. CoaCor: Code Annotation for Code Retrieval with Reinforcement Learning. In Proceedings of the World Wide Web Conference (WWW). Deheng Ye, Zhenchang Xing, Chee Yong Foo, Zi Qun Ang, Jing Li, and Nachiket Kapre. 2016. Softwarespecific Named Entity Recognition in Software Engineering Social Content. In Proceedings of the 2016 IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER). Pengcheng Yin and Graham Neubig. 2018. TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP). Qi Zhang, Jinlan Fu, Xiaoyu Liu, and Xuanjing Huang. 2018. Adaptive Co-attention Network for Named Entity Recognition in Tweets. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. 4925 A Feature-Based CRF Baseline We implemented a CRF baseline model using CRFsuite8 to extract the software entities. This model uses standard orthographic, contextual and gazetteer features. It also includes the code markdown tags (§3.1.3) and a set of regular expression features. The regular expressions are developed to recognize specific categories of coderelated entities. Feature ablation experiments on this CRF model are presented in Table 9. One noticeable distinction from the named entity recognizer in many other domains is that the contextual features are not as helpful in feature-based CRFs for classifying software entities. This is because, in the StackOverflow NER corpus a significant number of neighbouring words are shared among different software entities. As an example, the bigram ‘in the’ frequently appears as the left context of the following types: APPLICATION, CLASS, FUNCTION, FILE TYPE, UI ELEMENT, LIBRARY, DATA STRUCTURE and LANGUAGE. P R F1 Feature-based CRF 66.85 46.19 54.64 – Context Features 68.91 43.58 53.39 – Markdown Feature 70.64 40.15 51.20 – Rule and Gazetteer Features 69.71 40.66 51.36 Table 9: Feature based CRF performance with varying input features on dev data. B Attentive BiLSTM CRF with ELMoVerflow We propose a baseline Attentive NER model that utilizes a BiLSTM-CRF network to predict the entity type of each word from its weighted representations. The weighted word representations are extracted by a multi-level attention network, similar to Yang et al.(2016), that combines the contextualized ELMo embeddings with the code recognizer (§3.1.2) and segmenter vector (§C). These three input embeddings are merged together in the first attention layer and then their corresponding weights are calculated using the second layer. Although such multi-level attention is not commonly used in NER, we found it empirically helpful for the software domain (see Table 10). Embedding-Level Attention uses three embeddings, ELMo (wi1), Code Recognizer (wi2), and Entity Segmenter (wi3), for each word wi in 8http://www.chokkan.org/software/crfsuite/ P R F1 Attentive BiLSTM-CRF 79.43 80.00 79.72 – Multi-level Attention 77.68 78.08 77.88 – Code Recognizer 77.18 77.76 77.47 – Entity Segmenter 74.82 75.32 75.07 Table 10: Ablation study of Attentive-NER on the dev set of StackOverflow NER corpus. the input sentence. The embedding-level attention αit (t ∈{1, 2, 3}) to captures each embedding’s contribution towards the meaning of the word. To compute αit, it pass the input embeddings through a bidirectional GRU and generate their corresponding hidden representations hit = ←−−→ GRU(wit). These vectors are then passed through a non-linear layer, which outputs uit = tanh(Wehit + be). It uses an embedding-level context vector, ue, which is learned during the training process. This context vector is combined with the hidden embedding representation using a softmax function to extract weight of the embeddings, αit = exp(uitT ue) P texp(uitT ue). Finally, the word vector is created by a weighted sum of all the information from different embeddings as wordi = P tαithit. Weighted Word Representation uses a wordlevel weighting factor αi to emphasize the importance of each word wi for the NER task. Similar to the embedding-level attention, it calculates αi from the weighted word vectors wordi. A bidirectional GRU is used to encode the summarized information from neighbouring words and thus it get hi = ←−−→ GRU(wordi). This is then passed through a hidden layer which outputs ui = tanh(Wwhi + bw). Then the normalized weight for each word vector is extracted by αi = exp(uiT uw) P texp(uiT uw), where uw is another word-level context vector that is learned during training. The final weighted word representation is computed by word′ i = αihi. Subsequently, the aggregated word vector word′ i is fed into a BiLSTM-CRF network, which predicts the entity category for each word. The complete architecture of the Attentive BiLSTM CRF model is illustrated in Figure 4. Compared to BiLSTM-CRF, our proposed Attentive BiLSTMCRF demonstrates a 9.7 increase in F1 on the test set (Table 2) and reduces the segmentation errors and entity type errors by 80.33% 23.34% respectively. 4926 Figure 4: Our SoftNER model. It utilizes an attention network to combine the contextual word embeddings (ELMo) with the domain-specific embeddings (Code Recognizer and Entity Segmenter). The detailed structure of the attention network is depicted on the right. C Entity Segmentation with ELMoVerflow The Attentive-NER tagger utilizes the outputs from an auxiliary segmentation module which consists of a BiLSTM encoder and a CRF decoder. This model concatenates ELMo embeddings with two hand-crafted features- word frequency and code markdown (§3.1.3). The segmentation model follows the same architecture and training setup as the Attentive-NER model except for the input, where ELMo embeddings are concatenated with 100-dimensional code markdown and 10-dimensional word frequency features. The binary output from this entity segmenter model is later passed as through an embedding layer and used as one of the auxiliary inputs of the Attentive NER model. Table 11 shows the performance of this segmentation model with ELMoVerflow on the dev set. This model achieves an F1 score of 84.3 and an accuracy of 97.4%. The ablation study in Table 11 depicts the importance of the hand-crafted frequency and markdown features for this segmenter model by providing an increment of 1.2 and 2.1 points in the F1 score respectively. P R F1 Entity Segmentation (ELMoVerflow) 86.80 81.86 84.26 – Word Frequency 84.61 81.53 83.04 – Code Markdown 82.49 81.83 82.16 Table 11: Ablation study of our segmentation model with ELMoVerflow on the dev set of the StackOverflow NER corpus.
2020
443
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4927–4940 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4927 Dialogue-Based Relation Extraction Dian Yu1† Kai Sun2† Claire Cardie2 Dong Yu1 1Tencent AI Lab, Bellevue, WA 2Cornell University, Ithaca, NY {yudian, dyu}@tencent.com, [email protected], [email protected] Abstract We present the first human-annotated dialoguebased relation extraction (RE) dataset DialogRE, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. We further offer DialogRE as a platform for studying cross-sentence RE as most facts span multiple sentences. We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks. Considering the timeliness of communication in a dialogue, we design a new metric to evaluate the performance of RE methods in a conversational setting and investigate the performance of several representative RE methods on DialogRE. Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings. DialogRE is available at https:// dataset.org/dialogre/. 1 Introduction Cross-sentence relation extraction, which aims to identify relations between two arguments that are not mentioned in the same sentence or relations that cannot be supported by any single sentence, is an essential step in building knowledge bases from large-scale corpora automatically (Ji et al., 2010; Swampillai and Stevenson, 2010; Surdeanu, 2013). It has yet to receive extensive study in natural language processing, however. In particular, although dialogues readily exhibit cross-sentence relations, most existing relation extraction tasks focus on texts from formal genres such as professionally written and edited news reports or well-edited websites (Elsahar et al., 2018; Yao et al., 2019; † Equal contribution. S1: Hey Pheebs. S2: Hey! S1: Any sign of your brother? S2: No, but he’s always late. S1: I thought you only met him once? S2: Yeah, I did. I think it sounds y’know big sistery, y’know, ‘Frank’s always late.’ S1: Well relax, he’ll be here. Argument pair Trigger Relation type R1 (Frank, S2) brother per:siblings R2 (S2, Frank) brother per:siblings R3 (S2, Pheebs) none per:alternate names R4 (S1, Pheebs) none unanswerable Table 1: A dialogue and its associated instances in DialogRE. S1, S2: anoymized speaker of each utterance. Mesquita et al., 2019; Grishman, 2019), while dialogues have been under-studied. In this paper, we take an initial step towards studying relation extraction in dialogues by constructing the first human-annotated dialogue-based relation extraction dataset, DialogRE. Specifically, we annotate all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends, a corpus that has been widely employed in dialogue research in recent years (Catizone et al., 2010; Chen and Choi, 2016; Chen et al., 2017; Zhou and Choi, 2018; Rashid and Blanco, 2018; Yang and Choi, 2019). Altogether, we annotate 10,168 relational triples. For each (subject, relation type, object) triple, we also annotate the minimal contiguous text span that most clearly expresses the relation; this may enable researchers to explore relation extraction methods that provide fine-grained explanations along with evidence sentences. For example, the bolded text span “brother” in Table 1 indicates the PER:SIBLINGS relation (R1 and R2) between speaker 2 (S2) and “Frank”. Our analysis of DialogRE indicates that the supporting text for most (approximately 96.0%) an4928 notated relational triples includes content from multiple sentences, making the dataset ideal for studying cross-sentence relation extraction. This is perhaps because of the higher person pronoun frequency (Biber, 1991) and lower information density (Wang and Liu, 2011) in conversational texts than those in formal written texts. In addition, 65.9% of relational triples involve arguments that never appear in the same turn, suggesting that multi-turn information may play an important role in dialogue-based relation extraction. For example, to justify that “Pheebs” is an alternate name of S2 in Table 1, the response of S2 in the second turn is required as well as the first turn. We next conduct a thorough investigation of the similarities and differences between dialoguebased and traditional relation extraction tasks by comparing DialogRE and the Slot Filling dataset (McNamee and Dang, 2009; Ji et al., 2010, 2011; Surdeanu, 2013; Surdeanu and Ji, 2014), and we argue that a relation extraction system should be aware of speakers in dialogues. In particular, most relational triples in DialogRE (89.9%) signify either an attribute of a speaker or a relation between two speakers. The same phenomenon occurs in an existing knowledge base constructed by encyclopedia collaborators, relevant to the same dialogue corpus we use for annotation (Section 3.2). Unfortunately, most previous work directly applies existing relation extraction systems to dialogues without explicitly considering the speakers involved (Yoshino et al., 2011; Wang and Cardie, 2012). Moreover, traditional relation extraction methods typically output a set of relations only after they have read the entire document and are free to rely on the existence of multiple mentions of a relation throughout the text to confirm its existence. However, these methods may be insufficient for powering a number of practical real-time dialoguebased applications such as chatbots, which would likely require recognition of a relation at its first mention in an interactive conversation. To encourage automated methods to identify the relationship between two arguments in a dialogue as early as possible, we further design a new performance evaluation metric for the conversational setting, which can be used as a supplement to the standard F1 measure (Section 4.1). In addition to dataset creation and metric design, we adapt a number of strong, representative learning-based relation extraction methods (Zeng et al., 2014; Cai et al., 2016; Yao et al., 2019; Devlin et al., 2019) and evaluate them on DialogRE to establish baseline results on the dataset going forward. We also extend the best-performing method (Devlin et al., 2019) among them by letting the model be aware of the existence of arguments that are dialogue participants (Section 4.2). Experiments on DialogRE demonstrate that this simple extension nevertheless yields substantial gains on both standard and conversational RE evaluation metrics, supporting our assumption regarding the critical role of tracking speakers in dialogue-based relation extraction (Section 5). The primary contributions of this work are as follows: (i) we construct the first human-annotated dialogue-based relation extraction dataset and thoroughly investigate the similarities and differences between dialogue-based and traditional relation extraction tasks, (ii) we design a new conversational evaluation metric that features the timeliness aspect of interactive communications in dialogue, and (iii) we establish a set of baseline relation extraction results on DialogRE using standard learning-based techniques and further demonstrate the importance of explicit recognition of speaker arguments in dialogue-based relation extraction. 2 Data Construction We use the transcripts of all ten seasons (263 episodes in total) of an American television situation comedy Friends, covering a range of topics. We remove all content (usually in parentheses or square brackets) that describes non-verbal information such as behaviors and scene information. 2.1 Relation Schema We follow the slot descriptions1 of the Slot Filling (SF) task in the Text Analysis Conference Knowledge Base Population (TAC-KBP) (McNamee and Dang, 2009; Ji et al., 2010, 2011; Surdeanu, 2013; Surdeanu and Ji, 2014), which primarily focuses on biographical attributes of person (PER) entities and important attributes of organization (ORG) entities. As the range of topics in Friends is relatively restricted compared to large-scale news corpora such as Gigaword (Parker et al., 2011), some relation types (e.g., PER:CHARGES, and ORG:SUBSIDIARIES) seldom appear in the texts. Additionally, we consider new relation types such as PER:GIRL/BOYFRIEND and PER:NEIGHBOR that 1http://surdeanu.info/kbp2014/def.php. 4929 ID Subject Relation Type Object Inverse Relation TR (%) 1 PER per:positive impression NAME 70.4 2 PER per:negative impression NAME 60.9 3 PER per:acquaintance NAME per:acquaintance 22.2 4 PER per:alumni NAME per:alumni 72.5 5 PER per:boss NAME per:subordinate 58.1 6 PER per:subordinate NAME per:boss 58.1 7 PER per:client NAME 50.0 8 PER per:dates NAME per:dates 72.5 9 PER per:friends NAME per:friends 94.7 10 PER per:girl/boyfriend NAME per:girl/boyfriend 86.1 11 PER per:neighbor NAME per:neighbor 71.2 12 PER per:roommate NAME per:roommate 89.9 13 PER per:children⋆ NAME per:parents 85.4 14 PER per:other family⋆ NAME per:other family 52.0 15 PER per:parents⋆ NAME per:children 85.4 16 PER per:siblings⋆ NAME per:siblings 80.5 17 PER per:spouse⋆ NAME per:spouse 86.7 18 PER per:place of residence⋆⋆ NAME gpe:residents of place 42.9 19 PER per:place of birth⋆⋆ NAME gpe:births in place 100.0 20 PER per:visited place NAME gpe:visitors of place 43.0 21 PER per:origin⋆ NAME 3.8 22 PER per:employee or member of⋆ NAME org:employees or members 47.2 23 PER per:schools attended⋆ NAME org:students 37.5 24 PER per:works NAME 27.0 25 PER per:age⋆ VALUE 0.0 26 PER per:date of birth⋆ VALUE 66.7 27 PER per:major STRING 50.0 28 PER per:place of work STRING 45.1 29 PER per:title⋆ STRING 0.5 30 PER per:alternate names⋆ NAME/STRING 0.7 31 PER per:pet NAME/STRING 0.3 32 GPE gpe:residents of place⋆⋆ NAME per:place of residence 42.9 33 GPE gpe:births in place⋆⋆ NAME per:place of birth 100.0 34 GPE gpe:visitors of place NAME per:visited place 43.0 35 ORG org:employees or members NAME per:employee or member of 47.2 36 ORG org:students⋆ NAME per:schools attended 37.5 37 NAME unanswerable NAME/STRING/VALUE — Table 2: Relation Types in DialogRE. Relation types with ⋆represent the existing relation types defined in the TAC-KBP SF task, and we combine three SF fine-grained relation types about cities, states, and countries in a single relation type with ⋆⋆. TR: Trigger ratio, representing the percentage of relational triples of a certain relation type that are accompanied by triggers. frequently appear in Friends. We list all 36 relation types that have at least one relational instance in the transcripts in Table 2 and provide definitions and examples of new relation types in Appendix A.1. 2.2 Annotation We focus on the annotation of relational triples (i.e., (subject, relation type, object)) in which at least one of the arguments is a named entity. We regard an uninterrupted stream of speech from one speaker and the name of this speaker as a turn. As we follow the TAC-KBP guideline to annotate relation types and design new types, we use internal annotators (two authors of this paper) who are familiar with this task. For a pilot annotation, annotator A annotates relational triples in each scene in all transcripts and form a dialogue by extracting the shortest snippet of contiguous turns that covers all annotated relational triples and sufficient supportive contexts in this scene. The guidelines are adjusted during the annotation.2 We prefer to use speaker name (i.e., the first word or phrase of a turn, followed by a colon) as one argument of a speaker-related triple if the corresponding full names or alternate names of the speaker name also appear in the same dialogue, except for relation PER:ALTERNATE NAMES in which both mentions should be regarded as arguments. For an argument pair (i.e., (subject, object)), there may exist multiple relations between them, and we annotate all instances of all of them. For each 2As the pilot annotation only involves one annotator, we admit there may exist a certain degree of bias in defining new relation types and labeling argument pairs. 4930 triple, we also annotate its trigger: the smallest extent (i.e., span) of contiguous text in the dialogue that most clearly indicates the existence of the relation between two arguments. If there exist multiple spans that can serve as triggers, we only keep one for each triple. For relation types such as PER:TITLE and PER:ALTERNATE NAMES, it is difficult to identify such supportive contexts, and therefore we leave their triggers empty. For each relational triple, we annotate its inverse triple if its corresponding inverse relation type exists in the schema (e.g., PER:CHILDREN and PER:PARENTS) while the trigger remains unchanged. In the second process, annotator B annotates the possible relations between candidate pairs annotated by annotator A (previous relation labels are hidden). Cohen’s kappa among annotators is around 0.87. We remove the cases when annotators cannot reach a consensus. On average, each dialogue in DialogRE contains 4.5 relational triples and 12.9 turns, as shown in Table 3. See Table 1 for relational triple examples (R1, R2, and R3). DialogRE Average dialogue length (in tokens) 225.8 Average # of turns 12.9 Average # of speakers 3.3 Average # of sentences 21.8 Average # of relational instances 4.5 Average # of no-relation instances 1.2 Table 3: Statistics per dialogue of DialogRE. 2.3 Negative Instance Generation, Data Split, and Speaker Name Anonymization After our first round of annotation, we use any two annotated arguments associated with each dialogue to generate candidate relational triples, in which the relation between two arguments is unanswerable based on the given dialogue or beyond our relation schema. We manually filter out candidate triples for which there is “obviously” no relation between an argument pair in consideration of aspects such as argument type constraints (e.g., relation PER:SCHOOLS ATTENDED can only exist between a PER name and an ORG name). After filtering, we keep 2,100 triples in total, whose two arguments are in “no relation”, and we finally have 10,168 triples for 1,788 dialogues. We randomly split them at the dialogue level, with 60% for training, 20% for development, and 20% for testing. The focus of the proposed task is to identify relations between argument pairs based on a dialogue, rather than exploiting information in DialogRE beyond the given dialogue or leveraging external knowledge to predict the relations between arguments (e.g., characters) specific to a particular television show. Therefore, we anonymize all speaker names (Section 2.2) in each dialogue and annotated triples and rename them in chronological order within the given dialogue. For example, S1 and S2 in Table 1 represent the original speaker names “Rachel” and “Phoebe”, respectively. 3 Data Comparisons and Discussions 3.1 Comparison Between DialogRE and SF As a pilot study, we examine the similarities and differences between dialogue-based and traditional relation extraction datasets that are manually annotated. We compare DialogRE with the official SF (2013-2014) dataset (Surdeanu, 2013; Surdeanu and Ji, 2014) as 47.2% of relation types in DialogRE originate from the SF relation types (Section 2.1), and 92.2% of the source documents in it that contain ground truth relational triples are formally written newswire reports (72.8%) or well-edited web documents (19.4%) compared to the remaining documents from discussion fora. We show the relation distributions in DialogRE and SF in Figure 1 and Figure 2 (Appendix A.2), respectively. Half of the top ten relation types in DialogRE are newly defined (PER:GIRL/BOYFRIEND, PER:POSITIVE(NEGATIVE) IMPRESSION, PER:FRIENDS, and PER:ROOMMATE), partially justifying the need for new relation types. Argument Type: Based on the predefined SF and DialogRE relation types, a subject is expected to be an entity of type PER, ORG, or geo-political entity (GPE). Notably, subjects of most relational triples (96.8% vs. 69.7% in the SF dataset) in DialogRE are person names. The coarse-grained object type is entity, string, or value (i.e., a numerical value or a date). As shown in Table 4, we observe that a higher proportion (80.1%) of objects are entities in DialogRE compared to that in SF (65.3%). DialogRE SF Entity 80.1 (6,460) 65.3 (2,167) String 18.9 (1,524) 25.4 (843) Value 1.0 (84) 9.2 (306) Table 4: Coarse-grained object type distributions (%) of DialogRE and SF with frequencies in brackets. 4931 In particular, the subjects of 77.3% of relational triples are speaker names, and more than 90.0% of relational triples contain at least one speaker argument. The high percentage of “speaker-centric” relational triples and the low percentage of ORG and GPE arguments in DialogRE is perhaps because the transcripts for annotation are from a single situation comedy that involves a small group of characters in a very limited number of scenes (see more discussions in Section 5.3). Distance Between Argument Pairs: It has been shown that there is a longer distance between two arguments in the SF dataset (Surdeanu, 2013; Huang et al., 2017) compared to that in many widely used human-annotated relation extraction datasets such as ACE (Doddington et al., 2004) and SemEval (Hendrickx et al., 2010). However, it is not trivial to compute an accurate distance between two arguments in a dialogue, especially for cases containing arguments that are speaker names. We instead consider different types of distances (e.g., average and minimum) between two argument mentions in a dialogue. We argue that DialogRE exhibits a similar level of difficulty as SF from the perspective of the distance between two arguments. 41.3% of arguments are separated by at least seven words even considering the minimum distance, and the percentage can reach as high as 96.5% considering the average distance, contrast with 46.0% in SF (Huang et al., 2017) and 59.8% in a recently released cross-sentence relation extraction dataset DocRED, in which Wikipedia articles serve as documents (Yao et al., 2019). Note that the provenance/evidence sentences in SF and DocRED are provided by automated systems or annotators. Also, 95.6% of relational triples from an annotated subset of DialogRE (Section 5.2) require reasoning over multiple sentences in a dialogue, compared with 40.7% in DocRED (Table 7). See Figure 3 in Appendix A.3 for more details. 3.2 Comparison Between DialogRE and Existing Relational Triples We also collect 2,341 relational triples related to Friends, which are summarized by a community of contributors, from a collaborative encyclopedia.3 We remove triples of content-independent relation types such as DIRECTED BY, GUEST STARS, and NUMBER OF EPISODES. 3https://friends.fandom.com/wiki/Friends. We find that 93.8% of all 224 relation types in these triples can be mapped to one of the 36 relation types in our relation schema (e.g., HUSBAND, EX-HUSBAND, and WIFE can be mapped to PER:SPOUSE) except for the remaining relatively rare or implicit relation types such as PROM DATE and GENDER, and KISSED, demonstrating the relation schema we use for annotation is capable of covering most of the important relation types labeled by the encyclopedia community of contributors. On the other hand, the relatively small number of the existing triples and the moderate size of our annotated triples in DialogRE may suggest the low information density (Wang and Liu, 2011) in conversational speech in terms of relation extraction. For example, the average annotated triple per sentence in DialogRE is merely 0.21, compared to other exhaustively annotated datasets ACE (0.73) and KnowledgeNet (Mesquita et al., 2019) (1.44), in which corpora are formal written news reports and Wikipedia articles, respectively. 3.3 Discussions on Triggers As annotated triggers are rarely available in existing relation extraction datasets (Aguilar et al., 2014), the connections between different relation types and trigger existence are under-investigated. Relation Type: In DialogRE, 49.6% of all relational triples are annotated with triggers. We find that argument pairs are frequently accompanied by triggers when (1) arguments have the same type such as PER:FRIENDS, (2) strong emotions are involved (e.g., PER:POSITIVE(NEGATIVE) IMPRESSION), or (3) the relation type is related to death or birth (e.g., GPE:BIRTHS IN PLACE). In comparison, a relation between two arguments of different types (e.g., PER:ORIGIN and PER:AGE) is more likely to be implicitly expressed instead of relying on triggers. This is perhaps because there exist fewer possible relations between such an argument pair compared to arguments of the same type, and a relatively short distance between such an argument pair might be sufficient to help the listeners understand the message correctly. For each relation type, we report the percentage of relational triples with triggers in Table 2. Argument Distance: We assume the existence of triggers may allow a longer distance between argument pairs in a text as they help to decrease ambiguity. This assumption may be empirically 4932 validated by the longer average distance (68.3 tokens) between argument pairs with triggers in a dialogue, compared to the distance (61.2 tokens) between argument pairs without any triggers. 4 Task Formulations and Methods 4.1 Dialogue-Based Relation Extraction Given a dialogue D = s1 : t1, s2 : t2, . . . , sm : tm and an argument pair (a1, a2), where si and ti denote the speaker ID and text of the ith turn, respectively, and m is the total number of turns, we evaluate the performance of approaches in extracting relations between a1 and a2 that appear in D in the following two settings. Standard Setting: As the standard setting of relation extraction tasks, we regard dialogue D as document d. The input is a1, a2, and d, and the expected output is the relation type(s) between a1 and a2 based on d. We adopt F1, which is the harmonic mean of precision (P) and recall (R), for evaluation. Conversational Setting: Instead of only considering the entire dialogue, here we can regard the first i ≤m turns of the dialogue as d. Accordingly, we propose a new metric F1c, the harmonic mean of conversational precision (Pc) and recall (Rc), as a supplement to the standard F1. We start by introducing some notation that will be used in the definition of F1c. Let Oi denote the set of predicted relation types when the input is a1, a2, and the first i turns (i.e., d = s1 : t1, s2 : t2, . . . , si : ti). For an argument pair (a1, a2), let L denote its corresponding set of relation types that are manually annotated based on the full dialogue. R represents the set of 36 relation types. By definition, Oi, L ⊆R. We define that auxiliary function (x) returns m if x does not appear in D. Otherwise, it returns the index of the turn where x first appears. We define auxiliary function ı(r) as: (i) For each relation type r ∈L, if there exists an annotated trigger for r, ı(r) = (λr) where λr denotes the trigger. Otherwise, ı(r) = m. (ii) For each r ∈ R\L, ı(r) = 1. We define the set of relation types that are evaluable based on the first i turns by Ei: Ei = {r | i ≥max{(a1), (a2), ı(r)}} (1) The interpretation of Equation 1 is that given d containing the first i turns in a dialogue, relation type r associated with a1 and a2 is evaluable if a1, a2, and the trigger for r have all been mentioned in d. The definition is based on our assumption that we can roughly estimate how many turns we require to predict the relations between two arguments based on the positions of the arguments and triggers, which most clearly express relations. See Section 5.2 for more discussions. The conversational precision and recall for an input instance D, a1, and a2 are defined as: Pc(D, a1, a2) = Pm i=1 |Oi ∩L ∩Ei| Pm i=1 |Oi ∩Ei| (2) Rc(D, a1, a2) = Pm i=1 |Oi ∩L ∩Ei| Pm i=1 |L ∩Ei| (3) We average the conversational precision/recall scores of all instances to obtain the final conversational precision/recall. Pc = P D′,a′ 1,a′ 2 Pc(D′, a′ 1, a′ 2) P D′,a′ 1,a′ 2 1 (4) Rc = P D′,a′ 1,a′ 2 Rc(D′, a′ 1, a′ 2) P D′,a′ 1,a′ 2 1 (5) and F1c = 2 · Pc · Rc/(Pc + Rc). 4.2 Baselines Majority: If a given argument pair does not appear in the training set, output the majority relation type in the training set as the prediction. Otherwise, output the most frequent relation type associated with the two arguments in the training set. CNN, LSTM, and BiLSTM: Following previous work (Yao et al., 2019), we adapt three baselines (Zeng et al., 2014; Cai et al., 2016) that use different document encoders. We refer readers to Yao et al. (2019) for more details. BERT: We follow the framework of fine-tuning a pre-trained language model on a downstream task (Radford et al., 2018) and use BERT (Devlin et al., 2019) as the pre-trained model. We concatenate the given d and (a1, a2) with classification token [CLS] and separator token [SEP] in BERT as the input sequence [CLS]d[SEP]a1[SEP]a2[SEP]. We denote the final hidden vector corresponding to [CLS] as C ∈RH, where H is the hidden size. For each relation type i, we introduce a vector Wi ∈RH and obtain the probability Pi of the existence of i between a1 and a2 based on d by Pi = sigmoid(CW T i ). The cross-entropy loss is used. 4933 Method Dev Test F1 (σ) F1c (σ) F1 (σ) F1c (σ) Majority 38.9 (0.0) 38.7 (0.0) 35.8 (0.0) 35.8 (0.0) CNN 46.1 (0.7) 43.7 (0.5) 48.0 (1.5) 45.0 (1.4) LSTM 46.7 (1.1) 44.2 (0.8) 47.4 (0.6) 44.9 (0.7) BiLSTM 48.1 (1.0) 44.3 (1.3) 48.6 (1.0) 45.0 (1.3) BERT 60.6 (1.2) 55.4 (0.9) 58.5 (2.0) 53.2 (1.6) BERTS 63.0 (1.5) 57.3 (1.2) 61.2 (0.9) 55.4 (0.9) Table 5: Performance of relation extraction methods on DialogRE in both the standard and conversational settings. BERTS: We propose a modification to the input sequence of the above BERT baseline with two motivations: (1) help a model locate the start positions of relevant turns based on the arguments that are speaker names, and (2) prevent a model from overfitting to the training data. Formally, given an argument pair (a1, a2) and its associated document d = s1 : t1, s2 : t2, . . . , sn : tn, we construct ˆd = ˆs1 : t1, ˆs2 : t2, . . . , ˆsn : tn, where ˆsi is: ˆsi =      [S1] if si = a1 [S2] if si = a2 si otherwise (6) where [S1] and [S2] are two newly-introduced special tokens. In addition, we define ˆak (k ∈ {1, 2}) to be [Sk] if ∃i(si = ak), and ak otherwise. The modified input sequence to BERT is [CLS] ˆd[SEP]ˆa1[SEP]ˆa2[SEP]. In Appendix A.4, we investigate in three alternative input sequences. It is worth mentioning that a modification that does not disambiguate speaker arguments from other arguments performs substantially worse than the above speaker-aware modification. 5 Experiment 5.1 Implementation Details CNN, LSTM, and BiLSTM Baselines: The CNN/LSTM/BiLSTM encoder takes as features GloVe word embeddings (Pennington et al., 2014), mention embeddings, and type embeddings. We assign the same mention embedding to mentions of the same argument and obtain the type embeddings based on named entity types of the two arguments. We use spaCy4 for entity typing. Language Model Fine-Tuning: We use the uncased base model of BERT released by Devlin et al. (2019). We truncate a document when the input sequence length exceeds 512 and fine-tune BERT using a batch size of 24 and a learning rate of 3×10−5 4https://spacy.io/. for 20 epochs. Other parameters remain unchanged. The embeddings of newly-introduced special tokens (e.g., [S1]) are initialized randomly. 5.2 Results and Discussions We report the performance of all baselines in both the standard and conversational settings in Table 5. We run each experiment five times and report the average F1 and F1c along with standard deviation (σ). The fine-tuned BERT method already outperform other baselines (e.g., BiLSTM that achieves 51.1% in F1 on DocRED (Yao et al., 2019)), and our speaker-aware extension to the BERT baseline further leads to 2.7% and 2.2% improvements in F1 and F1c, respectively, on the test set of DialogRE, demonstrating the importance of tracking speakers in dialogue-based relation extraction. Conversational Metric: We randomly select 269 and 256 instances, which are associated with 50 dialogues from each of the dev and test sets, respectively. For each of relational instances (188 in total) that are previously labeled with triggers in the subsets, annotator A labels the smallest turn i∗such that the first i∗turns contain sufficient information to justify a relation. The average distance between i∗and our estimation max{(a1), (a2), ı(r)} in Equation (1) (Section 4.1) is only 0.9 turn, supporting our hypothesis that the positions of arguments and triggers may be good indicators for estimating the minimum turns for humans to make predictions. For convenience, we use BERT for the following discussions and comparisons. Ground Truth Argument Types: Methods in Table 5 are not provided with ground truth argument types considering the unavailability of this kind of annotation in practical use. To study the impacts of argument types on DialogRE, we report the performance of four methods, each of which additionally takes as input the ground truth argument types as previous work (Zhang et al., 2017; Yao et al., 2019). We adopt the same baseline for a direct comparison 4934 except that the input sequence is changed. In Method 1, we simply extend the original input sequence of BERT (Section 4.2) with newly-introduced special tokens that represent argument types. The input sequence is [CLS]d[SEP]τ1a1[SEP]τ2a2[SEP], where τi is a special token representing the argument type of ai (i ∈{1, 2}). For example, given a1 of type PER and a2 of type STRING, τ1 is [PER] and τ2 is [STRING]. In Method 2, we extend the input sequence of BERTS with τi defined in Method 1 (i.e., [CLS] ˆd[SEP]τ1ˆa1[SEP]τ1ˆa2[SEP]). We also follow the input sequence of previous single-sentence relation extraction methods (Shi and Lin, 2019; Joshi et al., 2020) and refer them as Method 3 and 4, respectively. We provide the implementation details in Appendix A.5. As shown in Table 6, the best performance achieved by Method 2 is not superior to that of BERTS, which does not leverage ground truth argument types. Therefore, we guess that ground truth argument types may only provide a limited, if at all positive, contribution to the performance on DialogRE. Method 1 Method 2 Method 3 Method 4 Dev 60.6 (0.4) 62.9 (1.2) 55.6 (2.4) 61.9 (1.4) Test 59.1 (0.7) 60.5 (1.9) 52.3 (3.2) 59.7 (0.6) Table 6: Performance (F1 (σ)) comparison of methods with considering the ground truth argument types. Ground Truth Triggers: We investigate what performance would be ideally attainable if the model could identify all triggers correctly. We append the ground truth triggers to the input sequence on the baseline, and the F1 of this model is 74.9%, a 16.4% absolute improvement compared to the BERT baseline. In particular, through the introduction of triggers, we observe a 22.9% absolute improvement in F1 on relation types whose inverse relation types are themselves (e.g., PER:ROOMMATE and PER:SPOUSE). These experimental results show the critical role of triggers in dialogue-based relation extraction. However, trigger identification is perhaps as difficult as relation extraction, and it is labor-intensive to annotate large-scale datasets with triggers. Future research may explore how to identify triggers based on a small amount of human-annotated triggers as seeds (Bronstein et al., 2015; Yu and Ji, 2016). 5.3 Error Analysis and Limitations We analyze the outputs on the dev set and find that BERT tends to make more mistakes when there exists an asymmetric inverse relation of the relation to be predicted compared to those that have symmetric inverse relations. For example, the baseline mistakenly predicts S2 as the subordinate of S1 based on the following dialogue: “. . . S2: Oh. Well, I wish I could say no, but you can’t stay my assistant forever. Neither can you Sophie, but for different reasons. S1: God, I am so glad you don’t have a problem with this, because if you did, I wouldn’t even consider applying. . . ”. Introducing triggers into the input sequence leads to a relatively small gain (11.0% in F1 on all types with an asymmetric inverse relation) perhaps because inverse relation types share the same triggers (e.g., “my assistant” serves as the trigger for both PER:BOSS and PER:SUBORDINATE). One possible solution may be the use of directed syntactic graphs constructed from the given dialogue, though the performance of coreference resolution and dependency parsing in dialogues may be relatively unsatisfying. A major limitation in DialogRE is that all transcripts for annotation are from Friends, which may limit the diversity of scenarios and generality of the relation distributions. It may be useful to leverage existing triples in knowledge bases (e.g., Fandom) for thousands of movies or TV shows using distant supervision (Mintz et al., 2009), considering the time-consuming manual annotation process. In addition, dialogues in Friends presents less variation based on linguistic features (Biber, 1991) than natural conversations; nonetheless, compared to other registers such as personal letters and prepared speeches, there are noticeable linguistic similarities between natural conversations and television dialogues in Friends (Quaglio, 2009). 6 Related Work Cross-Sentence Relation Extraction Datasets Different from the sentence-level relation extraction (RE) datasets (Roth and Yih, 2004; Hendrickx et al., 2010; Riedel et al., 2010; Zhang and Wang, 2015; Zhang et al., 2017; Han et al., 2018), in which relations are between two arguments in the same sentence, we focus on cross-sentence RE tasks (Ji et al., 2011; Surdeanu, 2013; Surdeanu and Ji, 2014) and present the first dialogue-based RE dataset, in which dialogues serve as input contexts instead of formally written sentences or documents. 4935 Task style/source of doc # rel cross rate◦ # doc # triples• —– distant supervision —– Peng et al. (2017) written/PubMed 4 75.2 960,000 140,661 DocRED (Yao et al., 2019) written/Wikipedia 96 n/a 101,873 881,298 T-REx (Elsahar et al., 2018) written/Wikipedia 353 n/a 3 million 11 million —– human annotation —– BC5CDR (Li et al., 2016) written/PubMed 1 n/a 1,500 2,434 DocRED (Yao et al., 2019) written/Wikipedia 96 40.7 5,053 56,354 KnowledgeNet (Mesquita et al., 2019) written/Wikipedia and others 15 n/a 4,991 13,425 DialogRE (this work) conversational/Friends 36 95.6 1,788 8,068 Table 7: Statistics of publicly available cross-sentence relation extraction datasets (◦: the percentage (%) of relational triples involving multiple sentences; •: not include no-relation argument pairs). We compare DialogRE and existing cross-sentence RE datasets (Li et al., 2016; Quirk and Poon, 2017; Yao et al., 2019; Mesquita et al., 2019) in Table 7. In this paper, we do not consider relations that take relations or events as arguments and are also likely to span multiple sentences (Pustejovsky and Verhagen, 2009; Do et al., 2012; Moschitti et al., 2013). Relation Extraction Approaches Over the past few years, neural models have achieved remarkable success in RE (Nguyen and Grishman, 2015b,a; Adel et al., 2016; Yin et al., 2017; Levy et al., 2017; Su et al., 2018; Song et al., 2018; Luo et al., 2019), in which the input representation usually comes from shallow neural networks over pre-trained word and character embeddings (Xu et al., 2015; Zeng et al., 2015; Lin et al., 2016). Deep contextualized word representations such as the ELMo (Peters et al., 2018) are also applied as additional input features to boost the performance (Luan et al., 2018). A recent thread is to fine-tune pre-trained deep language models on downstream tasks (Radford et al., 2018; Devlin et al., 2019), leading to further performance gains on many RE tasks (Alt et al., 2019; Shi and Lin, 2019; Baldini Soares et al., 2019; Peters et al., 2019; Wadden et al., 2019). We propose an improved method that explicitly considers speaker arguments, which are seldom investigated in previous RE methods. Dialogue-Based Natural Language Understanding To advance progress in spoken language understanding, researchers have studied dialoguebased tasks such as argument extraction (Swanson et al., 2015), named entity recognition (Chen and Choi, 2016; Choi and Chen, 2018; Bowden et al., 2018), coreference resolution (Chen et al., 2017; Zhou and Choi, 2018), emotion detection (Zahiri and Choi, 2018), and machine reading comprehension (Ma et al., 2018; Sun et al., 2019; Yang and Choi, 2019). Besides, some pioneer studies focus on participating in dialogues (Yoshino et al., 2011; Hixon et al., 2015) by asking users relation-related questions or using outputs of existing RE methods as inputs of other tasks (Kl¨uwer et al., 2010; Wang and Cardie, 2012). In comparison, we focus on extracting relation triples from human-human dialogues, which is still under investigation. 7 Conclusions We present the first human-annotated dialoguebased RE dataset DialogRE. We also design a new metric to evaluate the performance of RE methods in a conversational setting and argue that tracking speakers play a critical role in this task. We investigate the performance of several RE methods, and experimental results demonstrate that a speaker-aware extension on the best-performing model leads to substantial gains in both the standard and conversational settings. In the future, we are interested in investigating the generality of our defined schema for other comedies and different conversational registers, identifying the temporal intervals when relations are valid (Surdeanu, 2013) in a dialogue, and joint dialogue-based information extraction as well as its potential combinations with multimodal signals from images, speech, and videos. Acknowledgments We would like to thank the anonymous reviewers for their constructive comments and suggestions. References Heike Adel, Benjamin Roth, and Hinrich Sch¨utze. 2016. Comparing convolutional neural networks to 4936 traditional models for slot filling. In Proceedings of NAACL-HLT, pages 828–838, San Diego, CA. Jacqueline Aguilar, Charley Beller, Paul McNamee, Benjamin Van Durme, Stephanie Strassel, Zhiyi Song, and Joe Ellis. 2014. A comparison of the events and relations across ACE, ERE, TAC-KBP, and FrameNet annotation standards. In Proceedings of the Second Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 45– 53, Baltimore, MD. Christoph Alt, Marc H¨ubner, and Leonhard Hennig. 2019. Improving relation extraction by pre-trained language representations. In Proceedings of AKBC, Amherst, MA. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of ACL, pages 2895–2905, Florence, Italy. Douglas Biber. 1991. Variation across speech and writing. Cambridge University Press. Kevin Bowden, Jiaqi Wu, Shereen Oraby, Amita Misra, and Marilyn Walker. 2018. SlugNERDS: A named entity recognition tool for open domain dialogue systems. In Proceedings of LREC, pages 4462–4469, Miyazaki, Japan. Ofer Bronstein, Ido Dagan, Qi Li, Heng Ji, and Anette Frank. 2015. Seed-based event trigger labeling: How far can event descriptions get us? In Proceedings of ACL-IJCNLP, pages 372–376, Beijing, China. Rui Cai, Xiaodong Zhang, and Houfeng Wang. 2016. Bidirectional recurrent convolutional neural network for relation classification. In Proceedings of ACL, pages 756–765, Berlin, Germany. Roberta Catizone, Alexiei Dingli, and Robert Gaizauskas. 2010. Using dialogue corpora to extend information extraction patterns for natural language understanding of dialogue. In Proceedings of LREC, pages 2136–2140, Valletta, Malta. Henry Y Chen, Ethan Zhou, and Jinho D Choi. 2017. Robust coreference resolution and entity linking on dialogues: Character identification on tv show transcripts. In Proceedings of CoNLL, pages 216–225, Vancouver, Canada. Yu-Hsin Chen and Jinho D. Choi. 2016. Character identification on multiparty conversation: Identifying mentions of characters in TV shows. In Proceedings of SIGDIAL, pages 90–100, Los Angeles, CA. Jinho D. Choi and Henry Y. Chen. 2018. SemEval 2018 Task 4: Character identification on multiparty dialogues. In Proceedings of SemEval, pages 57–64, New Orleans, LA. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proeedings of NAACL-HLT, pages 4171–4186, Minneapolis, MN. Quang Do, Wei Lu, and Dan Roth. 2012. Joint inference for event timeline construction. In Proceedings of EMNLP-CoNLL, pages 677–687, Jeju Island, Korea. George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program – tasks, data, and evaluation. In Proceedings of LREC, pages 837–840, Lisbon, Portugal. Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Fr´ed´erique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of LREC, pages 3448– 3452, Miyazaki, Japan. Ralph Grishman. 2019. Twenty-five years of information extraction. Natural Language Engineering, 25(6):677–692. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of EMNLP, pages 4803–4809, Brussels, Belgium. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of SemEval, pages 33–38, Uppsala, Sweden. Ben Hixon, Peter Clark, and Hannaneh Hajishirzi. 2015. Learning knowledge graphs for question answering through conversational dialog. In Proceedings of NAACL-HLT, pages 851–861, Denver, CO. Lifu Huang, Avirup Sil, Heng Ji, and Radu Florian. 2017. Improving slot filling performance with attentive neural networks on dependency structures. In Proceedings of EMNLP, pages 2588–2597, Copenhagen, Denmark. Heng Ji, Ralph Grishman, and Hoa Trang Dang. 2011. Overview of the TAC2011 Knowledge Base Population Track. In Proceedings of TAC, Gaithersburg, MD. Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Griffitt, and Joe Ellis. 2010. Overview of the TAC 2010 knowledge base population track. In Proceedings of TAC, Gaithersburg, MD. 4937 Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77. Tina Kl¨uwer, Hans Uszkoreit, and Feiyu Xu. 2010. Using syntactic and semantic based relations for dialogue act recognition. In Proceedings of COLING, pages 570–578, Beijing, China. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of CoNLL, pages 333–342, Vancouver, Canada. Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database, 2016. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL, pages 2124–2133, Berlin, Germany. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of EMNLP, pages 3219–3232, Brussels, Belgium. Fan Luo, Ajay Nagesh, Rebecca Sharp, and Mihai Surdeanu. 2019. Semi-supervised teacher-student architecture for relation extraction. In Proceedings of the Third Workshop on Structured Prediction for NLP, pages 29–37, Minneapolis, MN. Kaixin Ma, Tomasz Jurczyk, and Jinho D. Choi. 2018. Challenging reading comprehension on daily conversation: Passage completion on multiparty dialog. In Proceedings of NAACL-HLT, pages 2039–2048, New Orleans, LA. Paul McNamee and Hoa Trang Dang. 2009. Overview of the TAC 2009 knowledge base population track. In Proceedings of TAC, Gaithersburg, MD. Filipe Mesquita, Matteo Cannaviccio, Jordan Schmidek, Paramita Mirza, and Denilson Barbosa. 2019. KnowledgeNet: A benchmark dataset for knowledge base population. In Proceedings of EMNLP-IJCNLP, pages 749–758, Hong Kong, China. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th ACL and the 4th IJCNLP of the AFNLP, pages 1003–1011, Suntec, Singapore. Alessandro Moschitti, Siddharth Patwardhan, and Chris Welty. 2013. Long-distance time-event relation extraction. In Proceedings of the IJCNLP, pages 1330–1338, Nagoya, Japan. Thien Huu Nguyen and Ralph Grishman. 2015a. Combining neural networks and log-linear models to improve relation extraction. arXiv preprint, cs.CL/1511.05926v1. Thien Huu Nguyen and Ralph Grishman. 2015b. Relation extraction: Perspective from convolutional neural networks. In Proceedings of the First Workshop on Vector Space Modeling for Natural Language Processing, pages 39–48, Denver, CO. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword fifth edition, linguistic data consortium. Linguistic Data Consortium. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. Transactions of the Association for Computational Linguistics, 5:101–115. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532– 1543, Doha, Qatar. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237, New Orleans, LA. Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of EMNLP-IJCNLP, pages 43–54, Hong Kong, China. James Pustejovsky and Marc Verhagen. 2009. SemEval-2010 task 13: Evaluating events, time expressions, and temporal relations (TempEval-2). In Proceedings of SEW, pages 112–116, Boulder, Colorado. Paulo Quaglio. 2009. Television dialogue: The sitcom Friends vs. natural conversation, volume 36. John Benjamins Publishing. Chris Quirk and Hoifung Poon. 2017. Distant supervision for relation extraction beyond the sentence boundary. In Proceedings of EACL, pages 1171– 1182, Valencia, Spain. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. In Preprint. Farzana Rashid and Eduardo Blanco. 2018. Characterizing interactions and relationships between people. In Proceedings of EMNLP, pages 4395–4404, Brussels, Belgium. 4938 Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of ECML-PKDD, pages 148–163, Barcelona, Spain. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of CoNLL at HLTNAACL, pages 1–8, Boston, MA. Peng Shi and Jimmy Lin. 2019. Simple BERT models for relation extraction and semantic role labeling. arXiv preprint, cs.CL/1904.05255v1. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. N-ary relation extraction using graphstate lstm. In Proceedings of EMNLP, pages 2226– 2235, Brussels, Belgium. Yu Su, Honglei Liu, Semih Yavuz, Izzeddin G¨ur, Huan Sun, and Xifeng Yan. 2018. Global relation embedding for relation extraction. In Proceedings of NAACL-HLT, pages 820–830, New Orleans, LA. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge dataset and models for dialogue-based reading comprehension. Transactions of the Association of Computational Linguistics, 7:217–231. Mihai Surdeanu. 2013. Overview of the TAC2013 knowledge base population evaluation: English slot filling and temporal slot filling. In Proceedings of TAC, Gaithersburg, MD. Mihai Surdeanu and Heng Ji. 2014. Overview of the english slot filling track at the TAC2014 knowledge base population evaluation. In Proceedings of TAC, Gaithersburg, MD. Kumutha Swampillai and Mark Stevenson. 2010. Intersentential relations in information extraction corpora. In Proceedings of LREC, pages 2637–2641, Valletta, Malta. Reid Swanson, Brian Ecker, and Marilyn Walker. 2015. Argument mining: Extracting arguments from online dialogue. In Proceedings of SIGDIAL, pages 217–226, Prague, Czech Republic. David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of EMNLP-IJCNLP, pages 5788– 5793, Hong Kong, China. Dong Wang and Yang Liu. 2011. A pilot study of opinion summarization in conversations. In Proceedings of ACL, pages 331–339, Portland, OR. Lu Wang and Claire Cardie. 2012. Focused meeting summarization via unsupervised relation extraction. In Proceedings of SIGDIAL, pages 304–313, Seoul, South Korea. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of EMNLP, pages 1785–1794, Lisbon, Portugal. Zhengzhe Yang and Jinho D Choi. 2019. FriendsQA: Open-domain question answering on tv show transcripts. In Proceedings of SIGDIAL, pages 188–197, Stockholm, Sweden. Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In Proceedings of ACL, pages 764–777, Florence, Italy. Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Schtze. 2017. Comparative study of cnn and rnn for natural language processing. arXiv preprint, cs.CL/1702.01923v1. Koichiro Yoshino, Shinsuke Mori, and Tatsuya Kawahara. 2011. Spoken dialogue system based on information extraction using similarity of predicate argument structures. In Proceedings of SIGDIAL, pages 59–66, Portland, OR. Dian Yu and Heng Ji. 2016. Unsupervised person slot filling based on graph mining. In Proceedings of ACL, pages 44–53, Berlin, Germany. Sayyed M Zahiri and Jinho D Choi. 2018. Emotion detection on tv show transcripts with sequence-based convolutional neural networks. In Proceedings of the AAAI Workshop on Affective Content Analysis, pages 44–51, New Orleans, LA. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of EMNLP, pages 1753–1762, Lisbon, Portugal. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 2335–2344, Dublin, Ireland. Dongxu Zhang and Dong Wang. 2015. Relation classification via recurrent neural network. arXiv preprint, cs.CL/1508.01006v2. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Positionaware attention and supervised data improve slot filling. In Proceedings of EMNLP, pages 35–45, Copenhagen, Denmark. Ethan Zhou and Jinho D Choi. 2018. They exist! introducing plural mentions to coreference resolution and entity linking. In Proceedings of COLING, pages 24–34, Santa Fe, NM. 4939 A Appendices A.1 Definitions of New Relation Types We follow the original guideline to annotate relation types in the TAC-KBP SF task (marked with ⋆) unless stated otherwise and define new relation types as follows except for self-explainable ones (e.g., PER:MAJOR, PER:FRIENDS, and PER:CLIENT). In this section, we keep the original speaker names in examples for better readability. ◦per:alternate names⋆: Names used to refer a person that are distinct from speaker names or the first name mention in the given dialogue. It is possible to provide correct objects for this relation type without any contextual information such as triggers. Alternate names may include nicknames, first name, aliases, stage names, alternate transliterations, abbreviations, alternate spellings, full names, and birth names. However, if the full name mention appears first, we do not regard a first/last name alone as a valid value. An alternate name can also be a single word or a noun phrase. ◦per:positive impression: Have a positive impression (psychological) towards an object (e.g., a person, a book, a team, a song, a shop, or location). A named entity is expected here. ◦per:negative impression: Have a negative impression (psychological) towards an object. A named entity is expected here. ◦per:acquaintance: A person one knows slightly (e.g., name), but who is not a close friend. ◦per:alumni: Two persons studied in the same school, college, or university, not necessarily during the same period. Two persons can be in different majors. Classmates or batchmates also belong to this relation type. ◦per:boss: In most cases, we annotate B as the boss of A when A directly reports to B and is managed by B at work. In the meantime, A is the subordinate of B. For example, we label (“Rachel”, per:boss, “Joanna”) and its corresponding trigger “assistant” based on dialogue D1. D1 Rachel: Oh, uh, Joanna I was wondering if I could ask you something. There’s an opening for an assistant buyer in Junior Miss... Joanna: Okay, but that would actually be a big step down for me. Rachel: Well, actually, I meant for me. The hiring committee is meeting people all day and... Joanna: Oh. Well, I wish I could say no, but you cant stay my assistant forever. Neither can you Sophie, but for different reasons. ◦ per:girl/boyfriend: A relatively long-standing relationship compared to PER:POSITIVE IMPRESSION and PER:DATES, including but not limited to ex-relationships, partners, and engagement. The fact that two people dated for one or several times alone cannot guarantee that there exists a PER:GIRL/BOYFRIEND relation between them; we label PER:DATES for such an argument pair, instead. ◦per:neighbor: A neighbor could be a person who lives in your apartment building whether they are next door to you, or not. A neighbor could also be in the broader sense of a person who lives in your neighborhood. ◦per:roommate: We regard that two persons are roommates if they share a living facility (e.g., an apartment or dormitory), and they are not family or romantically involved (e.g., per:spouse and per:girl/boyfriend). ◦ per:visited place: A person visits a place in a relatively short term of period (vs. PER:PLACE OF RESIDENCE). For example, we annotate (“Mike”, per:visited place, “Barbados”) in dialogue D2 and its corresponding trigger “coming to”. D2 Phoebe: Okay, not a fan of the tough love. Precious: I just can’t believe that Mike didn’t give me any warning. Phoebe: But he didn’t really know, you know. He wasn’t planning on coming to Barbados and proposing to me... Precious: He proposed to you? This is the worst birthday ever. ◦per:works: The argument can be a piece of art, a song, a movie, a book, or a TV series. ◦per:place of work: A location in the form of a string or a general noun phrase, where a person works such as “shop”. ◦per:pet: We prefer to use named entities as arguments. If there is no name associated with a pet, we keep its species (e.g., dog) mentioned in a dialogue. A.2 Relation Type Distribution A.3 Distance Between Argument Pairs A.4 Other Input Sequences We also experiment with the following three alternative input sequences on the BERT baseline: (1) [CLS]d#[SEP], (2) [CLS]d#[SEP]a1[SEP]a2[SEP], and (3) [CLS]d′′[SEP], where d# is obtained by 4940 2138 808 763 722 414 330 318 274 258 208 0 500 1000 1500 2000 2500 Figure 1: Relation type distribution in DialogRE. 548 326 278 144 115 113 112 106 103 99 0 150 300 450 600 Figure 2: Relation type distribution in SF (2013-2014). 0 20 40 60 80 100 ≥ (# of words between two arguments) 0.0 0.2 0.4 0.6 0.8 1.0 Percentage (%) average min max Figure 3: Number of words between two arguments within a dialogue in DialogRE. replacing subject/object mentions in d with special tokens [SUBJ] and [OBJ], and d′′ is obtained by surrounding each mention of ai (i ∈{1, 2}) in d with special tokens [Ai] and [/Ai] (Baldini Soares et al., 2019). The F1 of them is 50.9%, 58.8%, and 57.9%, respectively, substantially lower than that of BERTS (61.2%). A.5 Ground Truth Argument Type Method 3 follows the input sequence employed by Joshi et al. (2020). Specifically, we replace the argument mentions in document d with newlyintroduced special tokens that represent the subject/object and argument types. For example, if the subject type is PER and the object is STRING, we replace every subject mention in d with [SUBJ-PER] and every object mention with [OBJ-STRING]. Let d′ denote the new document. The input sequence is [CLS]d′[SEP]. Method 4 takes as input the sequence employed by Shi and Lin (2019). The input sequence is [CLS]d′[SEP]a1[SEP]a2[SEP], where d′ is defined in Method 3.
2020
444
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4941–4957 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4941 Facet-Aware Evaluation for Extractive Summarization Yuning Mao1, Liyuan Liu1, Qi Zhu1, Xiang Ren2, Jiawei Han1 1Department of Computer Science, University of Illinois at Urbana-Champaign, IL, USA 2Department of Computer Science, University of Southern California, CA, USA 1{yuningm2, ll2, qiz3, hanj}@illinois.edu [email protected] Abstract Commonly adopted metrics for extractive summarization focus on lexical overlap at the token level. In this paper, we present a facetaware evaluation setup for better assessment of the information coverage in extracted summaries. Specifically, we treat each sentence in the reference summary as a facet, identify the sentences in the document that express the semantics of each facet as support sentences of the facet, and automatically evaluate extractive summarization methods by comparing the indices of extracted sentences and support sentences of all the facets in the reference summary. To facilitate this new evaluation setup, we construct an extractive version of the CNN/Daily Mail dataset and perform a thorough quantitative investigation, through which we demonstrate that facet-aware evaluation manifests better correlation with human judgment than ROUGE, enables fine-grained evaluation as well as comparative analysis, and reveals valuable insights of state-of-the-art summarization methods.1 1 Introduction Text summarization has enjoyed increasing popularity due to its wide applications, whereas the evaluation of text summarization remains challenging and controversial. The most commonly used evaluation metric of summarization is lexical overlap, i.e., ROUGE (Lin, 2004), which regards the system and reference summaries as sequences of tokens and measures their n-gram overlap. However, recent studies (Paulus et al., 2017; Schluter, 2017; Kryscinski et al., 2019) reveal the limitations of ROUGE and find that in many cases, it fails to reach consensus with human judgment. Since lexical overlap only captures information 1Data can be found at https://github.com/ morningmoni/FAR. Reference: Three people in Kansas have died from a listeria outbreak. Lexical Overlap: But they did not appear identical to listeria samples taken from patients infected in the Kansas outbreak. (ROUGE-1 F1=37.0, multiple token matches but totally different semantics) Manual Extract: Five people were infected and three died in the past year in Kansas from listeria that might be linked to blue bell creameries products, according to the CDC. (ROUGE-1 F1=36.9, semantics covered but lower ROUGE due to the presence of other details) Reference: Chelsea boss Jose Mourinho and United manager Louis van Gaal are pals. Lexical Overlap: Gary Neville believes Louis van Gaal’s greatest achievement as a football manager is the making of Jose Mourinho. Manual Extract: The duo have been friends since they first worked together at Barcelona in 1997 where they enjoyed a successful relationship at the Camp Nou. (ROUGE Recall/F1=0, no lexical overlap at all) Table 1: Lexical overlap — finding the document sentence with the highest ROUGE against one reference sentence — could be misleading. Examples are from the CNN/Daily Mail dataset (Nallapati et al., 2016). coverage at the surface (token) level, ROUGE favors system summaries that share more tokens with the reference summaries. Nevertheless, such summaries may not always convey the desired semantics. For example, in Table 1, the document sentence with the highest ROUGE score has more lexical overlap but expresses rather different semantic meaning. In contrast, the sentence manually extracted from the document by our annotators, which conveys similar semantics, is over-penalized as it involves other details or uses alternative words. In this paper, we argue that the information coverage in summarization can be better evaluated by facet overlap, i.e., whether the system summary covers the facets in the reference summary. Specifically, we treat each reference sentence as a facet, identify document sentences that express the semantics of each facet as support sentences of the facet, and measure information coverage by FacetAware Recall (FAR), i.e., how many facets are covered. We focus on extractive summarization for the following two reasons. Theoretically, since extractive methods cannot paraphrase or compress the document sentences as abstractive methods, it is somewhat unfair to penalize them for extracting long sentences that cover the facets. Pragmatically, 4942 we can evaluate extractive methods automatically by comparing the indices of extracted sentences and support sentences. We denote the mappings from each facet (sentence) in the reference summary to its support sentences in the document as Facet-Aware Mappings (FAMs). FAMs can be used as labels indicating which sentences should be extracted but they are grouped with respect to each facet, while conventional extractive labels correspond to the entire reference summary rather than individual facets (detailed explanations in Sec. 2.1). Compared to treating one summary as a sequence of n-grams, facet-aware evaluation considers information coverage at a semantically richer granularity, and thus can contribute to a more accurate assessment on the summary quality. To verify the effectiveness of facet-aware evaluation, we construct an extractive version of the CNN/Daily Mail dataset (Nallapati et al., 2016) by annotating its FAMs (Sec. 2). We revisit state-ofthe-art extractive methods using this new extractive dataset (Sec. 3.2), the results of which show that FAR correlates better with human evaluation than ROUGE. We also demonstrate that FAMs are beneficial for fine-grained evaluation of both abstractive and extractive methods (Sec. 3.3). We then illustrate how facet-aware evaluation can be useful for comparing different extractive methods in terms of their capability of extracting salient and non-redundant sentences (Sec. 3.4). Finally, we explore the feasibility of automatic FAM creation by evaluating sentence regression approaches against the ground-truth annotations (i.e., FAMs), and generalize facet-aware evaluation to the entire CNN/Daily Mail dataset without any human annotation (Sec. 4). We believe that the summarization community will benefit from the proposed setup for better assessment of information coverage and gain deeper understandings of the current benchmark dataset and state-of-the-art methods through our analysis. Contributions. (1) We propose a facet-aware evaluation setup that better assesses information coverage for extractive summarization. (2) We build the first dataset designed specifically for extractive summarization by creating facet-aware mappings from reference summaries to documents. (3) We revisit state-of-the-art summarization methods in the proposed setup and discover valuable insights. (4) To our knowledge, our work is also the first thorough quantitative analysis regarding the charFacet-Aware Evaluation Reference Summary R reference sentence r2 reference sentence r1 Document D document sentence d1 document sentence d2 document sentence d3 document sentence d4 document sentence d5 r1 S1 d1 S2 d3 S3 d4 S1 d2 d4 r1r2 Extracted Summary E document sentence d1 document sentence d2 document sentence d3 FAM Annotation FAM 1 FAM 2 *each is a support group ✅ ✅ ❌ FAR Calculation Any( , , ) Any( ) r1 r2 1 1 1 2 Sj i Figure 1: An illustration of facet-aware evaluation. Two of three support groups of facet 1 (r1) are covered. Facet 2 (r2) cannot be covered as document sentence 4 (d4) is missing in the extracted summary. The illustration corresponds to the example in Sec. 3.1. acteristics of the CNN/Daily Mail dataset. 2 Dataset Creation In this section, we describe the process of creating an extractive summarization dataset to facilitate facet-aware evaluation, which involves annotating FAMs between the documents and abstractive reference summaries. We first formalize the FAMs and then describe the FAM annotation on the CNN/Daily Mail dataset (Nallapati et al., 2016). 2.1 FAMs: Facet-Aware Mappings We denote one document-summary pair as {D, R}, where D = [d1, d2, ..., dD], R = [r1, r2, ..., rR], and D, R denote the numbers of document sentences and reference sentences, respectively. We conceptualize facet as one unique semantic aspect presented in the summary. In practice, we hypothesize that each reference sentence ri corresponds to one facet.2 We define support sentences as the sentences in the document that express the semantics of one facet ri. We define support group S of facet ri as a set of support sentences that can fully cover the information of ri. For each facet ri in the reference summary, we try to find all its support sentences in the document and put them into support groups. Since we focus on single-document 2It is possible to define facet at sub-sentence or multisentence level as in Pyramid (Nenkova and Passonneau, 2004). However, such definitions inevitably incur more annotation effort and lower inter-annotator agreement, while the current definition balances cost and effectiveness. 4943 Category #Samples #Facets Example (full documents, reference summaries, and the FAMs can be found in Appendix C) Noise (N) 41 (27.3%) 137 (27.1%) • Reference: “Furious 7” opens Friday. (unimportant detail) • Reference: Click here for all the latest Floyd Mayweather vs Manny Pacquiao news. (not found in the document) • Reference: Vin Diesel: “This movie is more than a movie”. (random quotation) • Reference: “I had a small moment of awe,” she said. (random quotation) Low Abstraction (L) 89 (59.3%) 310 (61.2%) M=1: 275 (88.7%) M=2: 35 (11.3%) • Reference: Willis never trademarked her most-famous work, calling it “my gift to the city”. • Support: Willis never trademarked her most-famous work, calling it “my gift to the city.” (identical) • Reference: Thomas K. Jenkins, 49, was arrested last month by deputies with the Prince George’s County sheriff’s office, authorities said. • Support: Authorities said in a news release Thursday that 49-year-old Thomas K. Jenkins of capitol heights, Maryland, was arrested last month by deputies with the Prince George’s County sheriff’s office. (compression) High Abstraction (H) 20 (13.3%) 59 (11.7%) • Reference: College-bound basketball star asks girl with down syndrome to high school prom. Pictures of the two during the “prom-posal” have gone viral. (highly abstractive) • Reference: While Republican Gov. Asa Hutchinson was weighing an Arkansas religious freedom bill, Walmart voiced its opposition. Walmart and other high-profile businesses are showing their support for gay and lesbian rights. (unable to find support sentences) Table 2: Category breakdown of Facet-Aware Mappings (FAMs). Nearly 60% samples are of low abstraction while more than a quarter of samples contain noisy facets. M denotes the average number of support sentences. summarization in this work, most facets only have one support group. But some may contain multiple and extracting any of them would suffice (see example in Appendix C Table 10). Allowing multiple support groups also makes FAMs easily extendable to multi-document summarization where redundant sentences prevail. Formally, for each ri, we annotate a FacetAware Mapping (FAM) ri →{Si 1, Si 2, ..., Si N}, where N is the number of support groups. Each Si j = {dI1, dI2, ..., dIMj } is a support group, where I1, I2, ..., IMj are the indices of support sentences and Mj is the number of support sentences in Si j. One illustrative example is presented in Fig. 1. The support sentences are likely to be verbose, but we consider whether the support sentences express the semantics of the facet regardless of their length.3 The reason is that we believe extractive summarization should focus on information coverage since it cannot alter the original sentences and once salient sentences are extracted, one can then compress them in an abstractive manner (Chen and Bansal, 2018; Hsu et al., 2018). Relation w. Extractive Labels. Extractive methods (Nallapati et al., 2017; Chen and Bansal, 2018; Narayan et al., 2018c) typically require binary labels of every document sentence indicating whether it should be extracted during model training. Such labels are called extractive labels and usually created heuristically based on reference summaries 3We ignore coreference (e.g., “he” vs. “the writer”) and short fragments when considering the semantics of one facet, as we found that the wording of the reference summaries regarding such choices is also capricious. since existing datasets do not provide extractive labels but only abstractive references. Our assumption that each reference sentence corresponds to one facet is similar to that during the creation of extractive labels. The major differences are that (1) We allow an arbitrary number of support sentences while extractive labels usually limit to one support sentence for each reference sentence, i.e., we do not specify Mj. For example, we would put two support sentences to one support group if they are complementary and only combining them can cover the facet. (2) We try to find multiple support groups (N > 1), as there could be more than one set of support sentences that cover the same facet. In contrast, there is no notion of support group in extractive labels as they inherently form one such group (N = 1). Also, we allow N = 0 if such a mapping cannot be found even by humans. (3) The FAMs are more accurate as they are created by human annotators while extractive methods use sentence regression approaches (which we evaluate in Sec. 4.1) to obtain extractive labels approximately. Comparison w. SCUs. Some may mistake FAMs for Summarization Content Units (SCUs) in Pyramid (Nenkova and Passonneau, 2004), but they are different in that (1) FAMs utilize both the documents and reference summaries while SCUs ignore the documents; (2) FAMs are at the sentence level and can thus be used to automatically evaluate extractive methods once created — simply by matching sentence indices we can know how many facets are covered, while SCUs have to be manually annotated for each system (refer to Appendix B Fig. 4). 4944 2.2 Creation of Extractive CNN/Daily Mail To verify the effectiveness of facet-aware evaluation, we annotate the FAMs of 150 documentsummary pairs from the test set of CNN/Daily Mail. Specifically, we take the first 50 samples in the test set, the 20 samples used in the human evaluation of Narayan et al. (2018c), and randomly draw another 80 samples. The annotators are graduate students who are required to read through the document and mark support groups for each facet. The most similar document sentences to each facet found by ROUGE and cosine similarity of average word embeddings are provided as the baselines for annotation. 310 non-empty FAMs are created by three annotators with high agreement (pairwise Jaccard index 0.714) and further verified to reach consensus.4 On average, 5.44 (6.04 non-unique) document sentences are included as the support sentences in each document-summary pair. To summarize, we found that the facets can be divided into three categories based on their quality and degree of abstraction as follows. Noise: The facet is noisy and irrelevant to the main content, either because the document itself is too hard to summarize (e.g., a report full of quotations) or the human editor was too subjective when writing the summary (See et al., 2017). Another possible reason is that the so-called “summaries” in CNN/Daily Mail are in fact “story highlights”, which seems reasonable to include certain details. We found that 41/150 (27.3%) samples have noisy facet(s), indicating that the reference summaries of CNN/Daily Mail are rather noisy. We show in Sec. 3.2 that existing summarization methods perform poorly on this category, which justifies our judgment of “noisy facets” from another aspect. Also note that there would not be a “noise” category in a “clean” dataset. However, given the creation process of popular summarization datasets (Nallapati et al., 2016; Narayan et al., 2018b), it is unlikely that all of their samples are of high quality. Low Abstraction: The facet can be mapped to its support sentences. We denote the (rounded) average number of support sentences for each facet as M (= 1 N PN j=1 Mj, N represents the number of support groups). As shown in Table 2, all the facets with non-empty FAMs in CNN/Daily Mail are paraphrases or compression of one to two sentences in 4One alternative way is to store multiple FAMs for each sample (like multiple reference summaries) and average their results as in ROUGE. the document without much abstraction. High Abstraction: The facet cannot be mapped to its support sentences (N = 0) by humans, which indicates that the writing of the facet requires deep understanding of the document rather than simply reorganizing several sentences. The proportion of this category (13.3%) also indicates how often extractive methods would not work (well) on CNN/Daily Mail. We found it easier than previously believed to create the FAMs on CNN/Daily Mail, as it is uncommon (average number of support groups N = 1.6) to detect multiple sentences with similar semantics. In addition, most support groups only have one or two support sentences with large lexical overlap, which coincides with the fact that extractive methods work quite well on CNN/Daily Mail and abstractive methods are often hybrid and learn to copy words directly from the documents. That said, we try to automate the FAM creation and scale facet-aware evaluation to the whole test set of CNN/Daily Mail using machine-created FAMs (Sec. 4). 3 Facet-Aware Evaluation In this section, we introduce the facet-aware evaluation setup (Sec. 3.1) and demonstrate its effectiveness by revisiting state-of-the-art summarization methods under this new setup (Sec. 3.2). We then illustrate the additional benefits of facet-aware evaluation, including fine-grained evaluation (Sec. 3.3) and comparative analysis (Sec. 3.4). 3.1 Proposed Metrics As current extractive methods are facet-agnostic, i.e., their output is not nested (organized by facets) but a flat set of extracted sentences, we consider one facet as being “covered” if any of its support groups can be found in the whole extracted summary. Formally, we define the Facet-Aware Recall (FAR) as follows. FAR = PR i=1 Any(I(Si 1, E), ..., I(Si N, E)) R , where Any(X) returns 1 if any x ∈X is 1 and 0 otherwise, I(X, Y) returns 1 if set X ⊂Y and 0 otherwise, E denotes the set of extracted sentences, and R is the number of facets. Intuitively, FAR does not over-penalize extractive methods for extracting long sentences as long as the extracted sentences cover the semantics of the facets. FAR 4945 also treats each facet equally, whereas ROUGE weighs higher the facets with more tokens since they are more likely to incur lexical overlap. To further measure model capability of retrieving salient (support) sentences without considering redundancy as FAR does, we merge all the support sentences of one document-summary pair to one single support set and define the Support-Aware Recall (SAR) as follows. SAR is used in Sec. 3.4 for the comparative analysis of extractive methods. SAR = | ∪R i=1 ∪N j=1Si j ∩E| | ∪R i=1 ∪N j=1Si j| . Example (Fig. 1). Assume that R = 2, r1 → {{d1}, {d3}, {d4}}, r2 →{{d2, d4}}, and E = {d1, d2, d3}. Then FAR = 1 2 as E covers {d1} (or {d3}) for r1 but cannot cover {d2, d4} for r2. SAR = |{d1,d2,d3,d4}∩{d1,d2,d3}| |{d1,d2,d3,d4}| = 3 4. Note that d1 and d3 are salient (support sentences) and both considered positive in SAR, while they only contribute to the coverage of one facet in FAR. 3.2 Automatic Evaluation with FAR By utilizing the low abstraction category on the extractive CNN/Daily Mail dataset, we revisit extractive methods to evaluate how they perform on information coverage. Specifically, we compare Lead-3 (that extracts the first three document sentences), FastRL(E) (E for extractive only) (Chen and Bansal, 2018), BanditSum (Dong et al., 2018), NeuSum (Zhou et al., 2018), Refresh (Narayan et al., 2018c), and UnifiedSum(E) (Hsu et al., 2018) using both ROUGE and FAR. For a fair comparison, each method extracts three sentences (|E| = 3).5 Results on Neural Extractive Methods. As shown in Table 3, there is almost no discrimination among the last four methods under ROUGE-1 F1, and the rankings under ROUGE-1/2/L often contradict with each other. The observations on ROUGE Precision/Recall are similar. We provide them as well as more comparative analysis under facet-aware evaluation in Sec. 3.4. For facet coverage, the upper bound of FAR by extracting 3 sentences (Oracle, given the ground-truth FAMs) is 84.8, much higher than all the compared methods. The best performing extractive method under FAR 5Extracting all the sentences results in a perfect FAR, which is expected as FAR measures recall. One can also normalize FAR by the number of extracted sentences. is UnifiedSum(E), which indicates that it covers the most facets semantically. Method ROUGE-1 ROUGE-2 ROUGE-L FAR Lead-3 41.9 19.6 34.8 50.6 FastRL(E) 41.6 20.3 35.5 50.8 BanditSum 42.7 20.2 35.8 44.7 NeuSum 42.7 22.1 36.4 51.2 Refresh 42.8 20.3 39.3 51.3 UnifiedSum(E) 42.6 20.7 35.5 54.8 Oracle 53.8 32.1 48.1 84.8 Table 3: Performance comparison of extractive methods under ROUGE F1 and Facet-Aware Recall (FAR). FAR’s Correlation w. Human Evaluation. Although FAR is supposed to be favored as the FAMs are manually labeled and indicate accurately whether one sentence should be extracted (assuming the annotations are in high quality), to further verify that FAR correlates with human preference, we ask the annotators to rank the outputs of UnifiedSum(E), NeuSum, and Lead-3 and measure ranking correlation. As listed in Table 4, we observe that the method with the most 1st ranks in the human evaluation coincides with FAR. We also find that FAR has higher Spearman’s coefficient ρ than ROUGE (0.457 vs. 0.44).6 Method 1st 2nd 3rd Lead-3 26.8% 46.3% 26.8% NeuSum 29.3% 39.0% 31.7% UnifiedSum(E) 37.8% 52.4% 9.8% Table 4: Proportions of system ranking in human evaluation. FAR shows better human correlation than ROUGE and prefers UnifiedSum(E). 3.3 Fine-grained Evaluation One benefit of facet-aware evaluation is that we can employ the category breakdown of FAMs for fine-grained evaluation, namely, how one method performs on noisy / low abstraction / high abstraction samples, respectively. Any metric of interest can be used for this fine-grained analysis. Here we consider ROUGE and additionally evaluate several abstractive methods: PG (PointerGenerator) (See et al., 2017), FastRL(E+A) (extractive+abstractive) (Chen and Bansal, 2018), and UnifiedSum(E+A) (Hsu et al., 2018). As shown in Table 5, extractive methods perform poorly on high abstraction samples, which 6We expect that one can observe larger gains on datasets with less lexical overlap than CNN/Daily Mail. 4946 is somewhat expected since they cannot perform abstraction. Abstractive methods, however, also exhibit a huge performance gap between low and high abstraction samples, which suggests that existing abstractive methods achieve decent overall performance mainly by extraction rather than abstraction, i.e., performing well on low abstraction samples of CNN/Daily Mail. We also found that all the compared methods perform much worse on the documents with “noisy” reference summaries, implying that the randomness in the reference summaries might introduce noise to both model training and evaluation. Note that although the sample size is relatively small, we observe consistent results when analyzing different subsets of the data. Method N L H L + H Extractive Lead-3 34.1 41.9 24.9 38.9 FastRL(E) 33.5 41.6 31.2 39.8 BanditSum 35.3 42.7 34.1 41.2 NeuSum 34.9 42.7 30.7 40.6 Refresh 35.7 42.8 32.2 40.9 UnifiedSum(E) 34.2 42.6 31.3 40.6 Abstractive PG 32.6 40.6 27.5 38.2 FastRL(E+A) 35.1 40.8 29.9 38.8 UnifiedSum(E+A) 34.2 42.4 29.2 40.1 Table 5: ROUGE-1 F1 of extractive and abstractive methods on noisy (N), low abstraction (L), high abstraction (H), and high quality (L + H) samples. 3.4 Comparative Analysis Facet-aware evaluation is also beneficial for comparing extractive methods regarding their capability of extracting salient and non-redundant sentences. We show the FAR, SAR, and ROUGE scores of various extractive methods in Fig. 2. We next illustrate how one can leverage these scores under different metrics for comparative analysis. For brevity, we denote ROUGE Precision and ROUGE Recall as RP and RR, respectively. FAR vs. ROUGE. By comparing the scores of extractive methods under FAR and ROUGE, one can discover useful insights. For example, we observe that the performance of Refresh, FastRL(E), NeuSum are quite close to Lead-3 under FAR, but they generally have higher RR. Such results imply that these methods might have learned to extract sentences that are not the support sentences, i.e., sentences that do not directly contribute to the facet coverage, but still have lexical overlap with reference summaries. It is also likely that they extract redundant support sentences that happen to have token matches with other facets. Overall, UnifiedSum(E) covers the most facets (high FAR) and also has decent lexical matches (high RR). SAR vs. ROUGE. By comparing SAR with RP, one can find that UnifiedSum(E) extracts salient but possibly redundant support sentences, as it has higher SAR but similar RP to Lead-3. On the contrary, Refresh has similar SAR with Lead-3 but higher RP, which again implies that it might extract non-support sentences that contain token matches but irrelevant semantics. Similarly, BanditSum is capable of lexical overlap (high RP), but the matched tokens may not contribute much to the major semantics (low SAR). FAR vs. SAR. By comparing FAR with SAR (Fig. 3), we observe that FastRL(E) and NeuSum have FAR scores similar to Lead-3 and Refresh, but higher SAR scores. One possible explanation is that FastRL(E) and NeuSum are better at extracting support sentences, but they do not handle redundancy very well, i.e., the extracted sentences might contain multiple support groups of the same facet (recall the example in Sec. 3.1). For instance, there are 30.3% extracted summaries of FastRL(E) that can cover more than one support group of the same facet while there are 19.1% for Lead-3. 4 Evaluation without Human Annotation In the previous sections, we have demonstrated the effectiveness and benefits of facet-aware evaluation. One remaining issue that might prevent facet-aware evaluation from scaling is the need of human-annotated FAMs. We thus study the feasibility of automatic FAM creation with sentence regression and present a pilot study of conducting facet-aware evaluation without any human annotation in this section. 4.1 Sentence Regression for FAM Creation Similar to most benchmark constructions, facetaware evaluation requires one-time annotation — once the FAMs are annotated, we can reuse them for automatic evaluation. That said, we explore various approaches to automate this one-time process. Specifically, we investigate whether facet-aware evaluation can be conducted without any human effort by utilizing sentence regression (Zopf et al., 2018) to automatically create the FAMs. Sentence regression is widely used to create extractive labels. Sentence regression approaches typ4947 0.45 0.50 0.55 FAR 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 ROUGE Recall (RR) ROUGE­1 Recall ROUGE­L Recall ROUGE­2 Recall Lead­3 FastRL(E) BanditSum NeuSum Refresh UnifiedSum(E) 0.45 0.50 0.55 FAR 0.20 0.25 0.30 0.35 ROUGE Precision (RP) ROUGE­1 Precision ROUGE­L Precision ROUGE­2 Precision 0.45 0.50 0.55 FAR 0.20 0.25 0.30 0.35 0.40 ROUGE F1 ROUGE­1 F1 ROUGE­L F1 ROUGE­2 F1 0.34 0.36 0.38 0.40 0.42 SAR 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 ROUGE Recall (RR) ROUGE­1 Recall ROUGE­L Recall ROUGE­2 Recall 0.34 0.36 0.38 0.40 0.42 SAR 0.20 0.25 0.30 0.35 ROUGE Precision (RP) ROUGE­1 Precision ROUGE­L Precision ROUGE­2 Precision 0.34 0.36 0.38 0.40 0.42 SAR 0.20 0.25 0.30 0.35 0.40 ROUGE F1 ROUGE­1 F1 ROUGE­L F1 ROUGE­2 F1 Figure 2: Performance of extractive methods under ROUGE, FAR, and SAR. The results under ROUGE-1/2/L often disagree with each other. UnifiedSum(E) generally performs the best in the facet-aware evaluation. 0.34 0.35 0.36 0.37 0.38 0.39 0.40 0.41 SAR 0.46 0.48 0.50 0.52 0.54 FAR Lead­3 FastRL(E) BanditSum NeuSum Refresh UnifiedSum(E) Figure 3: Comparison of extractive methods under FAR and SAR reflects their capability of extracting salient and non-redundant sentences. ically transform abstractive reference summaries to extractive labels heuristically using ROUGE. Previously, one could only estimate the quality of these labels by evaluating the extractive models trained using such labels, i.e., comparing their extracted summaries with the reference summaries (also approximately via ROUGE). Now that the humanannotated FAMs serve as ground-truth extractive labels, we can evaluate how well each approach performs accurately. Sentence Regression Approaches. We briefly review recent sentence regression approaches as follows. Nallapati et al. (2017) greedily select sentences that maximize ROUGE-1 F1 until adding another sentence decreases it. Chen and Bansal (2018) find for each reference sentence the most similar sentence in the document by ROUGE-L recall. Zopf et al. (2018) argue that precision is a better measure than recall because it aims not at covering as much information but at wasting as little space as possible. Narayan et al. (2018c) measure sentence similarity by the average of ROUGE1/2/L F1. We also test other variants of ROUGE and TF-IDF, which represents sentences by TF-IDF features and measures their cosine similarity. 4.2 Evaluation with Machine-Created FAMs Results on Support Sentence Discovery. We first evaluate sentence regression with its original function, i.e., creating extractive labels (finding support sentences). We merge the support groups of each sample and calculate precision and recall (i.e., SAR). The performance of sentence regression approaches is shown in Table 6. The relatively low recall suggests that simply finding one support sentence for each facet as most existing approaches do would miss plenty of salient sentences, which could possibly worsen the models trained on such labels since the models would treat missed support sentences as unimportant ones. On the bright side, many sentence regression approaches achieve high 4948 precision. For instance, 90.0% document sentences labeled positive by Narayan et al. (2018c) indeed contain salient information. This is to some extent explainable as ROUGE captures lexical overlap and as we have shown, there are many copy-and-paste reference summaries in CNN/Daily Mail. Method Precision Recall F1 Lead-3 61.0 33.7 43.4 Greedy ROUGE-1 F1 58.2 30.8 40.3 TF-IDF 83.7 51.9 64.0 ROUGE-1 F1 88.9 53.1 66.5 ROUGE-2 F1 86.6 52.3 65.2 ROUGE-L Recall 89.3 53.7 67.1 ROUGE-L Precision 77.2 45.5 57.2 ROUGE-L F1 87.8 53.5 66.5 ROUGE-AVG F1 90.0 53.9 67.4 Table 6: Performance of sentence regression approaches regarding support sentence discovery. High precision and low recall are often observed. Correlation w. Human-Annotated FAMs. We then explore the correlation between humanannotated and machine-created FAMs by evaluating extractive methods against both of them. This time we extend to find for each facet multiple support sentences and put each support sentence into a separate support group. We measure the correlation between estimated and ground-truth FAR by Pearson’s r. We measure the correlation between system rankings induced from estimated and ground-truth FAR by Spearman’s ρ and Kendall’s τ. The detailed correlation results of representative approaches are listed in Table 7. We observe that creating three support groups consistently shows the highest correlation for the same sentence regression approach. Also, the FAMs created by ROUGE-1 F1 and ROUGE-AVG F1 have very high correlation with human annotation, indicating the usability and reliability of machine-created FAMs for system ranking. Method N = 1 N = 2 N = 3 r ρ τ r ρ τ r ρ τ ROUGE-1 F1 70.5 37.1 33.3 72.0 71.4 60.0 88.4 94.3 86.7 ROUGE-2 F1 11.0 25.7 20.0 43.4 65.7 46.7 88.4 65.7 60.0 ROUGE-L F1 34.0 54.3 46.7 37.5 42.9 20.0 62.3 42.9 46.7 ROUGE-AVG F1 49.6 54.3 46.7 46.1 65.7 46.7 83.2 82.9 73.3 Table 7: Correlation between ground-truth and estimated FAR scores by Pearson’s r, Spearman’s ρ, and Kendall’s τ. N denotes the number of support groups. FAR Prediction. Despite the high correlation, we also find that the estimated FAR scores may vary in range compared to the ground-truth FAR.7 Therefore, we further use the estimations of different sentence regression approaches to train a linear regression model to fit the ground-truth FAR (denoted as AutoFAR). We then calculate the estimated FAR scores on the whole test set of CNN/Daily Mail and use the trained linear regressor to predict a (supposedly) more accurate FAR score (denoted as AutoFAR-L). As shown in Table 8, the fitting of AutoFAR is very close to the ground-truth FAR, and the system ranking on the large-scale evaluation under AutoFAR-L follows a similar trend to that under FAR with Spearman’s ρ = 54.3. On the other hand, although our preliminary analysis on AutoFAR-L shows promising results, we also note that since the human annotation on the whole test set is lacking, the reliability of such extrapolation is not guaranteed and we leave more rigorous study with a larger number of systems and samples as future work. Method FAR AutoFAR AutoFAR-L FAR vs. AutoFAR(-L) BanditSum 44.7 44.8 44.7 Pearson’s r Lead-3 50.6 51.3 45.6 97.6 (42.9) FastRL(E) 50.8 51.0 43.1 Spearman’s ρ NeuSum 51.2 49.9 44.3 77.1 (54.3) Refresh 51.3 51.7 46.2 Kendall’s τ UnifiedSum(E) 54.8 54.5 46.9 60.0 (46.7) Table 8: FAR prediction via linear regression. AutoFAR(-L) denotes the results on the humanannotated subset (entire CNN/Daily Mail dataset). 5 Related Work Evaluation Metrics for Text Summarization. ROUGE (Lin, 2004) is the most widely used evaluation metric for text summarization. Extensions of ROUGE include ROUGE-WE (Ng and Abrecht, 2015) that incorporated word embedding into ROUGE, ROUGE 2.0 (Ganesan, 2018) that considered synonyms, and ROUGE-G (ShafieiBavani et al., 2018) that applied graph analysis to WordNet for lexical and semantic matching. Nevertheless, these extensions did not draw enough attention as the original ROUGE and recent advances (Gu et al., 2020; Zhang et al., 2019a) are still primarily evaluated by the vanilla ROUGE. Another popular branch is Pyramid-based metrics (Nenkova and Passonneau, 2004; Yang et al., 2016), which annotate and compare the Summarization Content Units (SCUs) in the summaries. 7The raw estimated FAR scores are provided in Appendix B Fig. 5 in the interest of space. 4949 FAR is related to Pyramid and HighRES (Hardy et al., 2019) in that Pyramid employs the summaries to annotate SCUs and HighRES highlights salient text fragments in the documents, while FAR considers both the summaries and documents. Beyond lexical overlap, embedding-based evaluation metrics (Zhang et al., 2019b; Zhao et al., 2019; Sun and Nenkova, 2019; Xenouleas et al., 2019) are gaining more traction along with the dominance of pre-trained language models. One straightforward way to incorporate embeddingbased metrics into FAR is to use them as similarity measures instead of the ROUGE-based approaches tested in Sec. 4.1 for automatic FAM creation (i.e., finding support sentences for each facet by the scores of embedding-based metrics). Such similarity measures are especially beneficial when the facet and its support sentences are not similar at the lexical level. Reflections on Text Summarization. There has been increasing attention and critique to the issues of existing summarization metrics (Schluter, 2017), methods (Kedzie et al., 2018; Shapira et al., 2018), and datasets (Jung et al., 2019). Notably, Kryscinski et al. (2019) conducted a comprehensive critical evaluation for summarization from various aspects. Zopf et al. (2018) investigated sentence regression approaches in a manner similar to ours but they could only evaluate them approximately against ROUGE as no ground-truth labels (FAMs) existed. Annotation and Analysis. Many recent studies conduct human annotation or evaluation on text summarization and other NLP tasks to gain useful insights. Hardy et al. (2019) annotated 50 documents to demonstrate the benefits of highlightbased summarization evaluation. Recent summarization methods (Paulus et al., 2017; Narayan et al., 2018c; Chen and Bansal, 2018) generally sampled 50 to 100 documents for human evaluation in addition to ROUGE in light of its limitations. Chen et al. (2016); Yavuz et al. (2018) inspected 100 samples and analyzed their category breakdown for reading comprehension and semantic parsing, respectively. We observed similar trends when analyzing different subsets of the FAMs, indicating that our findings are relatively stable. We thus conjecture that our sample size is sufficient to verify our hypotheses and benefit future research. 6 Conclusion and Future Work We propose a facet-aware evaluation setup for better assessment of information coverage in extractive summarization. We construct an extractive summarization dataset and demonstrate the effectiveness of facet-aware evaluation on this newly constructed dataset, including better human correlation on the assessment of information coverage, and the support for fine-grained evaluation as well as comparative analysis. We also evaluate sentence regression approaches and explore the feasibility of fully-automatic evaluation without any human annotation. In the future, we will investigate multi-document summarization datasets such as DUC (Paul and James, 2004) and TAC (Dang and Owczarzak, 2008) to see whether our findings coincide when multiple references are provided. We will also explore better sentence regression approaches for the use of both extractive summarization methods and automatic FAM creation. Acknowledgement We thank Woojeong Jin and Jiaming Shen for the valuable feedback on the paper draft. We thank anonymous reviewers for the constructive comments. Research was sponsored in part by US DARPA KAIROS Program No. FA8750-19-21004 and SocialSim Program No. W911NF-17-C0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HDTRA11810026. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of DARPA or the U.S. Government. References Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2358–2367, Berlin, Germany. Association for Computational Linguistics. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686, Melbourne, Australia. Association for Computational Linguistics. 4950 Hoa Trang Dang and Karolina Owczarzak. 2008. Overview of the tac 2008 update summarization task. In TAC. Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, and Jackie Chi Kit Cheung. 2018. BanditSum: Extractive summarization as a contextual bandit. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3739–3748, Brussels, Belgium. Association for Computational Linguistics. Kavita Ganesan. 2018. Rouge 2.0: Updated and improved measures for evaluation of summarization tasks. arXiv preprint arXiv:1803.01937. Xiaotao Gu, Yuning Mao, Jiawei Han, Jialu Liu, Hongkun Yu, You Wu, Cong Yu, Daniel Finnie, Jiaqi Zhai, and Nicholas Zukoski. 2020. Generating representative headlines for news stories. WWW. Hardy Hardy, Shashi Narayan, and Andreas Vlachos. 2019. HighRES: Highlight-based reference-less evaluation of summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3381–3392, Florence, Italy. Association for Computational Linguistics. Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 132–141, Melbourne, Australia. Association for Computational Linguistics. Taehee Jung, Dongyeop Kang, Lucas Mentch, and Eduard Hovy. 2019. Earlier isn’t always better: Subaspect analysis on corpus and system biases in summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3315–3326, Hong Kong, China. Association for Computational Linguistics. Chris Kedzie, Kathleen McKeown, and Hal Daum´e III. 2018. Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818–1828, Brussels, Belgium. Association for Computational Linguistics. Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540– 551, Hong Kong, China. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In AAAI, pages 3075–3081. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, C¸ a˘glar Gulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics. Shashi Narayan, Ronald Cardenas, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata, Jiangsheng Yu, and Yi Chang. 2018a. Document modeling with external attention for sentence extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2020–2030, Melbourne, Australia. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018c. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759, New Orleans, Louisiana. Association for Computational Linguistics. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 145–152, Boston, Massachusetts, USA. Association for Computational Linguistics. Jun-Ping Ng and Viktoria Abrecht. 2015. Better summarization evaluation with word embeddings for ROUGE. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1925–1930, Lisbon, Portugal. Association for Computational Linguistics. Over Paul and Yen James. 2004. An introduction to duc-2004. In Proceedings of the 4th Document Understanding Conference (DUC 2004). 4951 Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Natalie Schluter. 2017. The limits of automatic summarisation according to ROUGE. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 41–45, Valencia, Spain. Association for Computational Linguistics. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond Wong, and Fang Chen. 2018. A graphtheoretic summary evaluation for ROUGE. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 762– 767, Brussels, Belgium. Association for Computational Linguistics. Ori Shapira, David Gabay, Hadar Ronen, Judit BarIlan, Yael Amsterdamer, Ani Nenkova, and Ido Dagan. 2018. Evaluating multiple system summary lengths: A case study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 774–778, Brussels, Belgium. Association for Computational Linguistics. Simeng Sun and Ani Nenkova. 2019. The feasibility of embedding based automatic evaluation for single document summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1216–1221, Hong Kong, China. Association for Computational Linguistics. Stratos Xenouleas, Prodromos Malakasiotis, Marianna Apidianaki, and Ion Androutsopoulos. 2019. SUMQE: a BERT-based summary quality estimation model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6004–6010, Hong Kong, China. Association for Computational Linguistics. Qian Yang, Rebecca J Passonneau, and Gerard De Melo. 2016. Peak: Pyramid evaluation via automated knowledge extraction. In Thirtieth AAAI Conference on Artificial Intelligence. Semih Yavuz, Izzeddin Gur, Yu Su, and Xifeng Yan. 2018. What it takes to achieve 100% condition accuracy on WikiSQL. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1702–1711, Brussels, Belgium. Association for Computational Linguistics. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J Liu. 2019a. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019b. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Linguistics. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654– 663, Melbourne, Australia. Association for Computational Linguistics. Markus Zopf, Eneldo Loza Menc´ıa, and Johannes F¨urnkranz. 2018. Which scores to predict in sentence regression for text summarization? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1782–1791, New Orleans, Louisiana. Association for Computational Linguistics. 4952 Reference Summary Document Extracted Summary SCU1.1, SCU2.1, SCU2.3, SCU3.1 SCU1.1 SCU1.2 SCU1.3 SCU1.4 SCU2.1 SCU2.2 SCU2.3  SCU3.1   ROUGE Evaluation (token match) Reference Sentence 2 (Facet 2) Reference Sentence 3 (Facet 3) Document Sentence 2 Document Sentence 3 Document Sentence 6 Document Sentence 1 Document Sentence 2 Document Sentence 3 Document Sentence 4 Document Sentence 5 Document Sentence 6 Document Sentence 7 Facet-Aware Evaluation (facet match) FAM Annotation (colored) Pyramid SCU Annotation Pyramid SCU Annotation  (during evaluation) Pyramid Evaluation (SCU match) Reference Sentence 1 (Facet 1) Human Effort Annotation Evaluation p p p p Figure 4: Comparison of summarization metrics. Support sentences are marked in the same color as their corresponding facets. SCUs have to be annotated for each extracted summary during evaluation, while facetaware evaluation can be conducted automatically by comparing sentence indices. A Practical Notes on CNN/Daily Mail We note several issues of the CNN/Daily Mail dataset in the hope that the researchers working on this dataset are better aware of these issues. One issue is that sometimes the titles and image captions are introduced in the main body of the document by mistake (usually captured by “-lrbpictured -rrb-” or colons), which may lead to bias or label leaking for model training since the reference summaries are observed to be similar to the titles and image captions (Narayan et al., 2018a). For example, we found that if there is a sentence in the main body that is almost the same as one of the captions, then that sentence is very likely to be used in the reference summary. Many such cases can be found in our annotated data. We also found that in many documents, the 4-th sentence is “scroll down for video”. And if this sentence appears in one document, it is often the case that the first three sentences are good enough to summarize the whole document. This finding provides yet another evidence why a simple Lead-3 baseline could be rather strong on CNN/Daily Mail. In addition, sentences similar to the first three sentences can often be found afterward, which suggests that the first three sentences may not even belong to the main body of the document. B Additional Illustration In Fig. 4, we show the comparison of ROUGE, FAR, and Pyramid. In Fig. 5, we show the the ground-truth FAR scores, the FAR scores estimated by various sentence regression approaches, and the prediction of FAR scores by linear regression. C Detailed Examples We list below the full documents, reference summaries, and the corresponding FAMs of several examples shown in Table 2. In particular, Table 10 shows an example of several support groups covering the same facet. We release all of the annotated data to facilitate facet-aware evaluation and follow-up studies along this direction. 4953 BanditSum Lead­3 FastRL(E) NeuSum Refresh UnifiedSum(E) N = 1 0.35 0.40 0.45 0.50 0.55 FAR BanditSum Lead­3 FastRL(E) NeuSum Refresh UnifiedSum(E) N = 2 0.46 0.49 0.52 0.55 0.58 Human ROUGE­1 ROUGE­2 ROUGE­L ROUGE­AVG BanditSum Lead­3 FastRL(E) NeuSum Refresh UnifiedSum(E) N = 3 0.45 0.50 0.55 0.60 0.65 0.70 BanditSum Lead­3 FastRL(E) NeuSum Refresh UnifiedSum(E) LR Estimation and Prediction 0.44 0.46 0.48 0.50 0.52 0.54 Human LR­Small LR­Large Figure 5: The first three figures show the ground-truth and estimated FAR scores via human-annotated FAMs and machine-created FAMs. The fourth figure shows the fitting of linear regression on the human-annotated samples (LR-Small) and the prediction on the whole test set of CNN/Daily Mail (LR-Large). Systems are sorted in an ascending order by the ground-truth FAR on the human-annotated samples. ID: 1b2cc634e2bfc6f2595260e7ed9b42f77ecbb0ce Category: Noise Document: -LRB- CNN -RRB- Paul Walker is hardly the first actor to die during a production . But Walker ’s death in November 2013 at the age of 40 after a car crash was especially eerie given his rise to fame in the “ Fast and Furious ” film franchise . The release of “ Furious 7 ” on Friday (this is the only mention of “Friday” in the whole document) offers the opportunity for fans to remember – and possibly grieve again – the man that so many have praised as one of the nicest guys in Hollywood . “ He was a person of humility , integrity , and compassion , ” military veteran Kyle Upham said in an email to CNN . Walker secretly paid for the engagement ring Upham shopped for with his bride . “ We did n’t know him personally but this was apparent in the short time we spent with him . I know that we will never forget him and he will always be someone very special to us , ” said Upham . The actor was on break from filming “ Furious 7 ” at the time of the fiery accident , which also claimed the life of the car ’s driver , Roger Rodas . Producers said early on that they would not kill off Walker ’s character , Brian O’Connor , a former cop turned road racer . Instead , the script was rewritten and special effects were used to finish scenes , with Walker ’s brothers , Cody and Caleb , serving as body doubles . There are scenes that will resonate with the audience – including the ending , in which the filmmakers figured out a touching way to pay tribute to Walker while “ retiring ” his character . At the premiere Wednesday night in Hollywood , Walker ’s co-star and close friend Vin Diesel gave a tearful speech before the screening , saying “ This movie is more than a movie . ” (random quotation, may use other quotes as well) “ You ’ll feel it when you see it , ” Diesel said . “ There ’s something emotional that happens to you , where you walk out of this movie and you appreciate everyone you love because you just never know when the last day is you ’re gon na see them . ” There have been multiple tributes to Walker leading up to the release . Diesel revealed in an interview with the “ Today ” show that he had named his newborn daughter after Walker . Social media has also been paying homage to the late actor . A week after Walker ’s death , about 5,000 people attended an outdoor memorial to him in Los Angeles . Most had never met him . Marcus Coleman told CNN he spent almost $ 1,000 to truck in a banner from Bakersfield for people to sign at the memorial . “ It ’s like losing a friend or a really close family member ... even though he is an actor and we never really met face to face , ” Coleman said . “ Sitting there , bringing his movies into your house or watching on TV , it ’s like getting to know somebody . It really , really hurts . ” Walker ’s younger brother Cody told People magazine that he was initially nervous about how “ Furious 7 ” would turn out , but he is happy with the film . “ It ’s bittersweet , but I think Paul would be proud , ” he said . CNN ’s Paul Vercammen contributed to this report . Reference Summary: “ Furious 7 ” pays tribute to star Paul Walker , who died during filming Vin Diesel : “ This movie is more than a movie ” (random quotation) “ Furious 7 ” opens Friday (unimportant detail) FAMs: N/A Table 9: Full document, reference summary, and the FAMs presented in Table 2. 4954 ID: d58bf9387cd76f34bbb95fe25f8036015e5cc90a Category: Low Abstraction Document: Dover police say a man they believe to be the so-called ‘ rat burglar ’ who cut holes to tunnel into buildings has been arrested in Maryland . Authorities said in a news release Thursday that 49-year-old Thomas K. Jenkins of Capitol Heights , Maryland , was arrested last month by deputies with the Prince George ’s County Sheriff ’s Office . ‘ Rat burglar ’ : Thomas K. Jenkins , pictured is accused of robbing 18 Dover businesses From September 2014 to February 2015 , Jenkins allegedly carried out 18 commercial robberies in Dover , Delaware , authorities there said . ‘ During the investigation it was learned that the Prince George ’s County Sheriff ’s Department had a series of burglaries that were similar in nature to the eighteen committed in Dover , ’ the release said . Thomas Jenkins has been accused by the Dover Police Department of robbing multiple businesses . They are : Maple Dale Country Club Manlove Auto Parts Sovereign Properties Morgan Properties U and I Builders AMCO Check Cashing Colonial Investment 1st Capital Mortgage Advantage Travel Ancient Way Massage Tranquil Spirit Massage/Spa Christopher Asay Massage Morgan Communities Vincenzo ’s Restaurant Happy Fortune Chinese Restaurant Happy 13 Liquors Del-One Credit Union Pizza Time Melvin ’s Auto Service Source : Dover Police Department/The News Journal A car was found behind a building where a robbery took place and led deputies in Maryland to consider Jenkins as a suspect , authorities said . Law enforcement later found Jenkin ’s car and tracked where he went , Dover police said . Police say Jenkins had cut a hole in the roof of a commercial business in Maryland on March 9 and deputies arrested him as he fled . According to Dover police , ‘ Jenkins was found in possession of .45 - caliber handgun that was stolen from a business in Delaware State Police Troop 9 jurisdiction . A search of Jenkins vehicle revealed an additional .45 - caliber handgun stolen from the same business . ’ Jenkins is being held in Maryland and will face 72 charges involving the 18 burglaries in Dover when he is returned to Delaware . The charges he is facing break down to : four counts of wearing a disguise during the commission of a felony , eighteen counts of third-degree burglary , eighteen counts of possession of burglary tools , fourteen counts of theft under $ 1,500 , and eighteen counts of criminal mischief , two of which are felonies , authorities said . Cpl. Mark Hoffman with the Dover Police Department told the News Journal that Delaware State Police are planning to file charges over a 19th robbery at Melvin ’s Auto Service , which reportedly occurred in a part of Dover where jurisdiction is held by state police . Sharon Hutchison , who works at one of the businesses Jenkins allegedly robbed , told the newspaper ‘ He cut through two layers of drywall , studs and insulation . ’ The Prince George ’s County Sheriff ’s Department did not immediately return a request for information on what charges Jenkins is facing there . FAMs: • thomas k. jenkins , 49 , was arrested last month by deputies with the prince george ’s county sheriff ’s office , authorities said . [Support Group0][Sent0]: authorities said in a news release thursday that 49-year-old thomas k. jenkins of capitol heights , maryland , was arrested last month by deputies with the prince george ’s county sheriff ’s office . • police say jenkins had cut a hole in the roof of a commercial business in maryland on march 9 and deputies arrested him as he fled . [Support Group0][Sent0]: police say jenkins had cut a hole in the roof of a commercial business in maryland on march 9 and deputies arrested him as he fled . • jenkins is accused of carrying out multiple robberies in dover , delaware . [Support Group0][Sent0]: jenkins is being held in maryland and will face 72 charges involving the 18 burglaries in dover when he is returned to delaware . [Support Group1][Sent0]: ‘ rat burglar ’ : thomas k. jenkins , pictured is accused of robbing 18 dover businesses . [Support Group2][Sent0]: thomas jenkins has been accused by the dover police department of robbing multiple businesses . • he is facing 72 charges from the dover police department for 18 robberies . [Support Group0][Sent0]: jenkins is being held in maryland and will face 72 charges involving the 18 burglaries in dover when he is returned to delaware . • the delaware state police is planning to file charges over a 19th robbery , which occurred in a part of dover where jurisdiction is held by state police . [Support Group0][Sent0]: mark hoffman with the dover police department told the news journal that delaware state police are planning to file charges over a 19th robbery at melvin ’s auto service , which reportedly occurred in a part of dover where jurisdiction is held by state police . Table 10: Full document, reference summary, and the FAMs presented in Table 2. 4955 ID: d1fa0db909ce45fe1ee32d6cbb546e9d784bcf74 Category: Low Abstraction Document: -LRB- CNN -RRB- You probably never knew her name , but you were familiar with her work . Betty Whitehead Willis , the designer of the iconic “ Welcome to Fabulous Las Vegas ” sign , died over the weekend . She was 91 . Willis played a major role in creating some of the most memorable neon work in the city . The Neon Museum also credits her with designing the signs for Moulin Rouge Hotel and Blue Angel Motel Willis visited the Neon Museum in 2013 to celebrate her 90th birthday . Born about 50 miles outside of Las Vegas in Overton , she attended art school in Pasadena , California , before returning home . She retired at age 77 . Willis never trademarked her most-famous work , calling it “ my gift to the city . ” Today it can be found on everything from T-shirts to refrigerator magnets . People we ’ve lost in 2015 FAMs: • willis never trademarked her most-famous work , calling it “ my gift to the city ” [Support Group0][Sent0]: willis never trademarked her most-famous work , calling it “ my gift to the city . ” • she created some of the city ’s most famous neon work . [Support Group0][Sent0]: willis played a major role in creating some of the most memorable neon work in the city . Table 11: Full document, reference summary, and the FAMs presented in Table 2. 4956 ID: dc833f8b55e381011ce23f89ea909b9a141b5a66 Category: High Abstraction Document: -LRB- CNN -RRB- As goes Walmart , so goes the nation ? Everyone from Apple CEO Tim Cook to the head of the NCAA slammed religious freedom laws being considered in several states this week , warning that they would open the door to discrimination against gay and lesbian customers . But it was the opposition from Walmart , the ubiquitous retailer that dots the American landscape , that perhaps resonated most deeply , providing the latest evidence of growing support for gay rights in the heartland . Walmart ’s staunch criticism of a religious freedom law in its home state of Arkansas came after the company said in February it would boost pay for about 500,000 workers well above the federal minimum wage . Taken together , the company is emerging as a bellwether for shifting public opinion on hot-button political issues that divide conservatives and liberals . And some prominent Republicans are urging the party to take notice . Former Minnesota Gov. Tim Pawlenty , who famously called on the GOP to “ be the party of Sam ’s Club , not just the country club , ” told CNN that Walmart ’s actions “ foreshadow where the Republican Party will need to move . ” “ The Republican Party will have to better stand for ” ideas on helping the middle class , said Pawlenty , the head of the Financial Services Roundtable , a Washington lobbying group for the finance industry . The party ’s leaders must be “ willing to put forward ideas that will help modest income workers , such as a reasonable increase in the minimum wage , and prohibit discrimination in things such as jobs , housing , public accommodation against gays and lesbians . ” Walmart , which employs more than 50,000 people in Arkansas , emerged victorious on Wednesday . Hours after the company ’s CEO , Doug McMillon , called on Republican Gov. Asa Hutchinson to veto the bill , the governor held a news conference and announced he would not sign the legislation unless its language was fixed . Walmart ’s opposition to the religious freedom law once again puts the company at odds with many in the Republican Party , which the company ’s political action committee has tended to support . In 2004 , the Walmart PAC gave around $ 2 million to Republicans versus less than $ 500,000 to Democrats , according to data from the Center for Responsive Politics . That gap has grown less pronounced in recent years . In 2014 , the PAC spent about $ 1.3 million to support Republicans and around $ 970,000 for Democrats . It has been a gradual transformation for Walmart . In 2011 , the company bulked up its nondiscrimination policies by adding protections for gender identity . Two years later , the company announced that it would start offering health insurance benefits to same-sex partners of employees starting in 2014 . Retail experts say Walmart ’s evolution on these issues over the years is partly a reflection of its diverse consumer base , as well as a recognition of the country ’s increasingly progressive views of gay equality -LRB- support for same-sex marriage is at a new high of 59 % , according to a recent Wall Street Journal/NBC News poll -RRB- . “ It ’s easy for someone like a Chick-fil-A to take a really polarizing position , ” said Dwight Hill , a partner at the retail consulting firm McMillanDoolittle . “ But in the world of the largest retailer in the world , that ’s very different . ” Hill added : Same-sex marriage , “ while divisive , it ’s becoming more common place here within the U.S. , and the businesses by definition have to follow the trend of their customer . ” The backlash over the religious freedom measures in Indiana and Arkansas this week is shining a bright light on the broader business community ’s overwhelming support for workplace policies that promote gay equality . After Indiana Gov. Mike Pence , a Republican , signed his state ’s religious freedom bill into law , CEOs of companies big and small across the country threatened to pull out of the Hoosier state . The resistance came from business leaders of all political persuasions , including Bill Oesterle , CEO of the business-rating website Angie ’s List and a one-time campaign manager for former Indiana Gov. Mitch Daniels . Oesterle announced that his company would put plans on hold to expand its footprint in Indianapolis in light of the state ’s passage of the religious freedom act . NASCAR , scheduled to hold a race in Indianapolis this summer , also spoke out against the Indiana law . “ What we ’re seeing over the past week is a tremendous amount of support from the business community who are standing up and are sending that equality is good for business and discrimination is bad for business , ” said Jason Rahlan , spokesman for the Human Rights Campaign . The debate has reached presidential politics . National Republicans are being forced to walk the fine line of protecting religious liberties and supporting nondiscrimination . Likely GOP presidential candidate Jeb Bush initially backed Indiana ’s religious freedom law and Pence , but moderated his tone a few days later . The former Florida governor said Wednesday that Indiana could have taken a “ better ” and “ more consensus-oriented approach . ” “ By the end of the week , Indiana will be in the right place , ” Bush said , a reference to Pence ’s promise this week to fix his state ’s law in light of the widespread backlash . Others in the GOP field are digging in . Sen. Ted Cruz of Texas , the only officially declared Republican presidential candidate , said Wednesday that he had no interest in second-guessing Pence and lashed out at the business community for opposing the law . “ I think it is unfortunate that large companies today are listening to the extreme left wing agenda that is driven by an aggressive gay marriage agenda , ” Cruz said . Meanwhile , former Secretary of State Hillary Clinton , who previously served on Walmart ’s board of directors , called on Hutchinson to veto the Arkansas bill , saying it would “ permit unfair discrimination ” against the LGBT community . Jay Chesshir , CEO of the Little Rock Regional Chamber of Commerce in Arkansas , welcomed Hutchinson ’s pledge on Wednesday to seek changes to his state ’s bill . He said businesses are not afraid to wade into a politically controversial debate to ensure inclusive workplace policies . “ When it comes to culture and quality of life , businesses are extremely interested in engaging in debate simply because it impacts its more precious resource – and that ’s its people , ” Chesshir said . “ Therefore , when issues arise that have negative or positive impact on those things , then the business community will again speak and speak loudly . ” Reference Summary: While Republican Gov. Asa Hutchinson was weighing an Arkansas religious freedom bill , Walmart voiced its opposition (highly abstractive, hard to obtain by rephrasing original sentences) Walmart and other high-profile businesses are showing their support for gay and lesbian rights Their stance puts them in conflict with socially conservative Republicans , traditionally seen as allies FAMs: N/A Table 12: Full document, reference summary, and the FAMs presented in Table 2. 4957 ID: 1b2cc634e2bfc6f2595260e7ed9b42f77ecbb0ce Category: High Abstraction Document: -LRB- CNN -RRB- He ’s a blue chip college basketball recruit . She ’s a high school freshman with Down syndrome . At first glance Trey Moses and Ellie Meredith could n’t be more different . But all that changed Thursday when Trey asked Ellie to be his prom date . Trey – a star on Eastern High School ’s basketball team in Louisville , Kentucky , who ’s headed to play college ball next year at Ball State – was originally going to take his girlfriend to Eastern ’s prom . So why is he taking Ellie instead ? “ She ’s great ... she listens and she ’s easy to talk to ” he said . Trey made the prom-posal -LRB- yes , that ’s what they are calling invites to prom these days -RRB- in the gym during Ellie ’s P.E. class . Trina Helson , a teacher at Eastern , alerted the school ’s newspaper staff to the prom-posal and posted photos of Trey and Ellie on Twitter that have gone viral . She was n’t surpristed by Trey ’s actions . “ That ’s the kind of person Trey is , ” she said . To help make sure she said yes , Trey entered the gym armed with flowers and a poster that read “ Let ’s Party Like it ’s 1989 , ” a reference to the latest album by Taylor Swift , Ellie ’s favorite singer . Trey also got the OK from Ellie ’s parents the night before via text . They were thrilled . “ You just feel numb to those moments raising a special needs child , ” said Darla Meredith , Ellie ’s mom . “ You first feel the need to protect and then to overprotect . ” Darla Meredith said Ellie has struggled with friendships since elementary school , but a special program at Eastern called Best Buddies had made things easier for her . She said Best Buddies cultivates friendships between students with and without developmental disabilities and prevents students like Ellie from feeling isolated and left out of social functions . “ I guess around middle school is when kids started to care about what others thought , ” she said , but “ this school , this year has been a relief . ” Trey ’s future coach at Ball State , James Whitford , said he felt great about the prom-posal , noting that Trey , whom he ’s known for a long time , often works with other kids Trey ’s mother , Shelly Moses , was also proud of her son . “ It ’s exciting to bring awareness to a good cause , ” she said . “ Trey has worked pretty hard , and he ’s a good son . ” Both Trey and Ellie have a lot of planning to do . Trey is looking to take up special education as a college major , in addition to playing basketball in the fall . As for Ellie , she ca n’t stop thinking about prom . “ Ellie ca n’t wait to go dress shopping ” her mother said . “ Because I ’ve only told about a million people ! ” Ellie interjected . Reference Summary: College-bound basketball star asks girl with down syndrome to high school prom. (highly abstractive, hard to obtain by rephrasing original sentences) Pictures of the two during the “prom-posal” have gone viral. FAMs: N/A Table 13: Full document, reference summary, and the FAMs presented in Table 2.
2020
445
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4958–4968 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4958 More Diverse Dialogue Datasets via Diversity-Informed Data Collection Katherine Stasaski1, Grace Hui Yang2, and Marti A. Hearst1 1UC Berkeley 2Georgetown University 1{katie stasaski, hearst}@berkeley.edu [email protected] Abstract Automated generation of conversational dialogue using modern neural architectures has made notable advances. However, these models are known to have a drawback of often producing uninteresting, predictable responses; this is known as the diversity problem. We introduce a new strategy to address this problem, called Diversity-Informed Data Collection. Unlike prior approaches, which modify model architectures to solve the problem, this method uses dynamically computed corpuslevel statistics to determine which conversational participants to collect data from. Diversity-Informed Data Collection produces significantly more diverse data than baseline data collection methods, and better results on two downstream tasks: emotion classification and dialogue generation. This method is generalizable and can be used with other corpuslevel metrics. 1 Introduction It is well-documented that neural dialogue models struggle with generating engaging, relevant responses (Li et al., 2016a) and often produce banal responses such as “Yeah.” While this may be an appropriate response to a chitchat conversation, to keep a human participant engaged, diversity of responses is important. Diverse models vary the language used and the content referenced, and the generated utterances differ from the most typical conversation responses some proportion of the time. A model which only generates “Yeah,” “No,” and “I don’t know” is not diverse and is not be engaging to converse with. Past work has improved model diversity with innovation on model architectures and decoding strategies (Li et al., 2016a; Baheti et al., 2018; Li et al., 2017; Shao et al., 2017; Cao and Clark, 2017; Serban et al., 2017; Zhao et al., 2017). We build upon this work to propose a novel method to collect and determine more diverse data to train these models with. Our method can be used in conjunction with existing generation-specific model innovations. Some prior work on data collection processes has prioritized diversity. For instance, Rashkin et al. (2019) prompts crowdworkers to choose an underused emotion class to generate dialogue. This work encourages coverage of emotion classes, but does not consider the likelihood that some crowdworkers are better at producing certain types of data than others. This paper introduces Diversity-Informed Data Collection (DIDC), a new strategy for creating a dataset of conversational utterances via selecting which participants’ data to include in the collection. The strategy progressively builds up a more diverse sub-corpus from an existing larger collection. The main idea is to grow the sub-corpus by adding conversations sequentially and to assess the contribution of a new participant’s utterances to the diversity of the entire sub-corpus. This strategy is also applicable to on-the-fly collection of new datasets via crowdworking or similar methods. We implement DIDC with three diversity metrics: Outlier, Entropy, and Mean-IDF. Diversity-Informed Data Collection also provides a new method for finding an upper bound on a current corpus’s diversity via a Corpus-Wide Oracle which has access to information about which utterances are most diverse across the corpus. Prior work has not used corpus-level statistics to enhance the diversity of the collected data. Instead, when collecting data with crowdworkers, researchers have sought more diverse responses by altering the task (Kang et al., 2018) or by altering the stimulus (Larson et al., 2019). Prior work that trains neural dialogue models has not made use of subsets of existing datasets that exhibit properties 4959 of diversity. Our experiments show this strategy yields significantly more diverse data than baseline collection processes. It also yields better, more diverse model output on two downstream tasks. Additionally, this method can be implemented for other metrics which are defined relative to the corpus. 2 Related Work Past work in neural dialogue generation investigates how to improve diversity in conversational responses. Additionally, past work in crowdsourcing data collection has explored optimizing crowdsourcing data collection processes. 2.1 Diverse Neural Dialogue Generation Improving model diversity is an important goal in dialogue generation (Li et al., 2016a), with several related works proposing architecture and training improvements to increase diversity. Decoding methods to increase model diversity include Li et al. (2016a) which proposes maximizing mutual information between the source sentence and response rather than maximizing likelihood. Other approaches have focused on beam search and incentivizing diverse beams, by adding similarity constraints at decoding (Baheti et al., 2018), penalizing items on the beam that are similar and reranking resulting items (Li et al., 2016b), or penalizing words which have already been generated in a current beam (Li et al., 2017). Shao et al. (2017) uses attention over already-generated words at decode time and beam reranking. Adding a temperature parameter to sharpen the decoder’s distribution has also been studied (Cao and Clark, 2017). Neural architecture improvements have also been explored, such as conditioning on a latent variable at decode time (Serban et al., 2017; Zhao et al., 2017) or a multi-headed attention mechanism which aims to capture different parts of the context (Tao et al., 2018). Zhang et al. (2018) explore the use of Generative Adversarial Networks to incentivize diversity. These more diverse models and decoding methods can be used in conjunction with Diversity-Informed Data Collection, since it attempts to improve the data that neural models are trained on in an earlier part of the model pipeline. 2.2 Crowdsourcing Related work in crowdsourcing has approached the optimization problem of how to assign crowdworkers to different tasks. 2.2.1 Crowdworker Task Assignment Basu Roy et al. (2015) formulates the problem of matching crowdworkers to tasks depending on skill levels for a set of concepts, pay rates, and HIT acceptance ratio. Follow-up work extends to collaborative crowdwork, where crowdworkers need to work together (Rahman et al., 2015). Assadi et al. (2015) pursue a similar task assignment setup. Additional work has attempted to automatically evaluate crowdworker quality of task performance and use the results to assign crowdworkers to new tasks on-the-fly (Fan et al., 2015). Further investigations have explored more adaptive assignment of tasks in real-time based on the likelihood that a participant will continually complete tasks (Kobren et al., 2015). Relatedly, Kumai et al. (2018) design a task allocation to minimize the stress of workers and maximize the resulting quality in terms of balanced skill performance. 2.2.2 Label Distribution Prediction An additional area related to our work is crowdworker label distribution prediction. Liu et al. (2019) has a crowdworking labeling task and trains models to predict the 50-label crowdworker distribution from 5-10 labels. Yang et al. (2018) aim to predict diversity in crowdworker answers to questions about an image to determine how many crowdworker responses are required to capture this diversity. 2.2.3 Dynamic Crowdworking Tasks Lin et al. (2018) tackle the task of employing crowdworkers to generate or label minority class examples to feed an active-learning model. They deploy a multi-armed bandit to choose crowdworking tasks based on how cheaply a minority-class example can be generated using the technique. Our approach, by contrast, adapts a distributional constraint across the entire collection. Zhou et al. (2018) explores the related task of changing crowdworker team instruction prompts. 2.2.4 Diverse Crowdworking Data collection approaches to incentivize diverse crowdworker output have also been studied. For instance, in EmpatheticDialogues (Rashkin et al., 2019) crowdworkers are conditioned to generate a response and an emotion (such as “afraid” or “proud”) associated with it. If workers do not generate text with certain emotions, they are prompted 4960 to select only from the underused labels. This is an example of trying to get better class coverage, but does not compare crowdworker output to the entire corpus of collected responses. Past work has also examined how the particular crowdworking task affects the diversity of crowdworker output. Kang et al. (2018) compare two crowdsourcing tasks for use in a downstream goaloriented dialogue system and examine resulting data diversity. While Kang et al. (2018) focus on choosing a task which produces diverse utterances, our work focuses on choosing a participant population which produces diverse data compared to data which has already been collected. Building on Kang et al. (2018), and perhaps most similar to our work is Larson et al. (2019), which tackles the problem of detecting outlier paraphrases generated by crowdworkers. To obtain multiple ways of expressing similar intent (such as opening a bank account), crowdworkers are asked to paraphrase sentences. After a round of paraphrase collection, the most diverse (the outlier) paraphrases are identified and placed back onto the crowdsourcing platform for another round of data collection. Our method is similarly aimed at increasing diversity of collected data. However, our method adapts the participant population for a set of tasks, which can be used in addition to an approach like Larson et al. (2019) which adapts the stimulus the population works on. 3 Diversity-Informed Data Collection We propose a method, Diversity-Informed Data Collection, which progressively builds up a corpus, and while doing so, identifies which conversation participants produce more diverse utterances compared to the rest of the in-progress corpus. More formally, our task is to progressively build a subcorpus, subc, of a given size from a larger, precollected corpus, c, where utterances are tied to IDs of specific participants. Our approach is aimed at building a diverse subcorpus subc. Our approach chooses which population of participants to collect data from for a given round. This population changes dynamically depending on calculated participant’s diversity scores. When utilizing a human-created, pre-existing corpus, we assume responses of the dataset are well-formed and of acceptable quality. With this assumption, we can maximize diversity scores without worrying that quality will be sacrificed for this diversity. However, when using this approach to collect data on-the-fly, additional quality controls may be necessary to ensure diverse data does not come at the cost of quality. We assess two experimental conditions: Simulated Data Collection and Corpus-Wide Oracle Upper-Bound. Simulated Data Collection is set up to mimic crowdsourcing data collection processes leveraging a large pre-collected corpus, while Corpus-Wide Oracle Upper-Bound gathers an maximally diverse sub-corpus of utterances. 3.1 Corpus For all experiments, we utilize the pre-collected EmpatheticDialogues corpus (Rashkin et al., 2019). We experiment with this corpus because it has crowdworker IDs associated with each utterance, which allows us to experiment with varying the participant population. Future work should conduct further experimentation to examine this approach’s adaptability to other chitchat and goaloriented datasets. The corpus has a large number of utterances (100,000) over 25,000 conversations. Each conversation is centered around a situation (such as getting a promotion at work) and is associated with one of 32 emotions, such as anger, excitement, or guilt. Each conversation takes place between two crowdworkers and is an average of 4.3 turns. There are 810 unique crowdworkers in this dataset, each completing an average of 132 utterances each across an average of 61 conversations. Our task is to select subc of size 10,000 from the larger EmpatheticDialogues corpus, c. We choose 10,000 as it is a sufficient number of utterances to train downstream models but still a small proportion (10%) of the original dataset, allowing examination of differences between sub-corpora. Implementation utilizes Cornell Convokit (Chang et al., 2019). 3.2 Simulated Data Collection We simulate real-time crowdsourcing using a large, pre-collected corpus, c. This allows for running multiple trials, each time selecting subc and examining significance of different diversity metrics and participant selection conditions. We simulate collecting data on-the-fly using an artificially-constructed environment (formally described in Algorithm 1), which completes multiple rounds of data collection until the progressively built sub-corpus size(subc) is the desired size. The 4961 Algorithm 1: Data collection simulation environment. ComputeDiversity depends on the diversity metric (Table 2), and EvalParticipants depends on the participant selection approach (Table 1). 1 function GatherData(Corpus c) 2 subc = ϵ 3 subCorpusSize = 10,000 4 numConvosToCollect = 2 5 population = [] 6 numParticipants = 10 7 while size(subc) < subCorpusSize do 8 while size(population < numParticipants) do 9 p = Sample from c.Participants 10 population.append(p) 11 c.Participants.remove(p) 12 end 13 participantDiversities = [] 14 for Participant p in population do 15 divp = 0 16 numUtts = 0 17 for i in numConvosToCollect do 18 convo = sample from p.Convos 19 for utt in convo do 20 divp += ComputeDiversity(utt, subc) 21 numUtts += 1 22 subc.append(utt) 23 end 24 p.Convos.remove(convo) 25 end 26 divp / = numUtts 27 participantDiversities.append(divp) 28 end // Which participants kept for next round based on diversity scores. 29 toKeep = EvalParticipants(participantDiversities) // Which participants still have data. 30 remaining = p in population where len(p.convos) ≥ numConvosToCollect 31 population = (toKeep ∩remaining) 32 end procedure assumes a fixed number of conversation participants in each round to gather data from (set to 10 for our experiments). We collect 2 conversations from each participant, chosen to allow the algorithm to recover from a participant with low diversity utterances while not judging a participant on just one conversation. Given a participant’s conversation, the diversity of an utterance in that conversation is stated in Equation 1: divutt = ComputeDiversity(utt, subc) (1) where ComputeDiversity depends on the diversity metric examined. We obtain a diversity score for each participant p’s set of utterances (uttsp) by averaging these diversity values: divp = 1 size(uttsp) X utt∈uttsp divutt (2) At the end of each round of data collection, uttp is added to subc for each participant. Additionally, the algorithm determines which subset of the participant population is retained for the next round based on a Participant Population Selection strategy. Our algorithm is greedy, since the order participants are added to the simulation and the order in which conversations are sampled both affect the participant’s likelihood to be retained for an additional round. However, crowdworker data collection itself is usually a greedy approach, with crowdworkers being assigned to tasks in the order they arrive and being allowed to complete many tasks until the dataset has been collected. 3.2.1 Participant Population Selection We experiment with three conditions to determine which sub-set of current participants (participants which were involved in the most recent round of data collection) should be retained for the next round of data collection, summarized in Table 1. Diverse Population: After collecting conversations from current participants, we choose to retain the most-diverse 70% of participants. Above Mean Population: Any participant whose diversity average falls above the mean diversity average of subc is retained in the pool of participants. Random Population: We compare to a special random baseline, where at each iteration we retain a random 70% of the participant population, to directly compare to the 70% of crowdworkers 4962 Condition Description Diverse Population Calculates each participant’s average relative diversity for current data collection round. We retain the 70% most-diverse participants of the current round. Above Mean Population Calculates each participant’s average relative diversity for current data collection round. Retains the participants whose diversity scores fall above the subcorpus’s mean diversity. Random Population Retains a random 70% of participants. CorpusWide Oracle Uses a Corpus-Wide Oracle which ranks utterances’ diversities in relation to the large dataset, c. Selects the most diverse utterances from these values independent of conversations. Table 1: Participant Population Selection conditions for Simulated Data Collection. The first three conditions are used in conjunction with Algorithm 1, while the last condition provides an upper-bound for diversity by utilizing a Corpus-Wide Oracle to determine the known most-diverse utterances. Metric Description Outlier Euclidean distance between utterance embedding and average embedding for all utterances in the sub-corpus (Larson et al., 2019) Entropy Entropy of utterance under a trigram language model trained on sub-corpus. Mean IDF Mean IDF value (Baeza-Yates et al., 1999) for words in utterance compared to the rest of the corpus. Table 2: Diversity metrics considered for data collection. retained in Diverse Population. We structure Random Population to collect data from roughly the same number of participants as Diverse Population, to examine differences between the resulting subc due to the the selection of which participants to retain for another round of data collection. 3.2.2 Diversity Metrics We experiment with three diversity metrics (Outlier, Entropy, and Mean IDF), summarized in Table 2. For all metrics, a new utterance utt is compared to the sub-corpus subc. The same utterance can have different diversity values depending on the utterances in subc. When augmenting pre-collected data, this allows for the collection of new utterances which are relatively diverse. Outlier: The embedding-based Outlier metric was proposed by Larson et al. (2019). Each utterance is encoded using a Universal Sentence Encoder (USE), which creates a sentence embedding by averaging word embeddings and passing the representation through a feedforward neural network, originally trained in a multi-task setting with supervised and unsupervised NLP tasks (Cer et al., 2018). An embedding of an utterance is created via: Eutt = USE(utt). A mean corpus vector is computed by averaging all of subc’s utterance’s vectors: Esubc = 1 size(subc) X u∈subc USE(u) (3) The diversity metric is the Euclidean distance between each new utterance and the mean corpus vector, or: sX i (Eui −Esubci)2 (4) where i is a dimension in Embedding E. Utterances which are farther from the mean corpus vector are given a higher diversity score. For Simulated Data Collection, the mean corpus vector shifts as data is collected. Therefore, depending on which utterances are already added in the sub-corpus, outlier values will change for a given utterance. Entropy: The Entropy score is determined by a non-neural trigram language model with smoothing for unseen words. The diversity score is given by: − 1 |x ∈Trigram(utt)| X x∈ Trigram(utt) p(x) log p(x) (5) The language model is only trained on utterances in the sub-corpus. 4963 Mean IDF: This metric calculates the mean IDF value for each word in the utterance (Baeza-Yates et al., 1999). IDF is calculated by treating each utterance in the corpus as a document. For a given utterance uttp and sub-corpus subc, Mean IDF is calculated via: 1 |uttp| X w∈uttp log  |{subc}| |{utt|w ∈utt}|  (6) where {subc} is the set of all utterances in the subc. The IDF of a word w in utt is the number of utterances in subc divided by the number of utterances containing w on a log scale. In addition to evaluating the robustness of our approaches, multiple diversity metrics are chosen with different conceptual types of diversity in mind. Outlier uses Universal Sentence Encoder embeddings which capture content (Cer et al., 2018). Entropy considers the probability of short phrases and can capture word combination diversity. Mean IDF considers the rarity of words being used for vocabulary diversity. Depending on the downstream application for a dialogue agent, the utility of these diversity measures may vary. 3.3 Corpus-Wide Oracle Upper Bound To provide an Upper Bound for the diversity of a sub-corpus subc, we create a Corpus-Wide Oracle which knows the value of each utterance’s diversity compared to the entire corpus c. For each utt ∈c, we compute diversity according to the methods in Table 2, where subc = c. For example, for Outlier, the mean corpus vector is 1 size(c) X x∈c USE(x) (7) which captures utterances from the entire corpus c. We calculate a Corpus-Wide Oracle diversity score, divoracle, for each utterance in c for each diversity metric. The Corpus-Wide Oracle is used to construct subc of any size consisting of the most diverse utterances. This sub-corpus can be used to compare against other collection methods, such as those in Simulated Data Collection, or as a way to enhance an existing collection by selecting out the most diverse utterances. After the Corpus-Wide Oracle ranks each utterance by diversity, we select the utterances with the top 10,000 diversity values to form subc. This serves as a use-case for collecting the maximallydiverse corpus for a given diversity metric. However, the Corpus-Wide Oracle might not be the best 10,000 utterances to collect for a subcorpus. The Corpus-Wide Oracle selects the utterances with the most diversity compared to the whole corpus, but this might be too much diversity without enough context since the Simulated Data Collection methods add entire conversations (not utterances in isolation) to subc. 4 Evaluation We evaluate the collected corpora both in terms of how diverse each sub-corpus is as well as performance on two downstream tasks: conversation emotion classification and dialogue generation. 4.1 Overall Diversity The first evaluation aims to answer the question of if our methods produce more diverse sub-corpora than the Random Population baseline. We examine the hypothesis that using a collection method with knowledge of diversity will result in subc that is significantly more diverse. For each data collection method, we compare the diversity of the sub-corpus to Random Population. Because diversity values are relative to subc, diversity of subc is measured via divoracle values. Table 3 shows the resulting divoracle values for datasets collected using our methods. Each value is the average of 100 trials, in which each trial collects a 10,000 utterance sub-corpus, subc. Significance results for all experiments use a two-sided t-test compared to the Random Population baseline. Both Diverse Population and Above Mean Population produce datasets which contain statistically significantly (p < 0.001) more diverse data compared to the Random Population baseline. The Corpus-Wide Oracle method produces the most diverse results overall, as expected as it is a collection of the top 10,000 most diverse utterances. Running Diversity-Informed Data Collection to collect datasets of size 5,000 produced similarly significant differences. We also examine the average number of participants out of the 810 total in c that are included for each method. Note in Table 3 the difference in Average Number of Participants from Random Population and Diverse Population to Above Mean Population and Corpus-Wide Oracle. Even though Above Mean Population is more diverse than Di4964 Condition Mean Score Avg. #Part Outlier Random Population 0.974 257.4 Diverse Population 0.979* 262.1 Above Mean Population 0.978* 516.9 Corpus-Wide Oracle 1.035* 539.0 Entropy Random Population −5.350 257.2 Diverse Population −5.320* 259.1 Above Mean Population −5.294* 359.1 Corpus-Wide Oracle −4.261* 481.0 Mean IDF Random Population 5.455 256.2 Diverse Population 5.659* 257.7 Above Mean Population 5.613* 357.5 Corpus-Wide Oracle 7.783* 546.0 Table 3: Results for diversity scores for each method of collecting corpora, by metric (Outlier, Entropy, and Mean IDF). Higher scores are better for all metrics. Also shown are the average number of participants (Avg. #Part) included out of a possible 810. * indicates statistical significance compared to the Random Population baseline (p < 0.001). verse Population for Entropy, it comes at the cost of more participants. Across all three diversity metrics, Above Mean Population requires about 100–200 additional participants than Diverse Population and Random Population. In an online setting where the cost to train new crowdworkers is high, the tradeoff between number of participants and diversity of content may be worth considering. 4.2 Classification To examine the quality of the resulting subc’s, we turn to downstream task evaluation. We first examine the task of classifying a conversation’s emotions from utterance text. Following Larson et al. (2019)’s justification, we would expect more diverse subc to result in higher classification accuracies, because more diverse responses should cover more variation in how people express emotions in conversation. 4.2.1 Classification Method We follow the methodology of Larson et al. (2019) who propose evaluating the diversity of goaloriented intent paraphrases. For their use case, classification models predict the intents from the paraphrase. For our case, each conversation in the EmpatheticDialogues corpus is associated with an emotion, such as anger or guilt. There are 32 such emotions throughout the corpus. The classification Condition SVM FastText Outlier Random Population 0.224 0.050 Diverse Population 0.234* 0.052 Above Mean Population 0.229 0.077* Corpus-Wide Oracle 0.100* 0.057* Entropy Random Population 0.218 0.052 Diverse Population 0.212† 0.049 Above Mean Population 0.254* 0.065* Corpus-Wide Oracle 0.134* 0.102* Mean IDF Random Population 0.220 0.052 Diverse Population 0.236* 0.052 Above Mean Population 0.257* 0.064* Corpus-Wide Oracle 0.131* 0.065* Table 4: Results for downstream classification accuracy averaged over 5-fold cross-validation over 10 trials: higher is better. The task is classification of emotions from a set of 32 possible given the text of dialogue responses in subc. † and * indicate p<0.05 and 0.001 respectively compared to Random Population. task is to predict which of the 32 emotions is expressed from a given utterance. Following Larson et al. (2019), we use two classification models: • Bag-of-Words SVM • FastText classifier Bag-of-Words SVM is an SVM using TF-IDF word features for prediction. The FastText classifier uses a neural classification model on top of fastText sentence embeddings (Joulin et al., 2017). The sub-corpora we collect using the different methods serve as the datasets to train these classification models. 4.2.2 Classification Results Classification task results are summarized in Table 4. Reported scores are averaged 5-fold crossvalidation and averaged over 10 runs of datasets collected from each method. While most conditions show Diverse Population significantly outperforms Random Population, it performs worse than Random Population with Entropy SVM and Entropy FastText and performs the same in Mean IDF FastText. Above Mean Population, on the other hand, outperforms the Random Population baseline on all conditions. This could potentially be due to the larger number of participants included in Above Mean Population. Surprisingly, Corpus-Wide Oracle does not perform the best in each category. We conjecture that too many diverse responses do not allow a classifica4965 tion model to learn common characteristics. 4.3 Generation Because the ultimate goal of collecting more diverse dialogue data is generating more diverse text, we evaluate diversity of neural text generation models trained on resulting corpora. 4.3.1 Generation Method Our task is to generate the next utterance in a dialogue, where the data collection processes collect utterances for subc. To train generation models, the input is the most recent parent utterance for each utt in subc, and utt is the target sentence to generate. When utt is the starting utterance in a conversation, the input is the situation associated with the conversation (such as planning a vacation). We train Sequence-to-Sequence models (Sutskever et al., 2014) with a 2-layer bidirectional encoder, hidden size 500, word vector size 64, Adam optimizer (Kingma and Ba, 2014), learning rate 0.001, trained for 3000 steps with batch size 32. Models are implemented using OpenNMT (Klein et al., 2017). We opt to use a standard model as it has fewer parameters to learn from smaller sub-corpora. We use the same parameter settings for all trained models. 4.3.2 Generation Results Generation task results are summarized in Table 5. We report on both mean and median length of model responses. Distinct-1 and Distinct-2 measure the proportion of unigrams and bigrams respectively in the set of model responses which are unique (Li et al., 2016a). We also report diversity of the generated responses calculated by the metrics used in subc collection (see Table 2). Our method results in models which produce more diverse output compared to baseline Random Population data collection. Interestingly, Diverse Population and Above Mean Population split the win on producing more diverse outputs. CorpusWide Oracle diversity results are sometimes lower and overall shorter in length than other methods; a potential reason is this condition only samples utterances, not conversations. Responses from the model trained on each subc are evaluated with all 3 diversity metrics, to examine potential interactions. Collecting subc with Entropy results in higher Mean IDF (and vice versa) compared to Random Population. Collecting subc with Outlier results in slightly lower Mean IDF (and vice versa) for Diverse Population and Above Mean Population compared to Random Population. There is not a consistent signal between Outlier and Entropy. Future work can further examine the relationships among these diversity metrics. 5 Discussion Diversity Considerations: Compared to a random baseline, Diversity-Informed Data Collection results in more diverse data than Random Population, which is shown to be more effective on downstream tasks. Future work can explore the effect of simultaneously optimizing multiple desirable measurements of diversity. However, we acknowledge that maximum diversity might not be what is desired and does not always result in the best downstream task performance, as indicated by the low Corpus-Wide Oracle downstream task performance. While we have not examined the tradeoff between diversity and quality, this can be explored in future work. Generalizability: Diversity-Informed Data Collection is generalizable to metrics other than diversity. Concretely, DIDC should be used when a desired metric (1) can compare one sample (or set of samples) to the in-progress dataset and (2) has variation among the participant population. Additionally, Diversity-Informed Data Collection can be applied to areas outside of dialogue data collection. For instance, DIDC could apply to collecting data with different emotions or sentiment. Another extension is to a specialized application domain, such as collecting dialogues for educational tutoring purposes, where our method could be used to collect more data from students who generate text consistent with certain types of misconceptions. Crowdworking Deployment: We evaluated on simulated crowdworking data by leveraging an existing corpus. This choice stems from the desire to test multiple runs of methods in a controlled environment, to reliably determine significance, and to work with data with an assumed level of quality. That said, our approach can be applied to real crowdworking tasks. Data can be gathered from several participants in parallel, where crowdworkers are added and offered new tasks or assigned qualifications based on their diversity. If our method is deployed in paid crowdworking tasks, Diverse Population might be more costeffective. In this particular investigation, we find 4966 Condition Mean Length Median Length D-1 D-2 Outlier Entropy Mean IDF Outlier Random Population 7.6 7 0.114 0.296 0.981 −3.088 5.504 Diverse Population 9.7 7 0.110 0.279 0.989* −3.354* 5.297§ Above Mean Population 8.1 7 0.063 0.169 0.960* −3.083 5.067* Corpus-Wide Oracle 3.8 4 0.204 0.448 1.042* −2.968* 6.789* Entropy Random Population 8.8 8 0.101 0.265 0.981 −3.281 5.263 Diverse Population 7.7 7 0.122 0.317 0.978 −3.197§ 5.411† Above Mean Population 6.6 6 0.092 0.226 0.982 −3.057* 5.474* Corpus-Wide Oracle 4.9 5 0.112 0.316 0.985§ −2.935* 5.781* Mean IDF Random Population 6.1 6 0.120 0.294 0.988 −3.036 5.526 Diverse Population 6.7 6 0.131 0.322 0.986 −2.955§ 5.797§ Above Mean Population 7.2 7 0.071 0.187 0.976* −2.937* 5.655 Corpus-Wide Oracle 3.4 3 0.214 0.449 1.008* −2.421* 8.327* Table 5: Downstream model generation results; higher numbers are better for all metrics. †, §, and * indicate p<0.05, 0.01, and 0.001 respectively. As Distinct-1 and Distinct-2 are summary statistics, we did not test significance. Diverse Population requires 100-200 fewer participants than Above Mean Population to create a dataset. Due to the time required to train new participants, there is a tradeoff between training a new worker and collecting more data form current participants. Caution should be taken in using this method on-the-fly without a quality check. Standard quality control methods (e.g., crowdworker qualifications, manual examination, crowdworker verification) should be deployed for from-scratch data collection. Crowdworker Fairness: Another important consideration for a live deployment is the crowdworker’s perspective of fairness. Because some crowdworkers are retained for more data collection than others, communicating this possibility to crowdworkers is essential (Brawley and Pury, 2016). Crowdworking best practices involve disclosing which quality metrics are being used to workers to set clear expectations (Bederson and Quinn, 2011). Additionally, combining our method with a method which alters the task crowdworkers complete (Kang et al., 2018) as opposed to restricting the crowdworking population could be a way to balance fairness with crowdworkers. Different task and population combinations could allow for all crowdworkers to participate in more tasks. 6 Conclusion We propose a method, Diversity-Informed Data Collection, which leverages this to produce more diverse datasets than the standard approach, and which performs better on downstream tasks. We define diversity of an utterance compared to the other utterances in a corpus. This allows for measurement of the impact of adding each utterance to the corpus. Working under the same assumption that a subset of participants produce diverse data compared to the corpus, our method can be extended to other diversity measures and can be modified to work with other corpus-level metrics. Acknowledgements This work was supported by an AWS Machine Learning Research Award, an NVIDIA Corporation GPU grant, a UC Berkeley Chancellor’s Fellowship, a National Science Foundation (NSF) Graduate Research Fellowship (DGE 1752814) and an NSF CAREER Award (IIS-1453721). We thank the three anonymous reviewers for their helpful comments. We additionally thank Cathy Chen, David Gaddy, Daniel Fried, Lucy Li, and Nate Weinman for their helpful feedback. References Sepehr Assadi, Justin Hsu, and Shahin Jabbari. 2015. Online assignment of heterogeneous tasks in crowdsourcing markets. In Proceedings of the Third AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2015, November 8-11, 2015, San Diego, California, USA, pages 12–21. AAAI Press. Ricardo A. Baeza-Yates, Berthier Ribeiro-Neto, et al. 4967 1999. Modern Information Retrieval, chapter 3, Modeling. ACM press New York, USA. Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. 2018. Generating more interesting responses in neural conversation models with distributional constraints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3970–3980, Brussels, Belgium. Association for Computational Linguistics. Senjuti Basu Roy, Ioanna Lykourentzou, Saravanan Thirumuruganathan, Sihem Amer-Yahia, and Gautam Das. 2015. Task assignment optimization in knowledge-intensive crowdsourcing. The VLDB Journal—The International Journal on Very Large Data Bases, 24(4):467–491. Benjamin B. Bederson and Alexander J. Quinn. 2011. Web workers unite! addressing challenges of online laborers. In CHI ’11 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’11, page 97–106, New York, NY, USA. Association for Computing Machinery. Alice M. Brawley and Cynthia L.S. Pury. 2016. Work experiences on mturk: Job satisfaction, turnover, and information sharing. Computers in Human Behavior, 54:531 – 546. Kris Cao and Stephen Clark. 2017. Latent variable dialogue models and their diversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 182–187, Valencia, Spain. Association for Computational Linguistics. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics. Jonathan P. Chang, Caleb Chiam, Liye Fu, Andrew Wang, Justine Zhang, and Cristian DanescuNiculescu-Mizil. 2019. Convokit: The cornell conversational analysis toolkit. Ju Fan, Guoliang Li, Beng Chin Ooi, Kian-lee Tan, and Jianhua Feng. 2015. Icrowd: An adaptive crowdsourcing framework. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, SIGMOD ’15, page 1015–1030, New York, NY, USA. Association for Computing Machinery. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431, Valencia, Spain. Association for Computational Linguistics. Yiping Kang, Yunqi Zhang, Jonathan K. Kummerfeld, Lingjia Tang, and Jason Mars. 2018. Data collection for dialogue system: A startup perspective. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 33–40, New Orleans - Louisiana. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Cite arxiv:1412.6980Comment: Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL. Ari Kobren, Chun How Tan, Panagiotis Ipeirotis, and Evgeniy Gabrilovich. 2015. Getting more for less: Optimized crowdsourcing with dynamic tasks and goals. In Proceedings of the 24th International Conference on World Wide Web, WWW ’15, pages 592– 602, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Katsumi Kumai, Masaki Matsubara, Yuhki Shiraishi, Daisuke Wakatsuki, Jianwei Zhang, Takeaki Shionome, Hiroyuki Kitagawa, and Atsuyuki Morishima. 2018. Skill-and-stress-aware assignment of crowd-worker groups to task streams. In Proceedings of the Sixth AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2018, Z¨urich, Switzerland, July 5-8, 2018, pages 88–97. Stefan Larson, Anish Mahendran, Andrew Lee, Jonathan K. Kummerfeld, Parker Hill, Michael A. Laurenzano, Johann Hauswald, Lingjia Tang, and Jason Mars. 2019. Outlier detection for improved data quality and diversity in dialog systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 517–527, Minneapolis, Minnesota. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. 4968 Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169, Copenhagen, Denmark. Association for Computational Linguistics. Christopher H. Lin, Mausam, and Daniel S. Weld. 2018. Active learning with unbalanced classes and example-generation queries. In Proceedings of the Sixth AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2018, Z¨urich, Switzerland, July 5-8, 2018, pages 98–107. AAAI Press. Tong Liu, Akash Venkatachalam, Pratik Sanjay Bongale, and Christopher Homan. 2019. Learning to predict population-level label distributions. In Companion Proceedings of The 2019 World Wide Web Conference, WWW ’19, pages 1111–1120, New York, NY, USA. ACM. H. Rahman, S. B. Roy, S. Thirumuruganathan, S. Amer-Yahia, and G. Das. 2015. Task assignment optimization in collaborative crowdsourcing. In 2015 IEEE International Conference on Data Mining, pages 949–954. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy. Association for Computational Linguistics. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI’17, page 3295–3301. AAAI Press. Louis Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Generating long and diverse responses with neural conversation models. CoRR, abs/1701.03185. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Chongyang Tao, Shen Gao, Mingyue Shang, Wei Wu, Dongyan Zhao, and Rui Yan. 2018. Get the point of my utterance! learning towards effective responses with multi-head attention mechanism. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4418–4424. International Joint Conferences on Artificial Intelligence Organization. Chun-Ju Yang, Kristen Grauman, and Danna Gurari. 2018. Visual question answer diversity. In Sixth AAAI Conference on Human Computation and Crowdsourcing. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS’18, page 1815–1825, Red Hook, NY, USA. Curran Associates Inc. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654–664, Vancouver, Canada. Association for Computational Linguistics. Sharon Zhou, Melissa Valentine, and Michael S. Bernstein. 2018. In search of the dream team: Temporally constrained multi-armed bandits for identifying effective team structures. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, pages 108:1–108:13, New York, NY, USA. ACM.
2020
446
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4969–4983 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4969 S2ORC: The Semantic Scholar Open Research Corpus Kyle Lo†∗ Lucy Lu Wang†∗ Mark Neumann† Rodney Kinney† Daniel S. Weld†‡ †Allen Institute for Artificial Intelligence ‡Paul G. Allen School of Computer Science & Engineering, University of Washington {kylel, lucyw}@allenai.org Abstract We introduce S2ORC,1 a large corpus of 81.1M English-language academic papers spanning many academic disciplines. The corpus consists of rich metadata, paper abstracts, resolved bibliographic references, as well as structured full text for 8.1M open access papers. Full text is annotated with automaticallydetected inline mentions of citations, figures, and tables, each linked to their corresponding paper objects. In S2ORC, we aggregate papers from hundreds of academic publishers and digital archives into a unified source, and create the largest publicly-available collection of machine-readable academic text to date. We hope this resource will facilitate research and development of tools and tasks for text mining over academic text. 1 Introduction Academic papers are an increasingly important textual domain for natural language processing (NLP) research. Aside from capturing valuable knowledge from humankind’s collective research efforts, academic papers exhibit many interesting characteristics – thousands of words organized into sections, objects such as tables, figures and equations, frequent inline references to these objects, footnotes, other papers, and more. Different types of resources have been used to support research over academic papers. Citation graphs like AMiner’s Open Academic Graph (Tang et al., 2008), the Microsoft Academic Graph (MAG) (Shen et al., 2018), and the Semantic Scholar literature graph (Ammar et al., 2018), have had widespread application in bibliometrics, science-of-science, information retrieval, and network analysis. Digital archives like arXiv,2 ∗denotes equal contribution 1Instructions for access to the data and model are available at https://github.com/allenai/s2orc/. 2https://arxiv.org Figure 1: Inline citations and references to figures and tables are annotated in S2ORC’s structured full text. Citations are linked to bibliography entries, which are linked to other papers in S2ORC. Figure and table references are linked to their captions. PubMed Central,3 CiteSeerX (Giles et al., 1998),4 and the ACL Anthology (Bird et al., 2008),5 are popular resources for deriving large text corpora for summarization and language modeling or, with further annotation, development of datasets for tasks like entity extraction, text classification, parsing, and discourse analysis. We focus on bibliometrically-enhanced derivations of these corpora, such as the ACL Anthology Network (AAN) (Radev et al., 2009)6 derived from the ACL Anthology, RefSeer (Huang et al., 2015) derived from CiteSeerX, and Saier and F¨arber (2019) derived from arXiv, which combine useful aspects of citation graphs and raw text corpora. These resources provide citation mentions linked to paper identifiers in their corresponding digital archives, such as the ACL Anthology and CiteSeerX, or to nodes in citation graphs such as MAG, enabling new forms of cross-paper discourse analysis (e.g., studying how or why papers are related). 3https://www.ncbi.nlm.nih.gov/pmc 4https://citeseerx.ist.psu.edu 5https://www.aclweb.org/anthology 6http://aan.how/ 4970 Corpus Papers w/ body text Citation contexts References to tables / figures / equations Linked to graph Academic disciplines S2ORC (PDF-parse) 8.1M full text yes S2ORC (full) multi S2ORC (LATEX-parse) 1.5M full text yes S2ORC (full) physics, math, CS PubMed Central (OA) 2.6M full text yes PubMed bio, med AAN (Radev et al., 2009) 25k full text no ACL Anthology comp ling Saier and F¨arber (2019)† 1.0M snippets no MAG physics, math, CS RefSeer (Huang et al., 2015) 1.0M snippets no CiteSeerX multi Table 1: A comparison of S2ORC with other publicly-available academic text corpora. Of the other corpora: PubMed Central (OA) links to PubMed, which contains 30M papers at the time of writing. AAN links to the ACL Anthology (which contained 25k papers at the time of dataset construction, and 54k papers at the time of writing). Saier and F¨arber (2019) is derived from arXiv and links to MAG (which contained 213M papers and other non-paper documents at the time of dataset construction, and 226M nodes at the time of writing). RefSeer links to CiteSeerX (which contained 1M papers at the time of dataset construction, and 6M papers at the time of writing). S2ORC contains three times more full text papers than PubMed Central (OA), the next largest corpus with bibliometric enhancements, while covering a more diverse set of academic disciplines. Citations in S2ORC are linked to the full set of S2ORC papers, 81.1M paper nodes derived from Semantic Scholar. In addition, the LATEX subset of S2ORC captures additional structure omitted by Saier and F¨arber (2019), who also parse LATEX sources from arXiv. †Saier and F¨arber (2020) is an update to this work which now includes full text. It is released concurrently with this work. Yet, existing corpora are not without their limitations. Some cover a small number of papers (e.g. AAN), are domain-specific (e.g. AAN, PubMed Central, Saier and F¨arber (2019)), or may not provide usable full text (e.g. Saier and F¨arber (2019) and RefSeer). To address these issues, we introduce S2ORC,7 the Semantic Scholar8 Open Research Corpus, a large publicly-available collection of 81.1M academic papers covering dozens of academic disciplines. Each paper is associated with metadata and abstracts aggregated from hundreds of trusted sources such as academic publishers and literature archives like PubMed and arXiv. Notably, we release structured, machinereadable full text extracted from PDFs for 8.1M papers which we’ve identified as having open access status. S2ORC full text preserves meaningful structure, e.g., paragraph breaks, section headers, inline citation mentions, references to tables and figures, and resolved citation links to other papers. Additionally, we provide 1.5M full text LATEX parses from which we have extracted, in addition to citations and references, the source text of tables and mathematical formulas. As shown in Table 1, S2ORC provides substantially more structured full text papers and covers a more diverse set of academic disciplines than other resources. 7pronounced “stork” 8The papers included in S2ORC are a curated subset of the papers in the Semantic Scholar literature graph (Ammar et al., 2018) that focuses only on English-language papers with abstracts or full text available. See §2.5 for details on filtering through Semantic Scholar papers. In this paper, we describe the construction of S2ORC (§2). We provide summary statistics of the corpus (§3) and evaluate the data quality (§4). We then evaluate a BERT model pretrained on S2ORC (§5), and discuss potential applications to a variety of NLP and analysis tasks over academic text (§6). Finally, we compare S2ORC with other publicly-available academic text corpora (§7). 2 Constructing the corpus S2ORC is constructed using data from the Semantic Scholar literature corpus (Ammar et al., 2018). Papers in Semantic Scholar are derived from numerous sources: obtained directly from publishers, from resources such as MAG, from various archives such as arXiv or PubMed, or crawled from the open Internet. Semantic Scholar clusters these papers based on title similarity and DOI overlap, resulting in an initial set of approximately 200M paper clusters. To construct S2ORC, we must overcome challenges in (i) paper metadata aggregation, (ii) identifying open access publications, and (iii) clustering papers, in addition to identifying, extracting, and cleaning the full text and bibliometric annotations associated with each paper. The pipeline for creating S2ORC is: 1) Process PDFs and LATEX sources to derive metadata, clean full text, inline citations and references, and bibliography entries, 2) Select the best metadata and full text parses for each paper cluster, 4971 3) Filter paper clusters with insufficient metadata or content, and 4) Resolve bibliography links between paper clusters in the corpus. Details for these steps are provided below. See Appendix §A for definitions of terminology. The output of this pipeline is visualized in Figure 1. 2.1 Processing PDFs We process PDFs from the Semantic Scholar corpus using SCIENCEPARSE v3.0.09 and GROBID v0.5.510 (Lopez, 2009). Our processing pipeline is described below. Selecting PDFs We remove PDFs which are less likely to be academic papers. SCIENCEPARSE and GROBID are not optimized for processing nonpaper academic documents such as dissertations, reports, slides, etc., and this filtering step is necessary to increase output data quality. See Appendix §B for filter details. There are around 31.3M PDFs associated with approximately 200M initial paper clusters, and 30.5M PDFs are selected for processing based on these filtering criteria. Extracting structured data from PDFs We use SCIENCEPARSE to extract title and authors from each PDF.11 We then use GROBID to process each PDF. From the XML output of GROBID, we extract (i) metadata such as title, authors, and abstract, (ii) paragraphs from the body text organized under section headings, (iii) figure and table captions, (iv) equations, table content, headers, and footers, which we remove from the body text, (v) inline citations in the abstract and body text, (vi) parsed bibliography entries with title, authors, year, and venue identified, and (vi) links between inline citation mentions and their corresponding bibliography entries. Postprocessing GROBID output We postprocess GROBID output using regular expressions to classify the parenthetical citation style of a paper as BRACKET (e.g. [2]), NAME-YEAR (e.g. ABC, 2019), or OTHER (superscripts and other mixed styles). We focus on addressing two types of common errors in GROBID’s inline citation extractions: (i) false positives resulting from superscripts or equation references being recognized as 9https://github.com/allenai/scienceparse 10https://github.com/kermitt2/grobid 11Our evaluations suggest SCIENCEPARSE outperforms GROBID for title and author extraction. inline citations in papers with BRACKET-style citations, and (ii) false negatives resulting from an inability to expand bracket citation ranges (e.g. “[3]-[5]” should be expanded to “[3], [4], [5]” before linking). False positives are detected using regular expressions and removed from GROBID output. Bracket citation ranges are manually expanded and linked to their corresponding bibliography entries. The resulting parses are expressed in JSON format.12 2.2 Processing LATEX source LATEX document source is available for a majority of arXiv submissions, and where available, are used to construct a full text parse. We retrieve body text, section headers, figure/table captions, table representations, equations, and inline citations and references directly from LATEX source. Inspired by Saier and F¨arber (2019), we first convert LATEX source into XML documents and then extract structured information from the XML. Due to direct access to source, the accuracy of citation span, reference, caption, section header, and equation detection is near-perfect. We process 1.5M papers from LATEX source derived from arXiv, all of which are included as part of S2ORC. Surprisingly, due to the diversity of ways in which authors define metadata in LATEX, the quality of metadata extracted from LATEX documents is worse than those extracted from PDF. Therefore, we do not use LATEX-derived metadata for paper clustering or metadata selection. 2.3 Selecting canonical metadata Canonical values for title, authors and other metadata fields are selected from among the papers in a cluster. First, if a cluster contains multiple PDFs, we select one to be canonical. This can occur, for example, in a cluster containing an arXiv preprint and its eventual camera-ready version. We preferentially select PDFs from open access sources and break ties by prioritizing PDFs for which there exist richer publisher-provided metadata (e.g. abstract, year, venue, DOI). If the selected PDF is associated with publisher-provided metadata, we select those publisher-provided metadata fields to be canonical. In cases where publisher-provided metadata is incomplete, we use majority voting to select 12The S2ORC data format is described at https:// github.com/allenai/s2orc 4972 canonical metadata values. We break ties by minimizing the total number of sources from which we select metadata (e.g., if IEEE provides title, authors and abstract, DBLP provides title and authors, and arXiv provides title and abstract, we prioritize selecting IEEE over the union of DBLP and arXiv). S2ORC metadata fields include title, author, year, venue, journal, abstract, and identifiers (DOI, PubMed, PubMed Central (PMC), arXiv, and ACL Anthology). In cases where the title and authors are not provided by any publishers, we derive the values for these fields from the parsed PDF, prioritizing SCIENCEPARSE over GROBID. We further comment on paper clustering as it pertains to metadata selection in Appendix §C. 2.4 Assembling the corpus We construct the final corpus by assembling clustered paper metadata with GROBID and LATEX parse objects. We associate the GROBID parse with the S2ORC paper object if a valid GROBID parse is produced from the PDF, and the PDF is open access. Open access status is assigned if a paper is derived from arXiv, ACL Anthology, PubMed Central (OA), and/or associated with an open-access DOI in the Unpaywall database.13 If the PDF is not open access, we only include the bibliography from the GROBID parse in S2ORC. If arXiv LATEX source is available for the paper cluster, we also associate the LATEX parse with the S2ORC paper object. 2.5 Filtering paper clusters We further filter paper clusters to remove papers with (i) no title, (ii) no authors, (iii) fewer than 100 characters of abstract and body text, and (iv) where English is not the primary language. The first three filters remove papers that provide little value for bibliometric-based or text-based analyses. The English language filter14 reduces GROBID parsing errors. All filters are applied in series. Subsequently, 95.5M paper clusters are filtered out based on the aforementioned criteria and removed from the corpus. The distribution of filtered papers is given in Table 2. We note that a large number of paper clusters are filtered out; 80.0M of these filtered clusters have no associated publisher-provided abstract or associated PDF and 13Unpaywall 2019-04-19 data dump 14We use the cld2 tool for language detection with a threshold of 0.9 over the English language score. do not provide significant value to our dataset in their current state. Although these papers that lack text may be useful as cite-able nodes in S2ORC, they are generally of lower quality and are filtered out of the corpus to improve corpus quality. Filter Number of papers No title 20k No authors 0.3M < 100 chars of text 80.0M Not English 15.2M Table 2: Post-processing data quality filters for papers 2.6 Linking bibliographies to papers Each bibliography entry in both GROBID and LATEX parses are linked to the most similar papers in the corpus. For linking, we score each bibliography entry and paper cluster pair using a similarity score computed between their titles. Each title is first normalized (i.e. white spaces stripped, lower-cased, special characters removed) and represented by its character 3-grams. The similarity score Stitle is computed as the harmonic mean between a Jaccard index and a containment metric: Stitle = 2 × J × C J + C (1) where the Jaccard index J and containment metric C are computed from the n-grams of the two titles N1 and N2 as: J = |N1 ∩N2| |N1 ∪N2| C = |N1 ∩N2| min (|N1|, |N2|) For each bibliography entry, the bibliographypaper pair with the highest similarity score above 0.8 is output as the correct link. Otherwise, the bibliography entry remains unlinked. We perform an evaluation of linking performance in §4. 3 The S2ORC dataset The resulting corpus consists of 81.1M papers. Our publisher-provided abstract coverage is 90.4%, or 73.4M papers. Our PDF coverage is 35.6%, or 28.9M papers. These PDFs are processed using the pipeline discussed in §2.1. The 4973 Total papers 81.1M Papers w/ PDF 28.9M (35.6%) Papers w/ bibliographies 27.6M (34.1%) Papers w/ GROBID full text 8.1M (10.0%) Papers w/ LaTeX full text 1.5M (1.8%) Papers w/ publisher abstract 73.4M (90.4%) Papers w/ DOIs 52.2M (64.3%) Papers w/ Pubmed IDs 21.5M (26.5%) Papers w/ PMC IDs 4.7M (5.8%) Papers w/ ArXiv IDs 1.7M (2.0%) Papers w/ ACL IDs 42k (0.1%) Table 3: Statistics on paper provenance. We note that categories are not mutually exclusive and do not sum to 100%. All papers in S2ORC have either a publisherprovided abstract or an associated PDF from which we derive full text and/or bibliography entries, or both. Statistic GROBID LATEX Paragraphs (abstract) 1.1 Paragraphs (body) 9.9 93.3* Inline cite spans (abstract) 0.7 Inline cite spans (body) 45.2 46.8 Bibliography entries 27.6 21.9 Linked bib. entries 19.3 6.8† Table 4: Extraction and linking statistics over PDF and LATEX parses. Reported values are averaged over all open access papers, which consist of 8.1M GROBIDparsed PDFs and 1.5M parsed LATEX sources. *LATEX preserves line breaks rather than paragraph breaks. †The lower number of linked bibliography entries in LATEX parses is due to large numbers of papers (mostly in the field of physics) for which the bibliography entries are formatted without paper titles. Our linking algorithm strongly depends on titles and fails to link these entries. vast majority of these PDFs are successfully processed using GROBID, and we extract bibliography entries for 27.6M of the 28.9M PDFs. We identify 8.1M of the 28.9M PDFs as open access (§2.4), and we provide full text for all papers in this open access subset. For the 1.5M papers for which LATEX source is available through arXiv, we further obtain and provide LATEX parses (§2.2). Using these extracted bibliographies, we resolve a total 380.5M citation links between papers (§2.6), 156.5M of which can be tied back to their inline citation mentions in the full text. See Table 3 for more provenance statistics. We provide statistics for the GROBID and LATEX full text parses and bibliography linking in Figure 2: Distribution of papers by Microsoft Academic field of study. Table 4. On average, LATEX parses contain many more “paragraphs” of body text, because LATEX source files preserve line breaks rather than paragraph breaks. We speculate that differences in bibliography entry and linking counts between the GROBID and LATEX parses are due to a combination of: (i) challenges in LATEX bibliography expansion and parsing, and (ii) differences in bibliography formatting in some math and physics venues (where bibliography entries do not include paper titles, which we depend on for bibliography linking). The distribution of academic disciplines in S2ORC is given in Figure 2 using Microsoft Academic fields of study. Not all papers in S2ORC can be found in Microsoft Academic – those not found are denoted as Unclassified. Approximately 677k papers have more than one primary Microsoft Academic field of study; Figure 2 represents only the top field of study for each paper. 4 Evaluation To evaluate the quality of our metadata selection, we randomly sample 500 paper clusters, restricting to those with PDFs. Within each sampled cluster, we determine whether the canonical title and authors match the title and authors in the selected canonical PDF. Inline citation detection and bibliography parsing are dependent on GROBID (Lopez, 2009). Ahmad and Afzal (2018) evaluate GROBID for de4974 Domain Dataset Reference Task SCIBERT S2ORCSCIBERT BC5CDR Li et al. (2016) NER 90.01 90.41 ± 0.06 JNLPBA Collier and Kim (2004) NER 77.28 77.70 ± 0.25 NCBI-disease Do˘gan et al. (2014) NER 88.57 88.70 ± 0.52 Biomed EBM-NLP Nye et al. (2018) PICO 72.28 72.35 ± 0.95 GENIA Kim et al. (2003) DEP (LAS) 90.43 90.80 ± 0.19 GENIA Kim et al. (2003) DEP (UAS) 91.99 92.31 ± 0.18 ChemProt Krallinger et al. (2017) REL 83.64 84.59 ± 0.93 SciERC Luan et al. (2018) NER 67.57 68.93 ± 0.19 CS SciERC Luan et al. (2018) REL 79.97 81.77 ± 1.64 ACL-ARC Jurgens et al. (2018) CLS 70.98 68.45 ± 2.47 Biomed & CS SciCite Cohan et al. (2019) CLS 85.49 84.76 ± 0.37 Multi-domain PaperField Beltagy et al. (2019) CLS 65.71 65.99 ± 0.08 Table 5: S2ORC-SCIBERT test results are comparable with reported SCIBERT test results on the set of tasks and datasets from Beltagy et al. (2019), to which we refer the reader for descriptions. Reported statistics are spanlevel F1 for NER, token-level F1 for PICO, dependency parsing (DEP), and macro-F1 for relation (REL) and text (CLS) classification. We report micro-F1 for ChemProt. All S2ORC-SCIBERT results are the mean ± standard deviation of 5 runs with different random seeds. Beltagy et al. (2019) do not report standard deviation or number of runs. tecting inline citations using a corpus of 5k CiteSeer papers, and found GROBID to have an F1score of 0.89 on this task. Tkaczyk et al. (2018) report GROBID as the best among 10 out-of-the-box tools for parsing bibliographies, also achieving an F1 of 0.89 in an evaluation corpus of 9.5k papers. We perform an evaluation over 200 randomly sampled papers from S2ORC and found comparable F1-scores for GROBID performance on both tasks. For bibliography linking, we randomly sample S2ORC papers (500 GROBID PDF parses and 100 LATEX parses) and select one linked bibliography entry from each sampled paper (while avoiding selecting multiple entries linked to the same paper). We determine whether the title and authors in the bibliography entry agree with the title and authors of the linked paper. We present these evaluation results in Table 6 and detail valuation criteria in Appendix §D. Evaluated task Title Authors Paper clustering 0.93 0.89 Bib. linking (GROBID) 1.00 0.96 Bib. linking (LATEX) 1.00 0.92 Table 6: Accuracy of paper clustering and bibliography linking for titles and authors in sampled evaluation sets. 5 Pretraining BERT on S2ORC To demonstrate the suitability of S2ORC for language model pretraining, we train BERT-Base (Devlin et al., 2019) on the parsed full text of S2ORC and show that the resulting model (S2ORC-SCIBERT) performs similarly to SCIBERT (Beltagy et al., 2019) on a diverse suite of scientific NLP tasks and datasets. While SCIBERT is a BERT-Base model also trained on multiple domains of scientific text, key differences in its pretraining corpus and vocabulary and those used for S2ORC-SCIBERT are: • Domain: Beltagy et al. (2019) report a pretraining corpus consisting of 82% biomedical and 18% computer science papers. Our S2ORC pretraining corpus consists of a more balanced distribution of papers across diverse academic disciplines (see Figure 2), such that biomedical (42.7%) and computer science (7.2%) papers only comprise half the corpus. • Preprocessing: S2ORC identifies figure captions, table text and captions, headers, footers, and footnotes. We exclude these from the pretraining corpus. We tokenize and sentencize the text using scispaCy (Neumann et al., 2019). We also use heuristic filters to remove ill-formed paragraphs (such as those containing too many symbols). • Size: The resulting S2ORC pretraining cor4975 pus contains 16.4B tokens, nearly five times larger than the corpus for SCIBERT. • Vocab: Following Beltagy et al. (2019), we construct a cased WordPiece (Wu et al., 2016) vocabulary of size 31k using 15% of the S2ORC pretraining corpus. The Jaccard index between the S2ORC-SCIBERT and SCIBERT vocabularies is 0.536. We follow a similar setup to Beltagy et al. (2019) for both pretraining and fine-tuning S2ORC-SCIBERT. Like SCIBERT, S2ORCSCIBERT is pretrained from scratch using the original BERT code15 and default BERT-Base configurations on a single TPU v3-8 for one week. Also like SCIBERT, S2ORC-SCIBERT is finetuned on all tasks by optimizing a cross entropy loss using Adam (Kingma and Ba, 2014), a linear learning rate decay with 10% warm-up, batch size of 32, and dropout of 0.1. We search over an equal-sized grid of hyperparameters as Beltagy et al. (2019). We fine-tune for 1 to 4 epochs with a maximum learning rate of 1e-5, 2e-5, 3e-5, or 5e-5. For each task, we select the optimal combination of these two hyperparameters using the development set and report the corresponding test set results. For details, we refer the reader to SCIBERT code,16 which we use for all experiments. The results in Table 5 show that S2ORCSCIBERT outperforms SCIBERT on many tasks despite including a large percentage of data outside of the biomedical and computer science domains. As the pretraining corpus for SCIBERT is not publicly-available, S2ORC can serve as a large pretraining corpus for evaluating and comparing pretraining approaches on academic text. We also release S2ORC-SCIBERT to serve as a baseline for research. 6 Applications of S2ORC S2ORC can be used for many NLP and analysis tasks over academic text. We give a summary of potential applications below. The combination of structured full text annotated with linked inline citations makes S2ORC well-suited for a variety of citation-related textbased tasks. Without any additional supervision, S2ORC can be used directly for both inline (He 15https://github.com/google-research/ bert 16https://github.com/allenai/scibert et al., 2010; Duma and Klein, 2014; Jeong et al., 2019) and document-level (Yu et al., 2012; Liu et al., 2015; Bhagavatula et al., 2018) citation recommendation. Among document-level recommenders, S2ORC is well-suited to the setting of Liu et al. (2015), who use inline citation contexts to filter document-level recommendations. Figure 3: Word2vec embeddings associated with 20k papers in six AI-related arXiv categories visualized using t-SNE (van der Maaten and Hinton, 2008). Example papers from two randomly selected sub-regions A and B are given in Table 7. Region A cs.LG “On Unifying Deep Generative Models” stat.ML “Learning Disentangled Representations with Semi-Supervised Deep Generative Models” cs.LG “Denoising Criterion for Variational AutoEncoding Framework” cs.CV “Variational methods for conditional multimodal deep learning” Region B cs.CL “TransA: An Adaptive Approach for Knowledge Graph Embedding” cs.AI “TorusE: Knowledge Graph Embedding on a Lie Group” cs.CV “Image-embodied Knowledge Representation Learning” stat.ML “Neural Embeddings of Graphs in Hyperbolic Space” Table 7: Sampled papers in clusters from t-SNE embedding space in Figure 3. Region A consists of papers related to deep generative models; region B consists of papers concerned with graph representation learning. Other tasks that leverage citation contexts in4976 clude classifying citation intent (Teufel et al., 2006; Jurgens et al., 2018; Cohan et al., 2019), identifying citation sentiment (Athar and Teufel, 2012), identifying meaningful citations (Valenzuela et al., 2015), extracting key phrases (Caragea et al., 2014), and citation context-based paper summarization (Teufel et al., 2006; Qazvinian and Radev, 2008; Cohan and Goharian, 2015; Mitrovi´c and M¨uller, 2015). The models in these papers require labeled citation contexts for training. S2ORC could potentially benefit task performance without additional annotation, for example, by pretraining language models on S2ORC citation contexts before fine-tuning to these tasks. Cohan et al. (2019) find that long citation contexts (beyond sentence boundary) are important for tasks like summarization; the wider citation contexts available in S2ORC could be used to augment existing datasets for document-level tasks. Citation contexts can also be used for the more general tasks of identifying similar papers (Kanakia et al., 2019; Eto, 2019; Haruna et al., 2018; Small, 1973) or bibliometric analysis (Ding et al., 2014; Trujillo and Long, 2018; Asatani et al., 2018). Towards these tasks, the citation contexts in S2ORC can provide insight into how and why papers are cited. We illustrate this by following Berger et al. (2016) in training a word2vec skip-gram model (Mikolov et al., 2013) using full text citation contexts in S2ORC, where each inline citation span is replaced with its linked paper identifier. When training over this modified text, the word2vec model learns embeddings corresponding to each unique paper identifier, which can be leveraged as paper embeddings. The resulting embeddings shown in Figure 3 and Table 7 form clusters corresponding closely to arXiv Machine Learning categories. Upon inspection, papers of different categories in the same embedding sub-region share research themes (see Table 7), indicating that these paper embeddings trained from citation contexts capture coherent topic similarity and relatedness. These paper embeddings can be used to identify similar papers, using the similarity between two papers’ citing contexts as a proxy for paper similarity. The LATEX subset of S2ORC also provides unique opportunities for research. In addition to citations and references, we also extract and parse tables from LATEX source into a structured format. There is an opportunity to use these tables for corpus-level results extraction and aggregation. The LATEX subset also has fine-grained extraction and labeling of mathematical formulas, which can be used to understand proof construction, or to assist in symbol co-reference resolution. 7 Related work The ACL Anthology Network (AAN) (Radev et al., 2009) is a bibliometric-enhanced corpus covering papers in the field of computational linguistics. It is built from the ACL Anthology (Bird et al., 2008) and consists of 24.6k papers manually augmented with citation information. The PubMed Central Open Access corpus is a large corpus of 2.6M papers in the biomedical domain with citations linked to PubMed identifiers.17 CiteSeerX (Giles et al., 1998), consists of papers collected primarily via web crawl, without integrating metadata provided by sources outside of the PDF. Although citation contexts are no longer available through CiteSeerX, the RefSeer dataset (Huang et al., 2015)18 is a dataset of short citation context snippets derived from 1.0M papers from CiteSeerX. More recently, Saier and F¨arber (2019) introduce a corpus built using 1.0M arXiv publications. They use LATEX source to extract text, citation spans and bibliography entries, which are linked to papers in the Microsoft Academic Graph. The citation context they provide are extracted snippets and no bibliography parses are provided. An updated version of this dataset (Saier and F¨arber, 2020) released concurrently with this work now includes full text. Compared with these resources, S2ORC represents a significantly larger dataset of linked papers covering broad domains of science by leveraging PDF parsing in addition to LATEX source. S2ORC also provides clean full text for text mining and NLP needs with additional enhancements such as annotations of table and figure references and captions. S2ORC’s wealth of metadata and structured text allows it to be flexibly adapted to a variety of downstream tasks. 8 Conclusion We introduce S2ORC, the largest publiclyavailable corpus of English-language academic papers covering dozens of academic disciplines. 17https://www.ncbi.nlm.nih.gov/pmc/ tools/openftlist/ 18https://psu.app.box.com/v/refseer 4977 S2ORC consists of 81.1M papers, 380.5M resolved citation links, and structured full text from 8.1M open-access PDFs and 1.5M LATEX source files. We aggregate metadata and abstracts from hundreds of trusted sources. Full text is augmented with sections, citation mentions, and references to tables and figures. We demonstrate that S2ORC can be used effectively for downstream NLP tasks in academic paper analysis. The pipeline for creating S2ORC was used to construct the CORD-19 corpus (Wang et al., 2020), which saw fervent adoption as the canonical resource for COVID-19 text mining. CORD-19 is aimed at assisting biomedical experts and policy makers process large amounts of COVID-19 literature in the search for effective treatments and management policies. With over 75K dataset downloads, dozens of search and question-answering systems, and hundreds of participating teams across two shared tasks19 in the first month of its release, there is little doubt of the resource’s impact. Our hope with the release of S2ORC is to ensure such text mining resources are available to researchers even beyond periods of global crisis. Acknowledgements This work was supported in part by ONR grant N00014-18-1-2193, and the University of Washington WRF/Cable Professorship. We thank Doug Downey, Oren Etzioni, Andrew Head, and Bryan Newbold for their valuable feedback on the manuscript. We also thank Isabel Cachola, Dallas Card, Mike D’Arcy, Suchin Gururangan, Daniel King, Rik Koncel-Kedziorski, Susan Liu, Kelvin Luu, Noah Smith, Gabi Stanovsky, and Dave Wadden for feedback on the dataset during early development. Finally, we thank the Semantic Scholar team for assisting with data access and system infrastructure. References Riaz Ahmad and Muhammad Tanvir Afzal. 2018. Cad: an algorithm for citation-anchors detection in research papers. Scientometrics, 117:1405–1423. Waleed Ammar, Dirk Groeneveld, Chandra Bhagavatula, Iz Beltagy, Miles Crawford, Doug Downey, Jason Dunkelberger, Ahmed Elgohary, Sergey Feldman, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, 19The Kaggle CORD-19 and TREC-COVID competitions. See Wang et al. (2020) for details. Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew Peters, Joanna Power, Sam Skjonsberg, Lucy Wang, Chris Wilhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the literature graph in semantic scholar. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 84–91, New Orleans - Louisiana. Association for Computational Linguistics. Kimitaka Asatani, Junichiro Mori, Masanao Ochi, and Ichiro Sakata. 2018. Detecting trends in academic research from a citation network using network representation learning. In PloS one. Awais Athar and Simone Teufel. 2012. Contextenhanced citation sentiment detection. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 597–601, Montr´eal, Canada. Association for Computational Linguistics. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615–3620, Hong Kong, China. Association for Computational Linguistics. Matthew Berger, Katherine McDonough, and Lee M Seversky. 2016. cite2vec: Citation-driven document exploration via word embeddings. IEEE transactions on visualization and computer graphics, 23(1):691–700. Chandra Bhagavatula, Sergey Feldman, Russell Power, and Waleed Ammar. 2018. Content-based citation recommendation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 238–251, New Orleans, Louisiana. Association for Computational Linguistics. Steven Bird, Robert Dale, Bonnie Dorr, Bryan Gibson, Mark Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir Radev, and Yee Fan Tan. 2008. The ACL anthology reference corpus: A reference dataset for bibliographic research in computational linguistics. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), Marrakech, Morocco. European Language Resources Association (ELRA). Cornelia Caragea, Florin Adrian Bulgarov, Andreea Godea, and Sujatha Das Gollapalli. 2014. Citationenhanced keyphrase extraction from research papers: A supervised approach. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1435– 1446, Doha, Qatar. Association for Computational Linguistics. 4978 Arman Cohan, Waleed Ammar, Madeleine van Zuylen, and Field Cady. 2019. Structural scaffolds for citation intent classification in scientific publications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3586–3596, Minneapolis, Minnesota. Association for Computational Linguistics. Arman Cohan and Nazli Goharian. 2015. Scientific article summarization using citation-context and article’s discourse structure. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 390–400, Lisbon, Portugal. Association for Computational Linguistics. Nigel Collier and Jin-Dong Kim. 2004. Introduction to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73– 78, Geneva, Switzerland. COLING. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Ying Ding, Guo Zhang, Tamy Chambers, Min Song, Xiaolong Wang, and Cheng xiang Zhai. 2014. Content-based citation analysis: The next generation of citation analysis. JASIST, 65:1820–1833. Rezarta Islamaj Do˘gan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for disease name recognition and concept normalization. Journal of biomedical informatics, 47:1–10. Daniel Duma and Ewan Klein. 2014. Citation resolution: A method for evaluating context-based citation recommendation systems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 358–363, Baltimore, Maryland. Association for Computational Linguistics. Masaki Eto. 2019. Extended co-citation search: Graph-based document retrieval on a co-citation network containing citation context information. Inf. Process. Manage., 56. C. L. Giles, K. D. Bollacker, and S. Lawrence. 1998. Citeseer: an automatic citation indexing system. In Proceedings of the ACM International Conference on Digital Libraries, pages 89–98. ACM. Proceedings of the 1998 3rd ACM Conference on Digital Libraries ; Conference date: 23-06-1998 Through 26-06-1998. Khalid Haruna, Maizatul Akmar Ismail, Abdullahi Baffa Bichi, Victor I. Chang, Sutrisna Wibawa, and Tutut Herawan. 2018. A citation-based recommender system for scholarly paper recommendation. In ICCSA. Qi He, Jian Pei, Daniel Kifer, Prasenjit Mitra, and Lee Giles. 2010. Context-aware citation recommendation. In Proceedings of the 19th International Conference on World Wide Web, WWW ’10, page 421–430, New York, NY, USA. Association for Computing Machinery. Wenyi Huang, Zhaohui Wu, Chen Liang, Prasenjit Mitra, and C. Lee Giles. 2015. A neural probabilistic model for context based citation recommendation. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI’15, page 2404–2410. AAAI Press. Chanwoo Jeong, Sion Jang, Hyuna Shin, Eunjeong Park, and Sungchul Choi. 2019. A context-aware citation recommendation model with bert and graph convolutional networks. arXiv. David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics, 6:391–406. Anshul Kanakia, Zhihong Shen, Darrin Eide, and Kuansan Wang. 2019. A scalable hybrid research paper recommender system for microsoft academic. In The World Wide Web Conference, WWW ’19, page 2893–2899, New York, NY, USA. Association for Computing Machinery. J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun’ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl 1):i180–i182. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Martin Krallinger, Obdulia Rabal, Saber Ahmad Akhondi, Mart´ın P´erez P´erez, J´es´us L´opez Santamar´ıa, Gael P´erez Rodr´ıguez, Georgios Tsatsaronis, Ander Intxaurrondo, Jos´e Antonio Baso L´opez, Umesh Nandal, Erin M. van Buel, A. Poorna Chandrasekhar, Marleen Rodenburg, Astrid Lægreid, Marius A. Doornenbal, Julen Oyarz´abal, An´alia Lourenc¸o, and Alfonso Valencia. 2017. Overview of the biocreative vi chemical-protein interaction track. In N/A. Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative V CDR task corpus: a resource for chemical disease relation extraction. Database, 2016. 4979 Haifeng Liu, Xiangjie Kong, Xiaomei Bai, Wei Wang, Teshome Megersa Bekele, and Feng Xia. 2015. Context-based collaborative filtering for citation recommendation. IEEE Access, 3:1695–1703. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke S. Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv. Patrice Lopez. 2009. Grobid: Combining automatic bibliographic data recognition and term extraction for scholarship publications. In Proceedings of the 13th European Conference on Research and Advanced Technology for Digital Libraries, ECDL’09, page 473–474, Berlin, Heidelberg. Springer-Verlag. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219–3232, Brussels, Belgium. Association for Computational Linguistics. Laurens van der Maaten and Geoffrey E. Hinton. 2008. Visualizing data using t-sne. In Journal of Machine Learning Research. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS’13, page 3111–3119, Red Hook, NY, USA. Curran Associates Inc. Sandra Mitrovi´c and Henning M¨uller. 2015. Summarizing citation contexts of scientific publications. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 154–165, Cham. Springer International Publishing. Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust models for biomedical natural language processing. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 319–327, Florence, Italy. Association for Computational Linguistics. Benjamin Nye, Junyi Jessy Li, Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova, and Byron Wallace. 2018. A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 197–207, Melbourne, Australia. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Brussels, Belgium. Association for Computational Linguistics. Vahed Qazvinian and Dragomir R. Radev. 2008. Scientific paper summarization using citation summary networks. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 689–696, Manchester, UK. Coling 2008 Organizing Committee. Dragomir R. Radev, Pradeep Muthukrishnan, and Vahed Qazvinian. 2009. The acl anthology network corpus. In Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries, NLPIR4DL ’09, page 54–61, USA. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. N/A. Tarek Saier and Michael F¨arber. 2019. Bibliometricenhanced arxiv: A data set for paper-based and citation-based tasks. In Proceedings of the 8th International Workshop on Bibliometric-enhanced Information Retrieval (BIR 2019) co-located with the 41st European Conference on Information Retrieval (ECIR 2019), Cologne, Germany, April 14, 2019, volume 2345 of CEUR Workshop Proceedings, pages 14–26. CEUR-WS.org. Tarek Saier and Michael F¨arber. 2020. unarxive: a large scholarly data set with publications’ full-text, annotated in-text citations, and links to metadata. Scientometrics. Zhihong Shen, Hao Ma, and Kuansan Wang. 2018. A web-scale system for scientific knowledge exploration. In Proceedings of ACL 2018, System Demonstrations, pages 87–92, Melbourne, Australia. Association for Computational Linguistics. Henry Small. 1973. Co-citation in the scientific literature: A new measure of the relationship between 4980 two documents. Journal of the American Society for Information Science, 24(4):265–269. Jie Tang, Jing Zhang, Limin Yao, Juanzi Li, Li Zhang, and Zhong Su. 2008. Arnetminer: Extraction and mining of academic social networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’08, page 990–998, New York, NY, USA. Association for Computing Machinery. Simone Teufel, Advaith Siddharthan, and Dan Tidhar. 2006. Automatic classification of citation function. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 103–110, Sydney, Australia. Association for Computational Linguistics. Dominika Tkaczyk, Andrew Collins, Paraic Sheridan, and Joeran Beel. 2018. Machine learning vs. rules and out-of-the-box vs. retrained: An evaluation of open-source bibliographic reference and citation parsers. In Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries, JCDL ’18, page 99–108, New York, NY, USA. Association for Computing Machinery. Caleb M. Trujillo and Tammy M. Long. 2018. Document co-citation analysis to enhance transdisciplinary research. Science Advances, 4(1). Marco Valenzuela, Vu Ha, and Oren Etzioni. 2015. Identifying meaningful citations. AAAI. Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, William Merrill, Paul Mooney, Dewey Murdick, Devvret Rishi, Jerry Sheehan, Zhihong Shen, Brandon Stilson, Alex D. Wade, Kuansan Wang, Chris Wilhelm, Boya Xie, Douglas Raymond, Daniel S. Weld, Oren Etzioni, and Sebastian Kohlmeier. 2020. CORD-19: The Covid-19 Open Research Dataset. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv, abs/1609.08144. Xiao Yu, Quanquan Gu, Mianwei Zhou, and Jiawei Han. 2012. Citation prediction in heterogeneous bibliographic networks. In SDM. 4981 A Background & Terminology In this work, we distinguish between bibliography entries and inline citations. A bibliography entry is an item in a paper’s bibliography that refers to another paper. It is represented in a structured format that can be used for paper-identifying features such as title, authors, year, and venue or journal, and for journal articles, the volume, issue, and pages. Also commonly represented are unique document identifiers such as the Document Object Identifier (DOI), arXiv identifier, or PubMed identifier. Common formats for bibliography entries are MLA, APA, Vancouver-, and Chicago- style, among others, which are different ways of representing these various features for document identification. There is often variation in the representation of certain fields. For example, Authors can include the first names of each author or only their first initials. In many academic disciplines, journal publications are the norm, whereas conference proceedings dominate in fields such as Computer Science; conference proceedings tend to lack journal-related features such as Volume, Issue, and Pages. Bibliography entry demarcation also varies between different formats. In some cases, each entry is preceded by a citation marker (e.g. “[1]” or “[ABC2019]”) that is used throughout the text of the paper to denote inline citations. An inline citation is a mention span within the paper’s abstract or body text that refers to one of the entries in its bibliography. “ABC (2019) present model 1, which outperforms model 2 (XYZ (2019)).” In this example, the narrative inline citation ABC (2019) appears as a noun phrase in the sentence while the parenthetical inline citation (XYZ, 2019) is inserted into the sentence as an aside. A sentence remains grammatically correct when parenthetical citations are removed. Other styles of parenthetical citations include, but are not limited to, BRACKET-style numbers (e.g. “[1, 35]”) and OTHER styles such as superscripts (e.g. “1,2”), both of which refer to numbered entries in the bibliography. Bibliography entries without numbered entries or citation markers are typically referenced inline using NAME-YEAR format as ABC (2019) or (XYZ, 2019) in the example above. Additionally, an inline reference is a span in a paper that refers to another part of the paper, for example, references to figures, tables, equations, proofs, sections, or appendices. These often take on the form of: “In Figure 3, we show the relationship between A and B.” where Figure 3 refers to a plot displayed on a separate page. These inline references can be important for understanding the relationship between text and objects within the paper. B PDF filters Prior to running GROBID, we filter out PDFs that (i) produce an error when processed using the Python library PyPDF2,20 (ii) have greater than 50 pages (more likely to be a dissertation or report), (iii) have page widths greater than page heights (more likely to be slides), and (iv) those which fail to be extracted using pdfalto, the variant of pdftoxml used by GROBID. Numbers of PDFs removed by these filters are given in Table 8. Filter Number of PDFs PyPDF2 error 0.54M Over 50 pages 2.27M Page width > height 0.28M PDFAlto error 0.21M Table 8: PDFs filtered out before GROBID processing C The paper clustering problem In academic fields in which preprint publishing is common (e.g. arXiv), the notion of a “paper” is somewhat ambiguous. For example, if a published paper differs from its arXiv preprint (as it often does), are the two documents considered separate papers for the purposes of citation? What about different arXiv preprint drafts tagged as different versions but under the same arXiv identifier? In this work, each “paper” of interest is actually a collection (or cluster) of highly-similar (but not necessarily identical) documents. These paper clusters, provided by Semantic Scholar, are constructed to reflect how authors tend to view their 20Used to determine PDF page number and page dimensions 4982 own papers; for example, most authors would consider their arXiv preprint and its associated published version to be the same “paper”. For practical concerns in constructing S2ORC, we further require that one document within the cluster be the canonical document used to represent the paper cluster. There are issues with defining a paper to be a collection of documents. For example, suppose a paper cluster contains both an arXiv preprint and a peer-reviewed draft. And suppose another paper cites the arXiv preprint critiquing content that has been updated in the peer-reviewed draft. If the peer-reviewed draft is chosen as the canonical representation of the paper cluster, then the citation context would not accurately capture the rationale of that reference. While worth noting, we believe such cases are rare and do not affect the vast majority of citation contexts. D S2ORC evaluation criteria Paper cluster quality For each paper cluster, we compare the selected canonical Title and Authors fields with the title and authors of the selected canonical PDF. The Title field is labeled correct if it exactly matches the title seen on the PDF, with some allowance for different capitalization and minor differences in special character representation (e.g. “γ” versus “gamma”) and ignoring whitespace. The Authors field is labeled correct if all authors on the PDF are presented in the correct order, with some allowance for variation in the surface form. This is to avoid penalizing publisher metadata for providing a first initial (instead of the first name) or omitting middle names or titles (e.g. “Dr.”, “PhD”). Paper-Bibliography linking For each paperbibliography pair, we compare the selected canonical Title and Authors fields in the structured bibliography entry to the selected canonical Title and Authors fields of the linked paper cluster. The Title fields are labeled as a match under the same criteria described above for matching paper cluster Title fields and PDF titles. The Authors fields are labeled as a match if there is substantial overlap in the names of the authors. For example, if authors A, B and C are in the bibliography entry and the linked paper cluster has authors A and B, then this is still considered a match. We note that in our evaluation, differences in the two sets of author names primarily stems from incorrectly written bibliography entries or mistakes in publisherprovided metadata. E Training corpus sizes for other language models Language model Training data ELMO (Peters et al., 2018a) 1BW (800M) Wikipedia (1.9B) WMT 2008-2012 (3.6B) BERT (Devlin et al., 2019) BooksCorpus (800M) Wikipedia (2.5B) ROBERTA (Liu et al., 2019b) BooksCorpus (800M) CC-News (~3.8B) OpenWebText (~1.9B) Stories (~1.6B) GPT2 (Radford et al., 2019) Web Text Corpus (~2.8B) Table 9: Reported and estimated (several papers report corpus size in terms of bytes) token counts of training data used to train language models. We estimate that all of S2ORC consists of approximately 25B tokens of full body text and 15B tokens of abstract text. As demonstrated for S2ORC-SCIBERT pretraining, aggressivelycleaned body text from the PDF-parsed subset of S2ORC still yields approximately 16.5B tokens. The size of S2ORC makes it more than sufficient for pretraining large language models such as ELMO, BERT, ROBERTA, GPT2, and others, whose reported training data sizes are given in Table 9 for comparison. Figure 4: Visualization of contextual representations from layer 9 of S2ORC-SCIBERT on numeric surface forms in a subsample of body text from S2ORC. Labels are heuristics based on token-level patterns. 4983 F Numeric representations in S2ORC-SCIBERT Academic papers contain substantially more diverse uses of numeric surface forms than typical web text, such as experimental results, equations, citation references and section/figure markers. To demonstrate this, we cluster contextual word representations involving numbers, heuristically labeling them into one of 8 categories based on surface patterns. Examining the progression of the contextual representations through the layers of BERT reveals an initial focus on sentence position (expected, due to explicit position embeddings) and magnitude, with later layers integrating substantial contextual information, such as the presence of inline LATEX identifiers, citation indicators and PDF references. Following Peters et al. (2018b); Liu et al. (2019a), we observe that the final 2-3 BERT layers provide embeddings that excel at predictive language modeling; as such, Figure 4 uses embeddings from layer 9 of S2ORCSCIBERT.
2020
447
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4984–4997 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4984 Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics Nitika Mathur Timothy Baldwin Trevor Cohn School of Computing and Information Systems The University of Melbourne Victoria 3010, Australia [email protected] {tbaldwin,tcohn}@unimelb.edu.au Abstract Automatic metrics are fundamental for the development and evaluation of machine translation systems. Judging whether, and to what extent, automatic metrics concur with the gold standard of human evaluation is not a straightforward problem. We show that current methods for judging metrics are highly sensitive to the translations used for assessment, particularly the presence of outliers, which often leads to falsely confident conclusions about a metric’s efficacy. Finally, we turn to pairwise system ranking, developing a method for thresholding performance improvement under an automatic metric against human judgements, which allows quantification of type I versus type II errors incurred, i.e., insignificant human differences in system quality that are accepted, and significant human differences that are rejected. Together, these findings suggest improvements to the protocols for metric evaluation and system performance evaluation in machine translation. 1 Introduction Automatic metrics are an indispensable part of machine translation (MT) evaluation, serving as a proxy to human evaluation which is considerably more expensive and time-consuming. They provide immediate feedback during MT system development and serve as the primary metric to report the quality of MT systems. Accordingly, the reliability of metrics is critical to progress in MT research. A particularly worrying finding was made in the most recent Conference on Machine Translation (WMT), as part of their annual competition findings to benchmark progress in translation and translation evaluation. WMT has established a method based on Pearson’s correlation coefficient for measuring how well automatic metrics match with human judgements of translation quality, which is used to rank metrics and to justify their widespread use in lieu of human evaluation. Their findings (Ma et al., 2019) showed that if the correlation is computed for metrics using a large cohort of translation systems, typically very high correlations were found between leading metrics and humans (as high as r = 0.9). However, if considering only the few best systems, the correlation reduced markedly. This is in contrast to findings at sentence-level evaluation, where metrics are better at distinguishing between high-quality translations compared to lowquality translations (Fomicheva and Specia, 2019). When considering only the four best systems, the automatic metrics were shown to exhibit negative correlations in some instances. It would appear that metrics can only be relied upon for making coarse distinctions between poor and good translation outputs, but not for assessing similar quality outputs, i.e., the most common application faced when assessing incremental empirical improvements. Overall these findings raise important questions as to the reliability of the accepted best-practises for ranking metrics, and more fundamentally, cast doubt over these metrics’ utility for tuning highquality systems, and making architecture choices or publication decisions for empirical research. In this paper, we take a closer look into this problem, using the metrics data from recent years of WMT to answer the following questions: 1. Are the above problems identified with Pearson’s correlation evident in other settings besides small collections of strong MT systems? To test this we consider a range of system quality levels, including random samples of systems, and show that the problem is widely apparent. 2. What is the effect of outlier systems in the reported correlations? Systems that are considerably worse than all others can have a dispro4985 portionate effect on the computed correlation, despite offering very little insight into the evaluation problem. We identify a robust method for identifying outliers, and demonstrate their effect on correlation, which for some metrics can result in radically different conclusions about their utility. 3. Given these questions about metrics’ utility, can they be relied upon for comparing two systems? More concretely, we seek to quantify the extent of improvement required under an automatic metric such that the ranking reliably reflects human assessment. In doing so, we consider both type I and II errors, which correspond to accepting negative or insignificant differences as judged by humans, versus rejecting human significant differences; both types of errors have the potential to stunt progress in the field. Overall we find that current metric evaluation methodology can lend false confidence to the utility of a metric, and that leading metrics require either untenably large improvements to serve a gatekeeping role, or overly permissive usage to ensure good ideas are not rejected out of hand. Perhaps unsurprisingly, we conclude that metrics are inadequate as a substitute for human evaluations in MT research. 1 2 Related work Since 2007, the Conference on Machine Translation (WMT) has organized an annual shared task on automatic metrics, where metrics are evaluated based on correlation with human judgements over a range of MT systems that were submitted to the translation task. Methods for both human evaluation and meta evaluation of metrics have evolved over the years. In early iterations, the official evaluation measure was the Spearman’s rank correlation of metric scores with human scores (Callison-Burch and Osborne, 2006). However, many MT system pairs have very small score differences, and evaluating with Spearman’s correlation harshly penalises metrics that have a different ordering for these systems. This was replaced by the Pearson correlation in 2014 (Bojar et al., 2014). To test whether the difference in the performance of two metrics is statis1Code, data and additional analysis available at https://github.com/nitikam/tangled tically significant, the William’s test for dependent correlations is used (Graham and Baldwin, 2014), which takes into account the correlation between the two metrics. Metrics that are not outperformed by any other metric are declared as the winners for that language pair. Pearson’s r is highly sensitive to outliers (Osborne and Overbay, 2004): even a single outlier can have a drastic impact on the value of the correlation coefficient; and in the extreme case, outliers can give the illusion of a strong correlation when there is none, or mask the presence of a true relationship. More generally, very different underlying relationships between the two variables can have the same value of the correlation coefficient (Anscombe, 1973).2 The correlation of metrics with human scores is highly dependent on the underlying systems used. BLEU (Papineni et al., 2002a) has remained mostly unchanged since it was proposed in 2002, but its correlation with human scores has changed each year over ten years of evaluation (2006 to 2016) on the English–German and German–English language pairs at WMT (Reiter, 2018). The low correlation for most of 2006–2012 is possibly due to the presence of strong rule-based systems that tend to receive low BLEU scores (Callison-Burch and Osborne, 2006). By 2016, however, there were only a few submissions of rule-based systems, and these were mostly outperformed by statistical systems according to human judgements (Bojar et al., 2016). The majority of the systems in the last three years have been neural models, for which most metrics have a high correlation with human judgements. BLEU has been surpassed by various other metrics at every iteration of the WMT metrics shared task. Despite this, and extensive analytical evidence of the limitations of BLEU in particular and automatic metrics in general (Stent et al., 2005; Callison-Burch and Osborne, 2006; Smith et al., 2016), the metric remains the de facto standard of evaluating research hypotheses. 2https://janhove.github.io/teaching/ 2016/11/21/what-correlations-look-like contains examples that clearly illustrate the extent of this phenomenon 4986 3 Data 3.1 Direct Assessment (DA) Following Ma et al. (2019), we use direct assessment (DA) scores (Graham et al., 2017) collected as part of the human evaluation at WMT 2019. Annotators are asked to rate the adequacy of a set of translations compared to the corresponding source/reference sentence on a slider which maps to a continuous scale between 0 and 100. Bad quality annotations are filtered out based on quality control items included in the annotation task. Each annotator’s scores are standardised to account for different scales. The score of an MT system is computed as the mean of the standardised score of all its translations. In WMT 19, typically around 1500–2500 annotations were collected per system for language pairs where annotator availability was not a problem. To assess whether the difference in scores between two systems is not just chance, the Wilcoxon rank-sum test is used to test for statistical significance. 3.2 Metrics Automatic metrics compute the quality of an MT output (or set of translations) by comparing it with a reference translation by a human translator. For the WMT 19 metrics task, participants were also invited to submit metrics that rely on the source instead of the reference (QE . In this paper, we focus on the following metrics that were included in evaluation at the metrics task at WMT 2019: Baseline metrics • BLEU (Papineni et al., 2002b) is the precision of n-grams of the MT output compared to the reference, weighted by a brevity penalty to punish overly short translations. BLEU has high variance across different hyper-parameters and pre-processing strategies, in response to which sacreBLEU (Post, 2018) was introduced to create a standard implementation for all researchers to use; we use this version in our analysis. • TER (Snover et al., 2006) measures the number of edits (insertions, deletions, shifts and substitutions) required to transform the MT output to the reference. • CHRF (Popovi´c, 2015) uses character n-grams instead of word n-grams to compare the MT output with the reference. This helps with matching morphological variants of words. Best metrics across language pairs • YISI-1 (Lo, 2019) computes the semantic similarity of phrases in the MT output with the reference, using contextual word embeddings (BERT: Devlin et al. (2019)). • ESIM (Chen et al., 2017; Mathur et al., 2019) is a trained neural model that first computes sentence representations from BERT embeddings, then computes the similarity between the two strings. 3 Source-based metric • YISI-2 (Lo, 2019) is the same as YISI-1, except that it uses cross-lingual embeddings to compute the similarity of the MT output with the source. The baseline metrics, particularly BLEU, were designed to use multiple references. However, in practice, they have only have been used with a single reference in recent years. 4 Re-examining conclusions of Metrics Task 2019 4.1 Are metrics unreliable when evaluating high-quality MT systems? In general, the correlation of reference-based metrics with human scores is greater than r = 0.8 for all language pairs. However, the correlation is dependent on the systems that are being evaluated, and as the quality of MT increases, we want to be sure that the metrics evaluating these systems stay reliable. To estimate the validity of the metrics for highquality MT systems, Ma et al. (2019) sorted the systems based on their Direct Assessment scores, and plotted the correlation of the top N systems, with N ranging from all systems to the best four systems. They found that for seven out of 18 language pairs, the correlation between metric and human scores decreases as we decrease N, and tends towards zero or even negative when N = 4. There are four language pairs (German–English, English–German, English–Russian, and English– Chinese) where the quality of the best MT systems is close to human performance (Barrault et al., 2019). If metrics are unreliable for strong MT systems, we would expect to see a sharp degradation in correlation for these language pairs. But as 3ESIM’s submission to WMT shared task does not include scores for the language pairs en-cs and en-gu. In this paper, we use scores obtained from the same trained model that was used in the original submission. 4987 (a) German–English 16 14 12 10 8 6 4 −1.0 −0.5 0.0 0.5 1.0 Correlation BLEU 16 14 12 10 8 6 4 chrF 16 14 12 10 8 6 4 ESIM 16 14 12 10 8 6 4 YiSi-1 16 14 12 10 8 6 4 YiSi-2 top-N 16 14 12 10 8 6 4 −1.0 −0.5 0.0 0.5 1.0 Correlation 16 14 12 10 8 6 4 16 14 12 10 8 6 4 16 14 12 10 8 6 4 16 14 12 10 8 6 4 N 4 8 (b) English–German 22 18 14 10 6 4 −1.0 −0.5 0.0 0.5 1.0 Correlation BLEU 22 18 14 10 6 4 chrF 22 18 14 10 6 4 ESIM 22 18 14 10 6 4 YiSi-1 22 18 14 10 6 4 YiSi-2 top-N 22 18 14 10 6 4 −1.0 −0.5 0.0 0.5 1.0 Correlation 22 18 14 10 6 4 22 18 14 10 6 4 22 18 14 10 6 4 22 18 14 10 6 4 N 4 8 Figure 1: Pearson correlation coefficient computed over the top-N systems (top row), or over a rolling window of 4 or 8 systems (bottom row). The x axis shows the index of the starting system, and systems are sorted by DA quality score. we look at the top N systems, the correlation decreases for German–English and English–German, stays the same for English–Russian, and actually increases for English–Chinese. On the other hand, we observe this phenomenon with English–Kazakh, where the top systems are far from the quality of human translation. Is there another explanation for these results? Pearson’s r between metrics and DA scores is unstable for small samples, particularly when the systems are very close in terms of quality. The low correlation over top-N systems (when N is small) could be an artefact of this instability. To understand this effect, we instead visualise the correlation of a rolling window of systems, starting with the worst N systems, and moving forward by one system until we reach the top N systems. The number of systems stays constant for all points in these graphs, which makes for a more valid comparison than the original setting where the sample size varies. If the metrics are indeed less reliable for strong systems, we should see the same pattern as with the top N systems. For the German–English language pair (Figure 1 b), the correlation of most metrics is very unstable when N = 4. Both BLEU and CHRF perfectly correlate with human scores for systems ranked 2–5, which then drops to −1 for the top 4 systems. On the other hand, ESIM exhibits the opposite behaviour, even though it shows an upward trend when looking at the top-N systems. Even worse, for English–German, YISI-2 obtains a perfect correlation at some values of N, when in fact its correlation with human scores is negligible once outliers are removed (Section 4.2). We observe similar behaviour across all lan4988 guage pairs: the correlation is more stable as N increases, but there is no consistent trend in the correlation that depends on the quality of the systems in the sample. If we are to trust Pearson’s r at small sample sizes, then the reliability of metrics doesn’t really depend on the quality of the MT systems. Given that the sample size is small to begin with (typically 10–15 MT systems per language pair), we believe that we do not have enough data to use this method to assess whether metric reliability decreases with the quality of MT systems. A possible explanation for the low correlation of subsets of MT systems is that it depends on how close these systems are in terms of quality. In the extreme case, the difference between the DA scores of all the systems in the subset can be statistically insignificant, so metric correlation over these systems can be attributed to chance. 4.2 How do outliers affect the correlation of MT evaluation metrics? An outlier is defined as “an observation (or subset of observations) which appears to be inconsistent with the remainder of the dataset” (Barnett and Lewis, 1974). Pearson’s r is particularly sensitive to outliers in the observations. When there are systems that are generally much worse (or much better) than the rest of the systems, metrics are usually able to correctly assign low (or high) scores to these systems. In this case, the Pearson correlation can over-estimate metric reliability, irrespective of the relationship between human and metric scores of other systems. Based on a visual inspection, we can see there are two outlier systems in the English–German language pair. To illustrate the influence of these systems on Pearson’s r, we repeatedly subsample ten systems from the 22 system submissions (see Figure 2). When the most extreme outlier (en-de-task) is present in the sample, the correlation of all metrics is greater than 0.97. The selection of systems has a higher influence on the correlation when neither outlier is present, and we can see that YISI-1 and ESIM usually correlate much higher than BLEU. One method of dealing with outliers is to calculate the correlation of the rest of the points (called the skipped correlation: Wilcox (2004)). Most of these apply methods to detect multivariate outliers in the joint distribution of the two variables: the 0.4 0.6 0.8 1.0 Correlation BLEU TER CHRF YISI-1 ESIM Metric English-German en-de-task online-X neither Figure 2: Pearson’s r for metrics, when subsampling systems from the English–German language pair. We group the samples in the presence of the two outliers (“en-de-task” and “Online-X”), and when neither is present. metric and human scores in our case. However, multivariate outliers could be system pairs that indicate metric errors, and should not be removed because they provide important data about the metric. Thus, we only look towards detecting univariate outliers based on human ratings. One common method is to simply standardise the scores, and remove systems with scores that are too high or too low. However, standardising depends on the mean and standard deviation, which are themselves affected by outliers. Instead, we use the median and the Median Absolute Deviation (MAD) which are more robust (Iglewicz and Hoaglin, 1993; Rousseeuw and Hubert, 2011; Leys et al., 2013). For MT systems with human scores s, we use the following steps to detect outlier systems: 1. Compute MAD, which is the median of all absolute deviations from the median MAD = 1.483 × median(|s −median(s)|) 2. compute robust scores: z = (s −median(s))/MAD 3. discard systems where the magnitude of z exceeds a cutoff (we use 2.5) Tables 1 and 2 show Pearson’s r with and without outliers for the language pairs that contain outliers. Some interesting observations, are as follows: 4989 0.21 0.27 0.33 0.39 0.45 Metric Score −3 −2 −1 0 1 2 Human score BLEU (r = 0.97/0.81) 0.79 0.82 0.85 0.88 0.91 Metric Score YISI-1 (r = 0.99/0.92) 0.74 0.76 0.78 0.80 0.82 Metric Score YISI-2 (r = 0.92/ −0.01) Outlier Yes No (a) English–German 0.10 0.15 0.20 0.25 Metric Score −0.6 −0.4 −0.2 0.0 0.2 0.4 Human score BLEU (r = 0.83/0.97) 0.55 0.60 0.65 0.70 0.75 Metric Score YISI-1 (r = 0.92/1) 35 40 45 50 Metric Score CHRF (r = 0.95/0.96) Outlier Yes No (b) Gujarati–English Figure 3: Scatter plots (and Pearson’s r) for metrics with and without outliers de–en gu–en kk–en lt–en ru–en zh–en All −out All −out All −out All −out All −out All −out #sys 16 15 11 10 11 9 11 10 14 13 15 13 BLEU 0.81 0.79 0.83 0.97 0.95 0.91 0.96 0.97 0.87 0.81 0.90 0.81 TER 0.87 0.81 0.89 0.95 0.80 0.57 0.96 0.98 0.92 0.90 0.84 0.72 chrF 0.92 0.86 0.95 0.96 0.98 0.77 0.94 0.93 0.94 0.88 0.96 0.84 ESIM 0.94 0.90 0.88 0.99 0.99 0.95 0.99 0.99 0.97 0.95 0.99 0.96 YiSi-1 0.95 0.91 0.92 1.00 0.99 0.92 0.98 0.98 0.98 0.95 0.98 0.90 YiSi-2 0.80 0.61 −0.57 0.82 −0.32 0.66 0.44 0.35 −0.34 0.71 0.94 0.62 Table 1: Correlation of metrics with and without outliers (“All” and “−out”, resp.) for the to-English language pairs that contain outlier systems de–cs en–de en–fi en–kk en–ru fr–de All −out All −out All −out All −out All −out All −out #sys 11 10 22 20 12 11 11 9 12 11 10 7 BLEU 0.87 0.74 0.97 0.81 0.97 0.94 0.85 0.58 0.98 0.95 0.87 0.85 TER 0.89 0.79 0.97 0.84 0.98 0.96 0.94 0.55 0.99 0.98 0.89 0.67 chrF 0.97 0.97 0.98 0.88 0.99 0.97 0.97 0.90 0.94 0.97 0.86 0.80 ESIM 0.98 0.99 0.99 0.93 0.96 0.93 0.98 0.90 0.99 0.99 0.94 0.83 YiSi-1 0.97 0.98 0.99 0.92 0.97 0.94 0.99 0.89 0.99 0.98 0.91 0.85 YiSi-2 0.61 0.12 0.92 −0.01 0.70 0.48 0.34 0.69 −0.77 0.13 −0.53 0.07 Table 2: Correlation of metrics with and without outliers (“All” and “−out”, resp.) for the language pairs into languages other than English that contain outlier systems. 4990 • for language pairs like Lithuanian–English and English–Finnish, the correlation between the reference based metrics and DA is high irrespective of the presence of the outlier; • the correlation of BLEU with DA drops sharply from 0.85 to 0.58 for English–Kazakh when outliers are removed; • for English–German, the correlation of BLEU and TER appears to be almost as high as that of YISI-1 and ESIM. However, when we remove the two outliers, there is a much wider gap between the metrics. • if metrics wrongly assign a higher score to an outlier (e.g. most metrics in Gujarat–English), removing these systems increases correlation, and reporting only the skipped correlation is not ideal. To illustrate the severity of the problem, we show examples from the metrics task data where outliers present the illusion of high correlation when the metric scores are actually independent of the human scores without the outlier. For English– German, the source-based metric YISI-2 correctly assigns a low score to the outlier en-de-task. When this system is removed, the correlation is near zero. At the other extreme, YISI-2 incorrectly assigns a very high score to a low-quality outlier in the English–Russian language pair, resulting in a strongly negative correlation. When we remove this system, we find there is no association between metric and human scores. The results for all metrics that participated in the WMT 19 metrics task are presented in Tables 3, 4 and 5 in the appendix. 5 Beyond correlation: metric decisions for system pairs In practice, researchers use metric scores to compare pairs of MT systems, for instance when claiming a new state of the art, evaluating different model architectures, or even in deciding whether to publish. Basing these judgements on metric score alone runs the risk of making wrong decisions with respect to the true gold standard of human judgements. That is, while a change may result in a significant improvement in BLEU, this may not be judged to be an improvement by human assessors. Thus, we examine whether metrics agree with DA on all the MT systems pairs across all languages used in WMT 19. Following Graham et al. (2014), we use statisti−0.5 0.0 0.5 1.0 1.5 2.0 NS 0.0-1.5 1.5-3.0 3.0-5.0 5.0-7.5 7.5-12.0 12.0-100.0 Metric Dierence BLEU Worse NS Better −0.5 0.0 0.5 1.0 1.5 2.0 NS 0.0-1.5 1.5-3.0 3.0-5.0 5.0-7.5 7.5-12.0 12.0-100.0 Metric Dierence TER Worse NS Better −0.5 0.0 0.5 1.0 1.5 2.0 NS 0.0-1.0 1.0-2.0 2.0-4.0 4.0-6.0 6.0-10.0 10.0-100.0 Metric Dierence chrF Worse NS Better 0.0 0.5 1.0 1.5 2.0 NS 0.0-0.5 0.5-1.0 1.0-2.0 2.0-3.5 3.5-6.0 6.0-100.0 Metric Dierence YiSi-1 Worse NS Better −0.5 0.0 0.5 1.0 1.5 2.0 DA Dierence NS 0.0-2.5 2.5-5.0 5.0-7.5 7.5-15.0 15.0-30.0 30.0-200.0 Metric Dierence ESIM Worse NS Better Figure 4: Pairwise differences in human DA evaluation (x-axis) compared to difference in metric evaluation (binned on y-axis; NS means insignificant metric difference). The colours indicate pairs judged by humans to be insignificantly different (cyan/light gray), significantly worse (red/dark gray on the left) and significantly better (green/dark gray on the right). 4991 cal significance tests to detect if the difference in scores (human or metric) between two systems (S1 and S2) can just be attributed to chance. For human scores, we apply the Wilcoxon ranksum test which is used by WMT when ranking systems. We use the bootstrap method (Koehn, 2004) to test for statistical significance of the difference in BLEU between two systems. YISI-1 and ESIM compute the system score as the average of sentence scores, so we use the paired t-test to compute significance. Although CHRF is technically the macro-average of n-gram statistics over the entire test set, we treat this as a micro-average when computing significance such that we can use the more powerful paired t-test over sentence scores. Figure 4 visualises the agreement between metric score differences and differences in human DA scores. Ideally, only differences judged as truly significant would give rise to significant and large magnitude differences under the metrics; and when metrics judge differences to be insignificant, ideally very few instances would be truly significant. However, this is not the case: there are substantial numbers of insignificant differences even for very high metric differences (cyan, for higher range bins); moreover, the “NS” category — denoting an insignificant difference in metric score — includes many human significant pairs (red and green, top bin). Considering BLEU (top plot in Figure 2), for insignificant BLEU differences, humans judge one system to be better than the other for half of these system pairs. This corresponds to a Type I error. It is of concern that BLEU cannot detect these differences. Worse, the difference in human scores has a very wide range. Conversely, when the BLEU score is significant but in the range 0–3, more than half of these systems are judged to be insignificantly different in quality (corresponding to a Type II error). For higher BLEU deltas, these errors diminish, however, even for a BLEU difference between 3 and 5 points, about a quarter of these system pairs are of similar quality. This paints a dour picture for the utility of BLEU as a tool for gatekeeping (i.e., to define a ‘minimum publishable unit’ in deciding paper acceptance on empirical grounds, through bounding the risk of Type II errors), as the unit would need to be whoppingly large to ensure only meaningful improvements are accepted. Were we seek to minimise Type I errors in the interests of nurturing good ideas, the threshBLEU TER chrF YiSi-1 ESIM BLEU TER chrF YiSi-1 ESIM 326 59 88 99 127 68 335 89 92 125 72 64 310 63 97 63 47 43 290 74 82 71 68 65 281 0 50 100 150 200 250 Figure 5: The agreement between metric errors over all 1362 system comparisons. The values in the diagonal indicate the total number of Type 1 and Type 2 errors for the metric. The off-diagonal cells show the total number of errors made by the row-metric where the column-metric is correct. old would need to be so low as to be meaningless, effectively below the level required for acceptance of the bootstrap significance test. The systems evaluated consist of a mix of systems submitted by researchers (mostly neural models) and anonymous online systems (where the MT system type is unknown). Even when we restrict the set of systems to only neural models submitted by researchers, the patterns of Type 1 and Type 2 errors remain the same (figure omitted for space reasons). TER makes similar errors: TER scores can wrongly show that a system is much better than another when humans have judged them similar, or even worse, drawn the opposite conclusion. CHRF, YISI-1 and ESIM have fewer errors compared to BLEU and TER. When these metrics mistakenly fail to detect a difference between systems, the human score difference is considerably lower than for BLEU. Accordingly, they should be used in place of BLEU. However the above argument is likely to still hold true as to their utility for gatekeeping or nurturing progress, in that the thresholds would still be particularly punitive or permissive, for the two roles, respectively. Finally, Figure 5 looks at agreement between metric decisions when comparing MT systems. As expected, when BLEU or TER disagree with CHRF, ESIM, or YISI-1, the former are more likely to be wrong. BLEU and TER have an 80% overlap in errors. The decisions of ESIM, a trained 4992 neural model, diverge a little more from the other metrics. Overall, despite the variety of approaches towards the task, all five metrics have common biases: over half of all erroneous decisions made by a particular metric are made in common with all other metrics. 6 Conclusion In this paper, we revisited the findings of the metrics task at WMT 2019, which flagged potential problems in the current best practises for assessment of evaluation metrics. Pearson’s correlation coefficient is known to be unstable for small sample sizes, particularly when the systems in consideration are very close in quality. This goes some way to explaining the findings whereby strong correlations between metric scores and human judgements evaporate when considering small numbers of strong systems. We show that the same can be true for any small set of similar quality systems, not just the top systems. This effect can partly be attributed to noise due to the small sample size, rather than true shortcomings in the metrics themselves. We need better methods to empirically test whether our metrics are less reliable when evaluating high quality MT systems. A more serious problem, however, is outlier systems, i.e. those systems whose quality is much higher or lower than the rest of the systems. We found that such systems can have a disproportionate effect on the computed correlation of metrics. The resulting high values of correlation can then lead to to false confidence in the reliability of metrics. Once the outliers are removed, the gap between correlation of BLEU and other metrics (e.g. CHRF, YISI-1 and ESIM) becomes wider. In the worst case scenario, outliers introduce a high correlation when there is no association between metric and human scores for the rest of the systems. Thus, future evaluations should also measure correlations after removing outlier systems. Finally, the same value of correlation coefficient can describe different patterns of errors. Any single number is not adequate to describe the data, and visualising metric scores against human scores is the best way to gain insights into metric reliability. This could be done with scatter plots (e.g. Figure 3a) for each language pair, or Figure 5, which compresses this information into one graph. Metrics are commonly used to compare two systems, and accordingly we have also investigated the real meaning encoded by a difference in metric score, in terms of what this indicates about human judgements of the two systems. Most published work report BLEU differences of 1-2 points, however at this level we show this magnitude of difference only corresponds to true improvements in quality as judged by humans about half the time. Although our analysis assumes the Direct Assessment human evaluation method to be a gold standard despite its shortcomings, our analysis does suggest that the current rule of thumb for publishing empirical improvements based on small BLEU differences has little meaning. Overall, this paper adds to the case for retiring BLEU as the de facto standard metric, and instead using other metrics such as CHRF, YISI-1, or ESIM in its place. They are more powerful in assessing empirical improvements. However, human evaluation must always be the gold standard, and for continuing improvement in translation, to establish significant improvements over prior work, all automatic metrics make for inadequate substitutes. To summarise, our key recommendations are: • When evaluating metrics, use the technique outlined in Section 4.2 to remove outliers before computing Pearson’s r. • When evaluating MT systems, stop using BLEU or TER for evaluation of MT, and instead use CHRF, YISI-1, or ESIM; • Stop using small changes in evaluation metrics as the sole basis to draw important empirical conclusions, and make sure these are supported by manual evaluation. Acknowledgements We are grateful to the anonymous reviewers for their comments and valuable suggestions. This work was supported in part by the Australian Research Council. References Francis J Anscombe. 1973. Graphs in statistical analysis. The American Statistician, 27(1):17–21. Vic Barnett and Toby Lewis. 1974. Outliers in Statistical Data. Wiley. Loïc Barrault, Ondˇrej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. 4993 Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. Ondˇrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, Maryland, USA. Association for Computational Linguistics. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131–198, Berlin, Germany. Association for Computational Linguistics. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of Bleu in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668, Vancouver, Canada. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, USA. Marina Fomicheva and Lucia Specia. 2019. Taking MT evaluation metrics to extremes: Beyond correlation with human judgments. Computational Linguistics, 45(3):515–558. Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 172–176, Doha, Qatar. Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation systems be evaluated by the crowd alone? Natural Language Engineering, 23(1):3–30. Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2014. Randomized significance tests in machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 266–274, Baltimore, Maryland, USA. Association for Computational Linguistics. Boris Iglewicz and David Caster Hoaglin. 1993. How to detect and handle outliers, volume 16. Asq Press. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Christophe Leys, Christophe Ley, Olivier Klein, Philippe Bernard, and Laurent Licata. 2013. Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4):764–766. Chi-kiu Lo. 2019. YiSi — a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 507–513, Florence, Italy. Qingsong Ma, Johnny Wei, Ondˇrej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62–90, Florence, Italy. Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2019. Putting evaluation in context: Contextual embeddings improve machine translation evaluation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2799–2808, Florence, Italy. Jason W Osborne and Amy Overbay. 2004. The power of outliers (and why researchers should always check for them). Practical Assessment, Research & Evaluation, 9(6):1–12. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002a. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, USA. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002b. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 4994 the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, USA. Maja Popovi´c. 2015. chrF: character n-gram f-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Ehud Reiter. 2018. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393– 401. Peter J Rousseeuw and Mia Hubert. 2011. Robust statistics for outlier detection. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):73–79. Aaron Smith, Christian Hardmeier, and Joerg Tiedemann. 2016. Climbing mont BLEU: The strange world of reachable high-BLEU translations. In Proceedings of the 19th Annual Conference of the European Association for Machine Translation, pages 269–281. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Transaltion in the Americas, pages 223–231. Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Computational Linguistics and Intelligent Text Processing, pages 341– 351, Berlin, Heidelberg. Springer Berlin Heidelberg. Rand Wilcox. 2004. Inferences based on a skipped correlation coefficient. Journal of Applied Statistics, 31(2):131–143. 4995 A The effect of removing outlier systems on the results of the WMT 19 metrics task de–cs de–fr fr–de All −out All All −out n 11 10 11 10 7 BEER 0.978 0.976 0.941 0.848 0.794 BLEU 0.941 0.922 0.891 0.864 0.821 CDER 0.864 0.734 0.949 0.852 0.794 CHARACTER 0.965 0.959 0.928 0.849 0.848 CHRF 0.974 0.970 0.931 0.864 0.796 CHRF+ 0.972 0.967 0.936 0.848 0.785 EED 0.982 0.984 0.940 0.851 0.792 ESIM 0.980 0.986 0.950 0.942 0.825 HLEPORA_BASELINE 0.941 0.903 0.814 − − HLEPORB_BASELINE 0.959 0.951 0.814 − − NIST 0.954 0.944 0.916 0.862 0.800 PER 0.875 0.757 0.857 0.899 0.427 SACREBLE-BLEU 0.869 0.742 0.891 0.869 0.846 SACREBLE-CHRF 0.975 0.980 0.952 0.882 0.815 TER 0.890 0.787 0.956 0.895 0.673 WER 0.872 0.749 0.956 0.894 0.657 YISI-0 0.978 0.972 0.952 0.820 0.836 YISI-1 0.973 0.980 0.969 0.908 0.846 YISI-1_SRL − − − 0.912 0.814 Source-based metrics: IBM1-MORPHEME 0.355 0.009 0.509 0.625 0.357 IBM1-POS4GRAM − − 0.085 0.478 0.719 YISI-2 0.606 0.122 0.721 0.530 0.066 Table 3: Pearson correlation of metrics for the language pairs that do not involve English. For language pairs that contain outlier systems, we also show correlation after removing outlier systems. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. 4996 de–en fi–en gu–en kk–en lt–en ru–en zh–en All −out All All −out All −out All −out All −out All −out n 16 15 12 11 10 11 9 11 10 14 13 15 13 BEER 0.906 0.852 0.993 0.952 0.982 0.986 0.930 0.947 0.948 0.915 0.819 0.942 0.806 BERTR 0.926 0.897 0.984 0.938 0.995 0.990 0.829 0.948 0.959 0.971 0.933 0.974 0.911 BLEU 0.849 0.770 0.982 0.834 0.975 0.946 0.912 0.961 0.980 0.879 0.830 0.899 0.807 CDER 0.890 0.827 0.988 0.876 0.975 0.967 0.843 0.975 0.981 0.892 0.875 0.917 0.847 CHARACTER 0.898 0.852 0.990 0.922 0.978 0.953 0.833 0.955 0.963 0.923 0.828 0.943 0.845 CHRF 0.917 0.862 0.992 0.955 0.962 0.978 0.775 0.940 0.933 0.945 0.876 0.956 0.841 CHRF+ 0.916 0.860 0.992 0.947 0.961 0.976 0.769 0.940 0.934 0.945 0.878 0.956 0.851 EED 0.903 0.853 0.994 0.976 0.988 0.980 0.779 0.929 0.930 0.950 0.872 0.949 0.840 ESIM 0.941 0.896 0.971 0.885 0.986 0.986 0.945 0.989 0.990 0.968 0.946 0.988 0.961 HLEPORA_BASELINE − − − 0.975 0.855 − − 0.947 0.879 HLEPORB_BASELINE − − − 0.975 0.855 0.906 0.930 − 0.947 0.879 METEOR++_2.0(SYNTAX) 0.887 0.844 0.995 0.909 0.939 0.974 0.859 0.928 0.935 0.950 0.878 0.948 0.836 METEOR++_2.0(SYNTAX+COPY) 0.896 0.850 0.995 0.900 0.930 0.971 0.871 0.927 0.931 0.952 0.890 0.952 0.841 NIST 0.813 0.705 0.986 0.930 0.985 0.942 0.837 0.944 0.963 0.925 0.878 0.921 0.722 PER 0.883 0.808 0.991 0.910 0.948 0.737 0.533 0.947 0.933 0.922 0.880 0.952 0.884 PREP 0.575 0.452 0.614 0.773 0.967 0.776 0.817 0.494 0.397 0.782 0.685 0.592 0.111 SACREBLE-BLEU 0.813 0.794 0.985 0.834 0.975 0.946 0.912 0.955 0.967 0.873 0.813 0.903 0.807 SACREBLE-CHRF 0.910 0.852 0.990 0.952 0.937 0.969 0.750 0.935 0.923 0.919 0.874 0.955 0.846 TER 0.874 0.812 0.984 0.890 0.947 0.799 0.566 0.960 0.975 0.917 0.896 0.840 0.717 WER 0.863 0.803 0.983 0.861 0.926 0.793 0.579 0.961 0.981 0.911 0.885 0.820 0.716 WMDO 0.872 0.857 0.987 0.983 0.981 0.998 0.953 0.900 0.923 0.942 0.844 0.943 0.851 YISI-0 0.902 0.847 0.993 0.993 0.990 0.991 0.876 0.927 0.933 0.958 0.889 0.937 0.782 YISI-1 0.949 0.914 0.989 0.924 0.997 0.994 0.920 0.981 0.978 0.979 0.947 0.979 0.899 YISI-1_SRL 0.950 0.916 0.989 0.918 0.998 0.994 0.917 0.983 0.981 0.978 0.943 0.977 0.897 Source-based metrics: IBM1-MORPHEME 0.345 0.223 0.740 − − 0.487 0.638 − − IBM1-POS4GRAM 0.339 0.137 − − − − − − LASIM 0.247 0.334 − − − − 0.310 0.260 − LP 0.474 0.279 − − − − 0.488 0.168 − UNI 0.846 0.809 0.930 − − − 0.805 0.666 − UNI+ 0.850 0.805 0.924 − − − 0.808 0.669 − YISI-2 0.796 0.612 0.642 0.566 0.820 0.324 0.662 0.442 0.346 0.339 0.708 0.940 0.622 YISI-2_SRL 0.804 0.630 − − − − − 0.947 0.675 Table 4: Pearson correlation of metrics for the to-English language pairs. For language pairs that contain outlier systems, we also show correlation after removing outlier systems. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. 4997 en–cs en–de en–fi en–gu en–kk en–lt en–ru en–zh All All −out All −out All All −out All All −out All n 11 22 20 12 11 11 11 9 12 12 11 12 BEER 0.990 0.983 0.869 0.989 0.978 0.829 0.971 0.826 0.982 0.977 0.947 0.803 BLEU 0.897 0.921 0.419 0.969 0.943 0.737 0.852 0.576 0.989 0.986 0.967 0.901 CDER 0.985 0.973 0.849 0.978 0.957 0.840 0.927 0.668 0.985 0.993 0.981 0.905 CHARACTER 0.994 0.986 0.886 0.968 0.939 0.910 0.936 0.895 0.954 0.985 0.982 0.862 CHRF 0.990 0.979 0.881 0.986 0.972 0.841 0.972 0.900 0.981 0.943 0.968 0.880 CHRF+ 0.991 0.981 0.883 0.986 0.970 0.848 0.974 0.907 0.982 0.950 0.973 0.879 EED 0.993 0.985 0.894 0.987 0.978 0.897 0.979 0.883 0.975 0.967 0.984 0.856 ESIM − 0.991 0.928 0.957 0.926 − 0.980 0.900 0.989 0.989 0.986 0.931 HLEPORA_BASELINE − − − 0.841 0.968 0.852 − − − HLEPORB_BASELINE − − − 0.841 0.968 0.852 0.980 − − NIST 0.896 0.321 0.246 0.971 0.936 0.786 0.930 0.611 0.993 0.988 0.973 0.884 PER 0.976 0.970 0.815 0.982 0.961 0.839 0.921 0.545 0.985 0.981 0.955 0.895 SACREBLE-BLEU 0.994 0.969 0.806 0.966 0.939 0.736 0.852 0.576 0.986 0.977 0.946 0.801 SACREBLE-CHRF 0.983 0.976 0.874 0.980 0.958 0.841 0.967 0.840 0.966 0.985 0.988 0.796 TER 0.980 0.969 0.841 0.981 0.960 0.865 0.940 0.547 0.994 0.995 0.985 0.856 WER 0.982 0.966 0.831 0.980 0.958 0.861 0.939 0.525 0.991 0.994 0.983 0.875 YISI-0 0.992 0.985 0.869 0.987 0.977 0.863 0.974 0.840 0.974 0.953 0.967 0.861 YISI-1 0.962 0.991 0.917 0.971 0.937 0.909 0.985 0.892 0.963 0.992 0.978 0.951 YISI-1_SRL − 0.991 0.917 − − − − − 0.948 Source-based metrics: IBM1-MORPHEME 0.871 0.870 0.198 0.084 0.254 − − 0.810 − − IBM1-POS4GRAM − 0.393 0.449 − − − − − − LASIM − 0.871 0.007 − − − − 0.823 0.336 − LP − 0.569 0.558 − − − − 0.661 0.178 − UNI 0.028 0.841 0.251 0.907 0.808 − − − 0.919 0.760 − UNI+ − − − − − − 0.918 0.746 − USFD − 0.224 0.301 − − − − 0.857 0.514 − USFD-TL − 0.091 0.212 − − − − 0.771 0.177 − YISI-2 0.324 0.924 0.014 0.696 0.478 0.314 0.339 0.685 0.055 0.766 0.134 0.097 YISI-2_SRL − 0.936 0.155 − − − − − 0.118 Table 5: Correlation of metrics for the from-English language pairs. For language pairs that contain outlier systems, we also show correlation after removing outlier systems. Values in bold indicate that the metric is not significantly outperformed by any other metric under the Williams Test.
2020
448
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4998–5007 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 4998 A Transformer-based Approach for Source Code Summarization Wasi Uddin Ahmad University of California, Los Angeles [email protected] Saikat Chakraborty Columbia University [email protected] Baishakhi Ray Columbia University [email protected] Kai-Wei Chang University of California, Los Angeles [email protected] Abstract Generating a readable summary that describes the functionality of a program is known as source code summarization. In this task, learning code representation by modeling the pairwise relationship between code tokens to capture their long-range dependencies is crucial. To learn code representation for summarization, we explore the Transformer model that uses a self-attention mechanism and has shown to be effective in capturing long-range dependencies. In this work, we show that despite the approach is simple, it outperforms the state-of-the-art techniques by a significant margin. We perform extensive analysis and ablation studies that reveal several important findings, e.g., the absolute encoding of source code tokens’ position hinders, while relative encoding significantly improves the summarization performance. We have made our code publicly available1 to facilitate future research. 1 Introduction Program comprehension is an indispensable ingredient of software development and maintenance (Xia et al., 2018). A natural language summary of source code facilitates program comprehension by reducing developers’ efforts significantly (Sridhara et al., 2010). Source code summarization refers to the task of creating readable summaries that describe the functionality of a program. With the advancement of deep learning and the availability of large-scale data through a vast number of open-source repositories, automatic source code summarizing has drawn attention from researchers. Most of the neural approaches generate source code summaries in a sequence-to-sequence fashion. One of the initial works Iyer et al. (2016) trained an embedding matrix to represent the individual code tokens and combine them with a Re1https://github.com/wasiahmad/NeuralCodeSum current Neural Network (RNN) via an attention mechanism to generate a natural language summary. Subsequent works (Liang and Zhu, 2018; Hu et al., 2018a,b) adopted the traditional RNNbased sequence-to-sequence network (Sutskever et al., 2014) with attention mechanism (Luong et al., 2015) on different abstractions of code. The RNN-based sequence models have two limitations in learning source code representations. First, they do not model the non-sequential structure of source code as they process the code tokens sequentially. Second, source code can be very long, and thus RNN-based models may fail to capture the long-range dependencies between code tokens. In contrast to the RNN-based models, Transformer (Vaswani et al., 2017), which leverages self-attention mechanism, can capture long-range dependencies. Transformers have been shown to perform well on many natural language generation tasks such as machine translation (Wang et al., 2019), text summarization (You et al., 2019), story generation (Fan et al., 2018), etc. To learn the order of tokens in a sequence or to model the relationship between tokens, Transformer requires to be injected with positional encodings (Vaswani et al., 2017; Shaw et al., 2018; Shiv and Quirk, 2019). In this work, we show that, by modeling the pairwise relationship between source code tokens using relative position representation (Shaw et al., 2018), we can achieve significant improvements over learning sequence information of code tokens using absolute position representation (Vaswani et al., 2017). We want to emphasize that our proposed approach is simple but effective as it outperforms the fancy and sophisticated state-of-the-art source code summarization techniques by a significant margin. We perform experiments on two wellstudied datasets collected from GitHub, and the results endorse the effectiveness of our approach 4999 over the state-of-the-art solutions. In addition, we provide a detailed ablation study to quantify the effect of several design choices in the Transformer to deliver a strong baseline for future research. 2 Proposed Approach We propose to use Transformer (Vaswani et al., 2017) to generate a natural language summary given a piece of source code. Both the code and summary is a sequence of tokens that are represented by a sequence of vectors, x = (x1, . . . , xn) where xi ∈Rdmodel. In this section, we briefly describe the Transformer architecture (§ 2.1) and how to model the order of source code tokens or their pairwise relationship (§ 2.2) in Transformer. 2.1 Architecture The Transformer consists of stacked multi-head attention and parameterized linear transformation layers for both the encoder and decoder. At each layer, the multi-head attention employs h attention heads and performs the self-attention mechanism. Self-Attention. We describe the self-attention mechanism based on Shaw et al. (2018). In each attention head, the sequence of input vectors, x = (x1, . . . , xn) where xi ∈Rdmodel are transformed into the sequence of output vectors, o = (o1, . . . , on) where oi ∈Rdk as: oi = n X j=1 αij(xjW V ), eij = xiW Q(xjW K)T √dk , where αij = exp eij Pn k=1 exp eik and W Q, W K ∈ Rdmodel×dk, W V ∈Rdmodel×dv are the parameters that are unique per layer and attention head. Copy Attention. We incorporate the copying mechanism (See et al., 2017) in the Transformer to allow both generating words from vocabulary and copying from the input source code. We use an additional attention layer to learn the copy distribution on top of the decoder stack (Nishida et al., 2019). The copy attention enables the Transformer to copy rare tokens (e.g., function names, variable names) from source code and thus improves the summarization performance significantly (§ 3.2). 2.2 Position Representations Now, we discuss how to learn the order of source code tokens or model their pairwise relationship. Dataset Java Python Train 69,708 55,538 Validation 8,714 18,505 Test 8,714 18,502 Unique tokens in code 66,650 307,596 Unique tokens in summary 46,895 56,189 Avg. tokens in code 120.16 47.98 Avg. tokens in summary 17.73 9.48 Table 1: Statistics of the experiment datasets. We thank the authors of Wei et al. (2019) for kindly sharing the Python dataset splits. The Java dataset splits are publicly available. Encoding absolute position. To allow the Transformer to utilize the order information of source code tokens, we train an embedding matrix W Pe that learns to encode tokens’ absolute positions into vectors of dimension dmodel. However, we show that capturing the order of code tokens is not helpful to learn source code representations and leads to poor summarization performance (§ 3.2). It is important to note that we train another embedding matrix W Pd that learns to encode the absolute positions of summary tokens.2 Encoding pairwise relationship. The semantic representation of a code does not rely on the absolute positions of its tokens. Instead, their mutual interactions influence the meaning of the source code. For instance, semantic meaning of the expressions a+b and b+a are the same. To encode the pairwise relationships between input elements, Shaw et al. (2018) extended the self-attention mechanism as follows. oi = n X j=1 αij(xjW V + aV ij), eij = xiW Q(xjW K + aK ij )T √dk , where, aV ij and aK ij are relative positional representations for the two position i and j. Shaw et al. (2018) suggested clipping the maximum relative position to a maximum absolute value of k as they hypothesize that precise relative position information is not useful beyond a certain distance. aK ij = wK clip(j−i,k), aV ij = wV clip(j−i,k), clip(x, k) = max(−k, min(k, x)). Hence, we learn 2k + 1 relative position representations: (wK −k, . . . , wK k ), and (wV −k, . . . , wV k ). 2In this work, we do not study alternative ways of learning position representation for the summary tokens. 5000 Methods Java Python BLEU METEOR ROUGE-L BLEU METEOR ROUGE-L CODE-NN (Iyer et al., 2016) 27.60 12.61 41.10 17.36 09.29 37.81 Tree2Seq (Eriguchi et al., 2016) 37.88 22.55 51.50 20.07 08.96 35.64 RL+Hybrid2Seq (Wan et al., 2018) 38.22 22.75 51.91 19.28 09.75 39.34 DeepCom (Hu et al., 2018a) 39.75 23.06 52.67 20.78 09.98 37.35 API+CODE (Hu et al., 2018b) 41.31 23.73 52.25 15.36 08.57 33.65 Dual Model (Wei et al., 2019) 42.39 25.77 53.61 21.80 11.14 39.45 Our models and ablation study Base Model 43.41 25.91 52.71 31.08 18.57 44.31 Full Model 44.58 26.43 54.76 32.52 19.77 46.73 Full Model w/o Relative Position 44.26 26.23 53.58 31.38 18.69 44.68 Full Model w/o Copy Attention 44.14 26.34 53.95 31.64 19.17 45.42 Table 2: Comparison of our proposed approach with the baseline methods. The results of the baseline methods are directly reported from (Wei et al., 2019). The “Base Model” refers to the vanilla Transformer (uses absolute position representations) and the “Full Model” uses relative position representations and includes copy attention. In this work, we study an alternative of the relative position representations that ignores the directional information (Ahmad et al., 2019). In other words, the information whether the j’th token is on the left or right of the i’th token is ignored. aK ij = wK clip(|j−i|,k), aV ij = wV clip(|j−i|,k), clip(x, k) = min(|x|, k). 3 Experiment 3.1 Setup Datasets and Pre-processing. We conduct our experiments on a Java dataset (Hu et al., 2018b) and a Python dataset (Wan et al., 2018). The statistics of the two datasets are shown in Table 1. In addition to the pre-processing steps followed by Wei et al. (2019), we split source code tokens of the form CamelCase and snake case to respective sub-tokens3. We show that such a split of code tokens improves the summarization performance. Metrics. We evaluate the source code summarization performance using three metrics, BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE-L (Lin, 2004). Baselines. We compare our Transformer-based source code summarization approach with five baseline methods reported in Wei et al. (2019) and their proposed Dual model. We refer the readers to (Wei et al., 2019) for the details about the hyperparameter of all the baseline methods. Hyper-parameters. We follow Wei et al. (2019) to set the maximum lengths and vocabulary sizes 3The CamelCase and snake case tokenization reduces the vocabulary significantly. For example, the number of unique tokens in Java source code reduced from 292,626 to 66,650. for code and summaries in both the datasets. We train the Transformer models using Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 10−4. We set the mini-batch size and dropout rate to 32 and 0.2, respectively. We train the Transformer models for a maximum of 200 epochs and perform early stop if the validation performance does not improve for 20 consecutive iterations. We use a beam search during inference and set the beam size to 4. Detailed hyperparameter settings can be found in Appendix A. 3.2 Results and Analysis Overall results. The overall results of our proposed model and baselines are presented in Table 2. The result shows that the Base model outperforms the baselines (except for ROUGE-L in java), while the Full model improves the performance further.4 We ran the Base model on the original datasets (without splitting the CamelCase and snake case code tokens) and observed that the performance drops by 0.60, 0.72 BLEU and 1.66, 2.09 ROUGE-L points for the Java and Python datasets respectively. We provide a few qualitative examples in Appendix C showing the usefulness of the Full model over the Base model. Unlike the baseline approaches, our proposed model employs the copy attention mechanism. As shown in Table 2, the copy attention improves the performance 0.44 and 0.88 BLEU points for the Java and Python datasets respectively. Impact of position representation. We perform an ablation study to investigate the benefits 4We observe a more significant gain on the Python dataset and a detailed discussion on it is provided in Appendix B. 5001 Source Target BLEU METEOR ROUGE-L   43.41 25.91 52.71   42.34 24.74 50.96   43.59 26.00 52.88   41.85 24.32 50.87 Table 3: Ablation study on absolute positional representations using the “Base Model” on the Java dataset. k Directional BLEU METEOR ROUGE-L 8  44.22 26.35 53.86  42.61 24.67 51.10 16  44.14 26.34 53.95  44.06 26.31 53.51 32  44.55 26.66 54.30  43.95 26.28 53.24 2i  44.37 26.58 53.96  43.58 25.95 52.73 Table 4: Ablation study on relative positional representations (in encoding) for Transformer. While 8, 16, and 32 represents a fixed relative distance for all the layers, 2i (where i = 1, . . . , L; L = 6) represents a layer-wise relative distance for Transformer. of encoding the absolute position of code tokens or modeling their pairwise relationship for the source code summarization task, and the results are presented in Table 3 and 4. Table 3 demonstrates that learning the absolute position of code tokens are not effective as we can see it slightly hurts the performance compared to when it is excluded. This empirical finding corroborates the design choice of Iyer et al. (2016), where they did not use the sequence information of the source code tokens. On the other hand, we observe that learning the pairwise relationship between source code tokens via relative position representations helps as Table 4 demonstrates higher performance. We vary the clipping distance, k, and consider ignoring the directional information while modeling the pairwise relationship. The empirical results suggest that the directional information is indeed important while 16, 32, and 2i relative distances result in similar performance (in both experimental datasets). Varying model size and number of layers. We perform ablation study by varying dmodel and l and the results are presented in Table 5.5 In our experiments, we observe that a deeper model (more layers) performs better than a wider model (larger dmodel). Intuitively, the source code summariza5Considering the model complexity, we do not increase the model size or number of layers further. #Param. BLEU METEOR ROUGE-L Varying the model size (dmodel) 256 15.8 38.21 21.54 48.63 384 28.4 41.71 24.51 51.42 512 44.1 43.41 25.91 52.71 768 85.1 45.29 27.56 54.39 Varying the number of layers (l) 3 22.1 41.26 23.54 51.37 6 44.1 43.41 25.91 52.71 9 66.2 45.03 27.21 54.02 12 88.3 45.56 27.64 54.89 Table 5: Ablation study on the hidden size and number of layers for the “Base Model” on the Java dataset. We use dmodel = H, dff = 4H, h = 8, and dk = dv = 64 in all settings. We set l = 6 and dmodel = 512 while varying dmodel and l respectively. #Param. represents the number of trainable parameters in millions (only includes Transformer parameters). tion task depends on more semantic information than syntactic, and thus deeper model helps. Use of Abstract Syntax Tree (AST). We perform additional experiments to employ the abstract syntax tree (AST) structure of source code in the Transformer. We follow Hu et al. (2018a) and use the Structure-based Traversal (SBT) technique to transform the AST structure into a linear sequence. We keep our proposed Transformer architecture intact, except in the copy attention mechanism, we use a mask to block copying the nonterminal tokens from the input sequence. It is important to note that, with and without AST, the average length of the input code sequences is 172 and 120, respectively. Since the complexity of the Transformer is O(n2 × d) where n is the input sequence length, hence, the use of AST comes with an additional cost. Our experimental findings suggest that the incorporation of AST information in the Transformer does not result in an improvement in source code summarization. We hypothesize that the exploitation of the code structure information in summarization has limited advantage, and it diminishes as the Transformer learns it implicitly with relative position representation. Qualitative analysis. We provide a couple of examples in Table 6 to demonstrate the usefulness of our proposed approach qualitatively (more examples are provided in Table 9 and 10 in the Appendix). The qualitative analysis reveals that, in comparison to the Vanilla Transformer model, the copy enabled model generates shorter summaries 5002 public static String selectText(XPathExpression expr, Node context) { try { return (String)expr.evaluate(context, XPathConstants.STRING ); } catch (XPathExpressionException e) { throw new XmlException(e); } } Base Model: evaluates the xpath expression to a xpath expression . Full Model w/o Relative Position: evaluates the xpath expression . Full Model w/o Copy Attention Attention: evaluates the xpath expression as a single element . Full Model: evaluates the xpath expression as a text string . Human Written: evaluates the xpath expression as text . def get_hosting_service(name): try: return hosting_service_registry.get(u'hosting service id', name) except ItemLookupError: return None Base Model: returns the color limits from the current service name . Full Model w/o Relative Position: return the hosting service . Full Model w/o Copy Attention: return the name of the service . Full Model : return the hosting service name . Human Written: return the hosting service with the given name . Table 6: Qualitative example of different models’ performance on Java and Python datasets. with more accurate keywords. Besides, we observe that in a copy enabled model, frequent tokens in the code snippet get a higher copy probability when relative position representations are used, in comparison to absolute position representations. We suspect this is due to the flexibility of learning the relation between code tokens without relying on their absolute position. 4 Related Work Most of the neural source code summarization approaches frame the problem as a sequence generation task and use recurrent encoder-decoder networks with attention mechanisms as the fundamental building blocks (Iyer et al., 2016; Liang and Zhu, 2018; Hu et al., 2018a,b). Different from these works, Allamanis et al. (2016) proposed a convolutional attention model to summarize the source codes into short, name-like summaries. Recent works in code summarization utilize structural information of a program in the form of Abstract Syntax Tree (AST) that can be encoded using tree structure encoders such as Tree-LSTM (Shido et al., 2019), Tree-Transformer (Harer et al., 2019), and Graph Neural Network (LeClair et al., 2020). In contrast, Hu et al. (2018a) proposed a structure based traversal (SBT) method to flatten the AST into a sequence and showed improvement over the AST based methods. Later, LeClair et al. (2019) used the SBT method and decoupled the code structure from the code tokens to learn better structure representation. Among other noteworthy works, API usage information (Hu et al., 2018b), reinforcement learning (Wan et al., 2018), dual learning (Wei et al., 2019), retrieval-based techniques (Zhang et al., 2020) are leveraged to further enhance the code summarization models. We can enhance a Transformer with previously proposed techniques; however, in this work, we limit ourselves to study different design choices for a Transformer without breaking its’ core architectural design philosophy. 5 Conclusion This paper empirically investigates the advantage of using the Transformer model for the source code summarization task. We demonstrate that the Transformer with relative position representations and copy attention outperforms state-of-the-art approaches by a large margin. In our future work, we want to study the effective incorporation of code structure into the Transformer and apply the techniques in other software engineering sequence generation tasks (e.g., commit message generation for source code changes). Acknowledgments This work was supported in part by National Science Foundation Grant OAC 1920462, CCF 1845893, CCF 1822965, CNS 1842456. 5003 References Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2440–2452, Minneapolis, Minnesota. Association for Computational Linguistics. Miltiadis Allamanis, Hao Peng, and Charles A. Sutton. 2016. A convolutional attention network for extreme summarization of source code. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 2091– 2100. JMLR.org. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 823–833, Berlin, Germany. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics. Jacob Harer, Chris Reale, and Peter Chin. 2019. Treetransformer: A transformer-based method for correction of tree-structured data. arXiv preprint arXiv:1908.00449. Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018a. Deep code comment generation. In Proceedings of the 26th Conference on Program Comprehension, page 200–210, New York, NY, USA. Association for Computing Machinery. Xing Hu, Ge Li, Xin Xia, David Lo, Shuai Lu, and Zhi Jin. 2018b. Summarizing source code with transferred api knowledge. In Proceedings of the TwentySeventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 2269–2275. International Joint Conferences on Artificial Intelligence Organization. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2073–2083, Berlin, Germany. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67–72, Vancouver, Canada. Association for Computational Linguistics. Alexander LeClair, Sakib Haque, Linfgei Wu, and Collin McMillan. 2020. Improved code summarization via a graph neural network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Alexander LeClair, Siyuan Jiang, and Collin McMillan. 2019. A neural model for generating natural language summaries of program subroutines. In Proceedings of the 41st International Conference on Software Engineering, page 795–806. IEEE Press. Yuding Liang and Kenny Qili Zhu. 2018. Automatic generation of text descriptive comments for code blocks. In Thirty-Second AAAI Conference on Artificial Intelligence. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, and Junji Tomita. 2019. Multi-style generative reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2273–2284, Florence, Italy. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. 5004 Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana. Association for Computational Linguistics. Yusuke Shido, Yasuaki Kobayashi, Akihiro Yamamoto, Atsushi Miyamoto, and Tadayuki Matsumura. 2019. Automatic source code summarization with extended tree-lstm. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE. Vighnesh Shiv and Chris Quirk. 2019. Novel positional encodings to enable tree-based transformers. In Advances in Neural Information Processing Systems 32, pages 12081–12091. Curran Associates, Inc. Giriprasad Sridhara, Emily Hill, Divya Muppaneni, Lori Pollock, and K. Vijay-Shanker. 2010. Towards automatically generating summary comments for java methods. In Proceedings of the IEEE/ACM International Conference on Automated Software Engineering, page 43–52, New York, NY, USA. Association for Computing Machinery. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Yao Wan, Zhou Zhao, Min Yang, Guandong Xu, Haochao Ying, Jian Wu, and Philip S Yu. 2018. Improving automatic source code summarization via deep reinforcement learning. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 397–407. ACM. Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810–1822, Florence, Italy. Association for Computational Linguistics. Bolin Wei, Ge Li, Xin Xia, Zhiyi Fu, and Zhi Jin. 2019. Code generation as a dual task of code summarization. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 6563–6573. Curran Associates, Inc. Xin Xia, Lingfeng Bao, David Lo, Zhenchang Xing, Ahmed E. Hassan, and Shanping Li. 2018. Measuring program comprehension: A large-scale field study with professionals. In Proceedings of the 40th International Conference on Software Engineering, ICSE ’18, page 584, New York, NY, USA. Association for Computing Machinery. Yongjian You, Weijia Jia, Tianyi Liu, and Wenmian Yang. 2019. Improving abstractive document summarization with salient information modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2132– 2141, Florence, Italy. Association for Computational Linguistics. Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, and Xudong Liu. 2020. Retrieval-based neural source code summarization. In Proceedings of the 42nd International Conference on Software Engineering. IEEE. 5005 A Hyper-Parameters Table 7 summarizes the hyper-parameters that we used in our experiments. Hyper-parameter Value Embedding k 16 Model l 6 h 8 dmodel 512 dk, dv 64 dff 2048 Training dropout 0.2 optimizer Adam learning rate 0.0001 batch size 32 Testing beam size 4 Table 7: Hyper-parameters in our experiments. l and h indicates the number of layers and heads in Transformer respectively. k refers to the clipping distance in relative position representations in Transformer. B Recurrent Encoder-Decoder vs. Transformer on Python Dataset Models BLEU METEOR ROUGE-L Seq2seq 30.57 17.86 43.64 Seq2seq∗ 29.08 17.12 42.97 Transformer 31.08 18.57 44.31 Transformer∗ 31.38 18.69 44.68 Table 8: Comparison between recurrent sequence-tosequence (Seq2seq) model and Transformer on the Python dataset. ∗indicates models are equipped with the copy attention mechanism. While conducting our study using the Transformer on the Python dataset, we observed a significant gain over the state-of-the-art methods as reported in Wei et al. (2019). However, our initial experiments on this dataset using recurrent sequence-to-sequence models also demonstrated higher performance compared to the results report in Wei et al. (2019). We suspect that such lower performance is due to not tuning the hyperparameters correctly. So for the sake of fairness and to investigate the true advantages of Transformer, we present a comparison on recurrent Seq2seq model and Transformer in Table 8 using our implementation.6 6Our implementation is based on Open-NMT (Klein et al., 2017) and PyTorch 1.3. We can see from Table 8, the performance of the recurrent Seq2seq model is much better than the results reported in prior works. However, to our surprise, the copy attention mechanism does not result in improvement for the recurrent Seq2seq model. When we looked into the training perplexity and the validation performance, we also observed lower performance in comparison to the base recurrent Seq2seq model. In comparison, our proposed Transformer-based approach outperforms the recurrent Seq2seq models by a large margin showing its effectiveness for source code summarization. 5006 C Qualitative Examples public static terminal find(String with_name) { if(with_name == null) return null; else return (terminal)all.get(with_name); } Base Model: lookup a non terminal by name string Full Model w/o Relative Position: lookup a terminal terminal by name string Full Model w/o Copy Attention: lookup a non terminal by name string Full Model: lookup a terminal by name Human Written: lookup a terminal by name string . public static String selectText(XPathExpression expr, Node context) { try { return (String)expr.evaluate(context, XPathConstants.STRING ); } catch (XPathExpressionException e) { throw new XmlException(e); } } Base Model: evaluates the xpath expression to a xpath expression . Full Model w/o Relative Position: evaluates the xpath expression . Full Model w/o Copy Attention Attention: evaluates the xpath expression as a single element . Full Model: evaluates the xpath expression as a text string . Human Written: evaluates the xpath expression as text . public CTaggingPanel( final JFrame parent, final ZyGraph graph, final ITagManager manager) { super(new BorderLayout()); mtagsTree = new CTagsTree(parent, graph, manager); final JScrollPane pane = new JScrollPane(mtagsTree); pane.setVerticalScrollBarPolicy( ScrollPaneConstants.VERTICAL_SCROLLBAR_AS_NEEDED); pane.setHorizontalScrollBarPolicy( ScrollPaneConstants.HORIZONTAL_SCROLLBAR_AS_NEEDED); add(pane); setBorder(new TitledBorder(new LineBorder(Color.LIGHT_GRAY, NUM, BOOL), STRING)); setDoubleBuffered(BOOL); } Base Model: creates a new dnetscapesslservername dialog . Full Model w/o Relative Position: creates a new settings dialog . Full Model w/o Copy Attention: creates a new toolbar panel . Full Model: creates a new api panel object . Human Written: creates a new panel object . public DSignCsr(JFrameparent, PKCS10CertificationRequest pkcs10Csr, File csrFile, PrivateKey signPrivateKey, KeyPairType signKeyPairType, X509Certificate verificationCertificate, Provider provider) throws CryptoException{ super(parent, Dialog.ModalityType.DOCUMENT_MODAL); this.pkcs10Csr = pkcs10Csr; this.csrFile = csrFile; this.signPrivateKey = signPrivateKey; this.signKeyPairType = signKeyPairType; this.verificationCertificate = verificationCertificate; this.provider = provider; setTitle(res.getString(STRING)); initComponents(); } Base Model: creates a new dsigncsr dialog for a spkac formatted csr . Full Model w/o Relative Position: creates a new signer dialog for a pkcs # 10 formatted . Full Model w/o Copy Attention: creates a new dsigncsr dialog for a spkac formatted csr . Full Model: creates a new dsigncsr dialog for a pkcs # 10 formatted csr . Human Written: creates a new dsigncsr dialog for a pkcs # 10 formatted csr . Table 9: Qualitative example of different models’ performance in Java dataset. 5007 def get_hosting_service(name): try: return hosting_service_registry.get(u'hosting service id', name) except ItemLookupError: return None Base Model: returns the color limits from the current service name . Full Model w/o Relative Position: return the hosting service . Full Model w/o Copy Attention: return the name of the service . Full Model : return the hosting service name . Human Written: return the hosting service with the given name . def save_pickle(obj, fname): with get_file_obj(fname, 'wb') as fout: cPickle.dump(obj, fout, protocol=-1) Base Model: pickle object obj to file fname . Full Model w/o Relative Position: save object to file . Full Model w/o Copy Attention: raw data: object obj to file fname . Full Model : save object to file fname . Human Written: save the object to file via pickling . def get_temp_dir: temp = get_environ_variable('TMP') if temp is None: temp = get_environ_variable('TEMP') if temp is None or '' in temp and os.name == 'nt': temp = 'C \\temp' if temp None or '' in temp and os.name == 'posix': temp = '/tmp' return temp Base Model: returns the name of the sample environment variable . Full Model w/o Relative Position: returns the next temporary directory of a file . Full Model w/o Copy Attention: get the directory related to store the stubbed . Full Model : return a temporary filename . Human Written: returns a temporary directory . def get_exploration_memcache_key(exploration_id, version=None): if version: return 'exploration-version %s %s' % exploration_id, version else: return 'exploration %s' % exploration_id Base Model: returns the key for an instance for the project . Full Model w/o Relative Position: returns a memcache key for the given version . Full Model w/o Copy Attention: returns a memcache for the exploration id . Full Model : returns a memcache key for the specified exploration . Human Written: returns a memcache key for an exploration . def get_svc_avail_path(): return AVAIL_SVR_DIRS Base Model: get the actual path . Full Model w/o Relative Position: returns a list of services . Full Model w/o Copy Attention: return a list of services that are available . Full Model : returns a list of available services . Human Written: return list of paths that may contain available services . def volume_attach(provider, names, **kwargs): client.get_client_info() client.extra_action(provider=provider, names=names, action='volume attach', **kwargs) return info Base Model: attempt to attach volume . Full Model w/o Relative Position: attach volume cli example: . Full Model w/o Copy Attention: attach volume cli example: . Full Model : attach volume information cli example: . Human Written: attach volume to a server cli example: . Table 10: Qualitative example of different models’ performance in Python dataset.
2020
449
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 465–476 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 465 Dice Loss for Data-imbalanced NLP Tasks Xiaoya Li♣, Xiaofei Sun♣, Yuxian Meng♣, Junjun Liang♣, Fei Wu♠and Jiwei Li♠♣ ♠Department of Computer Science and Technology, Zhejiang University ♣Shannon.AI {xiaoya li, xiaofei sun, yuxian meng, jiwei li}@shannonai.com, [email protected] Abstract Many NLP tasks such as tagging and machine reading comprehension (MRC) are faced with the severe data imbalance issue: negative examples significantly outnumber positive ones, and the huge number of easy-negative examples overwhelms training. The most commonly used cross entropy criteria is actually accuracy-oriented, which creates a discrepancy between training and test. At training time, each training instance contributes equally to the objective function, while at test time F1 score concerns more about positive examples. In this paper, we propose to use dice loss in replacement of the standard cross-entropy objective for data-imbalanced NLP tasks. Dice loss is based on the Sørensen–Dice coefficient (Sorensen, 1948) or Tversky index (Tversky, 1977), which attaches similar importance to false positives and false negatives, and is more immune to the data-imbalance issue. To further alleviate the dominating influence from easy-negative examples in training, we propose to associate training examples with dynamically adjusted weights to deemphasize easy-negative examples. Experimental results show that this strategy narrows down the gap between the F1 score in evaluation and the dice loss in training. With the proposed training objective, we observe significant performance boosts over a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5, CTB6 and UD1.4 for the part of speech tagging task, and competitive or even better results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for the named entity recognition task along with the machine reading comprehension and paraphrase identification tasks. The code can be found at https://github.com/ShannonAI/ dice_loss_for_NLP. Task # neg # pos ratio CoNLL03 NER 170K 34K 4.98 OntoNotes5.0 NER 1.96M 239K 8.18 SQuAD 1.1 (Rajpurkar et al., 2016) 10.3M 175K 55.9 SQuAD 2.0 (Rajpurkar et al., 2018) 15.4M 188K 82.0 QUOREF (Dasigi et al., 2019) 6.52M 38.6K 169 Table 1: Number of positive and negative examples and their ratios for different data-imbalanced NLP tasks. 1 Introduction Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table 1 gives concrete examples: for the Named Entity Recognition (NER) task (Sang and De Meulder, 2003; Nadeau and Sekine, 2007), most tokens are backgrounds with tagging class O. Specifically, the number of tokens with tagging class O is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Dataimbalanced issue is more severe for MRC tasks (Rajpurkar et al., 2016; Nguyen et al., 2016; Rajpurkar et al., 2018; Koˇcisk`y et al., 2018; Dasigi et al., 2019) with the value of negative-positive ratio being 50-200, which is due to the reason that the task of MRC is usually formalized as predicting the starting and ending indexes conditioned on the query and the context, and given a chunk of text of an arbitrary length, only two tokens are positive (or of interest) with all the rest being background. Data imbalance results in the following two issues: (1) the training-test discrepancy: Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label. This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function, whereas at test time, F1 gives equal weight to positive and negative examples; (2) the overwhelming effect of easy-negative examples. As pointed out by Meng et al. (2019), a significantly large number of negative examples also 466 means that the number of easy-negative example is large. The huge number of easy examples tends to overwhelm the training, making the model not sufficiently learn to distinguish between positive examples and hard-negative examples. The crossentropy objective (CE for short) or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks (Lample et al., 2016; Wu et al., 2019; Devlin et al., 2018; Yu et al., 2018a; McCann et al., 2018; Ma and Hovy, 2016; Chen et al., 2017), handles neither of the issues. To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient (Sorensen, 1948) or Tversky index (Tversky, 1977). The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall. It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets. Tversky index extends dice loss by using a weight that trades precision and recall, which can be thought as the approximation of the Fβ score, and thus comes with more flexibility. Therefore, we use dice loss or Tversky index to replace CE loss to address the first issue. Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a soft version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn’t a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easynegative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. Inspired by the idea of focal loss (Lin et al., 2017) in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to (1 −p), and this weight dynamically changes as training proceeds. This strategy helps deemphasize confident examples during training as their probability p approaches 1, making the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples. Combing both strategies, we observe significant performance boosts on a wide range of data imbalanced NLP tasks. The rest of this paper is organized as follows: related work is presented in Section 2. We describe different proposed losses in Section 3. Experimental results are presented in Section 4. We perform ablation studies in Section 5, followed by a brief conclusion in Section 6. 2 Related Work 2.1 Data Resampling The idea of weighting training examples has a long history. Importance sampling (Kahn and Marshall, 1953) assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost (Kanduri et al., 2018) select harder examples to train subsequent classifiers. Similarly, hard example mining (Malisiewicz et al., 2011) downsamples the majority class and exploits the most difficult examples. Oversampling (Chen et al., 2010; Chawla et al., 2002) is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss (Lin et al., 2017) used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning (Kumar et al., 2010), example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, selfpaced learning algorithm optimizes model parameters and example weights jointly. Other works (Chang et al., 2017; Katharopoulos and Fleuret, 2018) adjusted the weights of different training examples based on training loss. Besides, recent work (Jiang et al., 2017; Fan et al., 2018) proposed to learn a separate network to predict sample weights. 2.2 Data Imbalance Issue in Computer Vision The background-object label imbalance issue is severe and thus well studied in the field of object detection (Li et al., 2015; Girshick, 2015; He et al., 2015; Girshick et al., 2013; Ren et al., 2015). The idea of hard negative mining (HNM) (Girshick et al., 2013) has gained much attention recently. Pang et al. (2019) proposed a novel method called IoU-balanced sampling and Chen et al. (2019) designed a ranking model to replace the conventional classification task with an average-precision loss 467 to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP. Sudre et al. (2017) addressed the severe class imbalance issue for the image segmentation task. They proposed to use the class re-balancing property of the Generalized Dice Loss as the training objective for unbalanced tasks. Shen et al. (2018) investigated the influence of Dice-based loss for multi-class organ segmentation using a dataset of abdominal CT volumes. Kodym et al. (2018) proposed to use the batch soft Dice loss function to train the CNN network for the task of segmentation of organs at risk (OAR) of medical images. Shamir et al. (2019) extended the definition of the classical Dice coefficient to facilitate the direct comparison of a ground truth binary image with a probabilistic map. In this paper, we introduce dice loss into NLP tasks as the training objective and propose a dynamic weight adjusting strategy to address the dominating influence of easy-negative examples. 3 Losses 3.1 Notation For illustration purposes, we use the binary classification task to demonstrate how different losses work. The mechanism can be easily extended to multi-class classification. Let X denote a set of training instances and each instance xi ∈X is associated with a golden binary label yi = [yi0, yi1] denoting the ground-truth class xi belongs to, and pi = [pi0, pi1] is the predicted probabilities of the two classes respectively, where yi0, yi1 ∈ {0, 1}, pi0, pi1 ∈[0, 1] and pi1 + pi0 = 1. 3.2 Cross Entropy Loss The vanilla cross entropy (CE) loss is given by: CE = −1 N X i X j∈{0,1} yij log pij (1) As can be seen from Eq.1, each xi contributes equally to the final objective. Two strategies are normally used to address the the case where we wish that not all xi are treated equally: associating different classes with different weighting factor α or resampling the datasets. For the former, Eq.1 is adjusted as follows: Weighted CE = −1 N X i αi X j∈{0,1} yij log pij (2) where αi ∈[0, 1] may be set by the inverse class frequency or treated as a hyperparameter to set by cross validation. In this work, we use lg( n−nt nt +K) to calculate the coefficient α, where nt is the number of samples with class t and n is the total number of samples in the training set. K is a hyperparameter to tune. Intuitively, this equation assigns less weight to the majority class and more weight to the minority class. The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g. extracting equal training samples from each class. Both strategies are equivalent to changing the data distribution during training and thus are of the same nature. Empirically, these two methods are not widely used due to the trickiness of selecting α especially for multi-class classification tasks and that inappropriate selection can easily bias towards rare classes (Valverde et al., 2017). 3.3 Dice Coefficient and Tversky Index Sørensen–Dice coefficient (Sorensen, 1948; Dice, 1945), dice coefficient (DSC) for short, is an F1oriented statistic used to gauge the similarity of two sets. Given two sets A and B, the vanilla dice coefficient between them is given as follows: DSC(A, B) = 2|A ∩B| |A| + |B| (3) In our case, A is the set that contains all positive examples predicted by a specific model, and B is the set of all golden positive examples in the dataset. When applied to boolean data with the definition of true positive (TP), false positive (FP), and false negative (FN), it can be then written as follows: DSC = 2TP 2TP + FN + FP = 2 TP TP+FN TP TP+FP TP TP+FN + TP TP+FP = 2Pre × Rec Pre+Rec = F1 (4) For an individual example xi, its corresponding dice coefficient is given as follows: DSC(xi) = 2pi1yi1 pi1 + yi1 (5) As can be seen, a negative example (yi1 = 0) does not contribute to the objective. For smoothing purposes, it is common to add a γ factor to both the nominator and the denominator, making the form to be as follows (we simply set γ = 1 in the rest of 468 Loss Formula (one sample xi) CE −P j∈{0,1} yij log pij WCE −αi P j∈{0,1} yij log pij DL 1 −2pi1yi1+γ p2 i1+y2 i1+γ TL 1 − pi1yi1+γ pi1yi1+α pi1yi0+β pi0yi1+γ DSC 1 −2(1−pi1)pi1·yi1+γ (1−pi1)pi1+yi1+γ FL −αi P j∈{0,1}(1 −pij)γ log pij Table 2: Different losses and their formulas. We add +1 to DL, TL and DSC so that they are positive. this paper): DSC(xi) = 2pi1yi1 + γ pi1 + yi1 + γ (6) As can be seen, negative examples whose DSC is γ pi1+γ , also contribute to the training. Additionally, Milletari et al. (2016) proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): DL = 1 N X i  1 −2pi1yi1 + γ p2 i1 + y2 i1 + γ  (7) Another version of DL is to directly compute setlevel dice coefficient instead of the sum of individual dice coefficient, which is easier for optimization: DL = 1 − 2 P i pi1yi1 + γ P i p2 i1 + P i y2 i1 + γ (8) Tversky index (TI), which can be thought as the approximation of the Fβ score, extends dice coefficient to a more general case. Given two sets A and B, tversky index is computed as follows: TI = |A ∩B| |A ∩B| + α|A\B| + β|B\A| (9) Tversky index offers the flexibility in controlling the tradeoff between false-negatives and falsepositives. It degenerates to DSC if α = β = 0.5. The Tversky loss (TL) is thus given as follows: TL = 1 N X i  1 − pi1yi1 + γ pi1yi1 + α pi1yi0 + β pi0yi1 + γ  (10) 3.4 Self-adjusting Dice Loss Consider a simple case where the dataset consists of only one example xi, which is classified as positive as long as pi1 is larger than 0.5. The computation of F1 score is actually as follows: F1(xi) = 2 I(pi1 > 0.5)yi1 I(pi1 > 0.5) + yi1 (11) 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 Derivatives FL( =1) DL( =1) TL( =0.5) DSC Figure 1: An illustration of derivatives of the four losses. The derivative of DSC approaches zero right after p exceeds 0.5, and for the other losses, the derivatives reach 0 only if the probability is exactly 1, which means they will push p to 1 as much as possible. Comparing Eq.5 with Eq.11, we can see that Eq.5 is actually a soft form of F1, using a continuous p rather than the binary I(pi1 > 0.5). This gap isn’t a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hardnegative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability p with a decaying factor (1 −p), changing Eq.11 to the following adaptive variant of DSC: DSC(xi) = 2(1 −pi1)pi1 · yi1 + γ (1 −pi1)pi1 + yi1 + γ (12) One can think (1−pi1) as a weight associated with each example, which changes as training proceeds. The intuition of changing pi1 to (1 −pi1)pi1 is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, (1−pi1)pi1 makes the model attach significantly less focus to them. A close look at Eq.12 reveals that it actually mimics the idea of focal loss (FL for short) (Lin et al., 2017) for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a (1 −p)γ factor, leading the final loss to be −(1 −p)γ log p. 469 CTB5 CTB6 UD1.4 Model Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Joint-POS(Sig)(Shao et al., 2017) 93.68 94.47 94.07 90.81 89.28 89.54 89.41 Joint-POS(Ens)(Shao et al., 2017) 93.95 94.81 94.38 89.67 89.86 89.75 Lattice-LSTM(Zhang and Yang, 2018) 94.77 95.51 95.14 92.00 90.86 91.43 90.47 89.70 90.09 BERT-Tagger(Devlin et al., 2018) 95.86 96.26 96.06 94.91 94.63 94.77 95.42 94.17 94.79 BERT+FL 96.11 97.42 96.76 95.80 95.08 95.44 96.33 95.85 96.81 (+0.70) (+0.67) (+2.02) BERT+DL 96.77 98.87 97.81 94.08 96.12 95.09 96.10 97.79 96.94 (+1.75) (+0.32) (+2.15) BERT+DSC 97.10 98.75 97.92 96.29 96.85 96.57 96.24 97.73 96.98 (+1.86) (+1.80) (+2.19) Table 3: Experimental results for Chinese POS datasets including CTB5, CTB6 and UD1.4. English WSJ Model Prec. Rec. F1 Meta BiLSTM(Bohnet et al., 2018) 98.23 BERT-Tagger (Devlin et al., 2018) 99.21 98.36 98.86 BERT-Tagger+FL 98.36 98.97 98.88 (+0.02) BERT-Tagger+DL 99.34 98.22 98.91 (+0.05) BERT-Tagger+DSC 99.41 98.93 99.38 (+0.52) English Tweets Model Prec. Rec. F1 FastText+CNN+CRF(Godin, 2019) 91.78 BERT-Tagger (Devlin et al., 2018) 92.33 91.98 92.34 BERT-Tagger+FL 91.24 93.22 92.47 (+0.13) BERT-Tagger+DL 91.44 92.88 92.52 (+0.18) BERT-Tagger+DSC 92.87 93.54 92.58 (+0.24) Table 4: Experimental results for English POS datasets. In Table 2, we summarize all the aforementioned losses. Figure 1 gives an explanation from the perspective in derivative: The derivative of DSC approaches zero right after p exceeds 0.5, which suggests the model attends less to examples once they are correctly classified. But for the other losses, the derivatives reach 0 only if the probability is exactly 1, which means they will push p to 1 as much as possible. 4 Experiments We evaluated the proposed method on four NLP tasks, part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification. Hyperparameters are tuned on the corresponding development set of each dataset. More experiment details including datasets and hyperparameters are shown in supplementary material. 4.1 Part-of-Speech Tagging Settings Part-of-speech tagging (POS) is the task of assigning a part-of-speech label (e.g., noun, verb, adjective) to each word in a given text. In this paper, we choose BERT (Devlin et al., 2018) as the backbone and conduct experiments on three widely used Chinese POS datasets including Chinese Treebank (Xue et al., 2005) 5.0/6.0 and UD1.4 and English datasets including Wall Street Journal (WSJ) and the dataset proposed by Ritter et al. (2011). We report the span-level micro-averaged precision, recall and F1 for evaluation. Baselines We used the following baselines: • Joint-POS: Shao et al. (2017) jointly learns Chinese word segmentation and POS. • Lattice-LSTM: Zhang and Yang (2018) constructs a word-character lattice network. • Bert-Tagger: Devlin et al. (2018) treats partof-speech as a tagging task. Results Table 3 presents the experimental results on Chinese datasets. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we know, we are achieving SOTA performances on the three datasets. Focal loss only obtains a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in solving the data imbalance issue. Table 4 presents the experimental results for English datasets. 470 English CoNLL 2003 Model Prec. Rec. F1 ELMo(Peters et al., 2018) 92.22 CVT(Clark et al., 2018) 92.6 BERT-Tagger(Devlin et al., 2018) 92.8 BERT-MRC(Li et al., 2019) 92.33 94.61 93.04 BERT-MRC+FL 93.13 93.09 93.11 (+0.06) BERT-MRC+DL 93.22 93.12 93.17 (+0.12) BERT-MRC+DSC 93.41 93.25 93.33 (+0.29) English OntoNotes 5.0 Model Prec. Rec. F1 CVT (Clark et al., 2018) 88.8 BERT-Tagger (Devlin et al., 2018) 90.01 88.35 89.16 BERT-MRC(Li et al., 2019) 92.98 89.95 91.11 BERT-MRC+FL 90.13 92.34 91.22 (+0.11) BERT-MRC+DL 91.70 92.06 91.88 (+0.77) BERT-MRC+DSC 91.59 92.56 92.07 (+0.96) Chinese MSRA Model Prec. Rec. F1 Lattice-LSTM (Zhang and Yang, 2018) 93.57 92.79 93.18 BERT-Tagger (Devlin et al., 2018) 94.97 94.62 94.80 Glyce-BERT (Wu et al., 2019) 95.57 95.51 95.54 BERT-MRC(Li et al., 2019) 96.18 95.12 95.75 BERT-MRC+FL 95.45 95.89 95.67 (-0.08) BERT-MRC+DL 96.20 96.68 96.44 (+0.69) BERT-MRC+DSC 96.67 96.77 96.72 (+0.97) Chinese OntoNotes 4.0 Model Prec. Rec. F1 Lattice-LSTM (Zhang and Yang, 2018) 76.35 71.56 73.88 BERT-Tagger (Devlin et al., 2018) 78.01 80.35 79.16 Glyce-BERT (Wu et al., 2019) 81.87 81.40 80.62 BERT-MRC(Li et al., 2019) 82.98 81.25 82.11 BERT-MRC+FL 83.63 82.97 83.30 (+1.19) BERT-MRC+DL 83.97 84.05 84.01 (+1.90) BERT-MRC+DSC 84.22 84.72 84.47 (+2.36) Table 5: Experimental results for NER task. 4.2 Named Entity Recognition Settings Named entity recognition (NER) is the task of detecting the span and semantic category of entities within a chunk of text. Our implementation uses the current state-of-the-art model proposed by Li et al. (2019) as the backbone, and changes the MLE loss to DSC loss. Datasets that we use include OntoNotes4.0 (Pradhan et al., 2011), MSRA (Levow, 2006), CoNLL2003 (Sang and Meulder, 2003) and OntoNotes5.0 (Pradhan et al., 2013). We report span-level micro-averaged precision, recall and F1. Baselines We use the following baselines: • ELMo: a tagging model with pretraining from Peters et al. (2018). • Lattice-LSTM: Zhang and Yang (2018) constructs a word-character lattice, only used in Chinese datasets. • CVT: Clark et al. (2018) uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. • Bert-Tagger: Devlin et al. (2018) treats NER as a tagging task. • Glyce-BERT: Wu et al. (2019) combines Chinese glyph information with BERT pretraining. • BERT-MRC: Li et al. (2019) formulates NER as a machine reading comprehension task and achieves SOTA results on Chinese and English NER benchmarks. Results Table 5 shows experimental results on NER datasets. DSC outperforms BERT-MRC(Li et al., 2019) by +0.29, +0.96, +0.97 and +2.36 respectively on CoNLL2003, OntoNotes5.0, MSRA and OntoNotes4.0. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets. 4.3 Machine Reading Comprehension Settings The task of machine reading comprehension (MRC) (Seo et al., 2016; Wang et al., 2016; Wang and Jiang, 2016; Wang et al., 2016; Shen et al., 2017; Chen et al., 2017) predicts the answer span in the passage given a question and the passage. We followed the standard protocols in Seo et al. (2016), in which the start and end indexes of answer are predicted. We report Extract Match (EM) as well as F1 score on validation set. We use three datasets on this task: SQuAD v1.1, SQuAD v2.0 (Rajpurkar et al., 2016, 2018) and Quoref (Dasigi et al., 2019). Baselines We used the following baselines: • QANet: Yu et al. (2018b) builds a model based on convolutions and self-attentions. Convolutions are used to model local interactions and self-attention are used to model global interactions. • BERT: Devlin et al. (2018) scores each candidate span and the maximum scoring span is used as a prediction. • XLNet: Yang et al. (2019) proposes a generalized autoregressive pretraining method that 471 SQuAD v1.1 SQuAD v2.0 QuoRef Model EM F1 EM F1 EM F1 QANet (Yu et al., 2018b) 73.6 82.7 34.41 38.26 BERT (Devlin et al., 2018) 84.1 90.9 78.7 81.9 58.44 64.95 BERT+FL 84.67 91.25 78.92 82.20 60.78 66.19 (+0.57) (+0.35) (+0.22) (+0.30) (+2.34) (+1.24) BERT+DL 84.83 91.86 78.99 82.88 62.03 66.88 (+0.73) (+0.96) (+0.29) (+0.98) (+3.59) (+1.93) BERT+DSC 85.34 91.97 79.02 82.95 62.44 67.52 (+1.24) (+1.07) (+0.32) (+1.05) (+4.00) (+2.57) XLNet (Yang et al., 2019) 88.95 94.52 86.12 88.79 64.52 71.49 XLNet+FL 88.90 94.55 87.04 89.32 65.19 72.34 (-0.05) (+0.03) (+0.92) (+0.53) (+0.67) (+0.85) XLNet+DL 89.13 95.36 87.22 89.44 65.77 72.85 (+0.18) (+0.84) (+1.10) (+0.65) (+1.25) (+1.36) XLNet+DSC 89.79 95.77 87.65 89.51 65.98 72.90 (+0.84) (+1.25) (+1.53) (+0.72) (+1.46) (+1.41) Table 6: Experimental results for MRC task. MRPC QQP Model F1 F1 BERT (Devlin et al., 2018) 88.0 91.3 BERT+FL 88.43 91.86 (+0.43) (+0.56) BERT+DL 88.71 91.92 (+0.71) (+0.62) BERT+DSC 88.92 92.11 (+0.92) (+0.81) XLNet (Yang et al., 2019) 89.2 91.8 XLNet+FL 89.25 92.31 (+0.05) (+0.51) XLNet+DL 89.33 92.39 (+0.13) (+0.59) XLNet+DSC 89.78 92.60 (+0.58) (+0.79) Table 7: Experimental results for PI task. enables learning bidirectional contexts. Results Table 6 shows the experimental results for MRC task. With either BERT or XLNet, our proposed DSC loss obtains significant performance boost on both EM and F1. For SQuADv1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM. For SQuAD v2.0, the proposed method achieves 87.65 on EM and 89.51 on F1. On QuoRef, the proposed method surpasses XLNet by +1.46 on EM and +1.41 on F1. 4.4 Paraphrase Identification Settings Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We conduct experiments on the two widely-used datasets: MRPC (Dolan and Brockett, 2005) and QQP. F1 score is reported for comparison. We use BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019) as baselines. Results Table 7 shows the results. We find that replacing the training objective with DSC introduces performance boost for both settings, +0.58 for MRPC and +0.73 for QQP. 5 Ablation Studies 5.1 Datasets imbalanced to different extents It is interesting to see how differently the proposed objectives affect datasets imbalanced to different extents. We use the paraphrase identification dataset QQP (37% positive and 63% negative) for studies. To construct datasets with different imbalance degrees, we used the original QQP dataset to construct synthetic training sets with different positive-negative ratios. Models are trained on these different synthetic sets and then test on the same original test set. • Original training set (original) The original dataset with 363,871 examples, with 37% being positive and 63% being negative • Positive augmentation (+ positive) We created a balanced dataset by adding positive examples. We first randomly chose positive training examples in the original training set as templates. Then we used Spacy1 to retrieve entity mentions and replace them with new ones by linking mentions to their corresponding entities in DBpedia. The augmented set contains 458,477 examples, with 50% being positive and 50% being negative. • Negative augmentation (+ negative) We created a more imbalanced dataset. The size of the newly constructed training set and 1https://github.com/explosion/spaCy 472 original + positive + negative - negative + positive & negative BERT 91.3 92.27 90.08 89.73 93.14 BERT+FL 91.86(+0.56) 92.64(+0.37) 90.61(+0.53) 90.79(+1.06) 93.45(+0.31) BERT+DL 91.92(+0.62) 92.87(+0.60) 90.22(+0.14) 90.49(+0.76) 93.52(+0.38) BERT+DSC 92.11(+0.81) 92.92(+0.65) 90.78(+0.70) 90.80(+1.07) 93.63(+0.49) Table 8: The effect of different data augmentation ways for QQP in terms of F1-score. the data augmented technique are exactly the same as +negative, except that we chose negative training examples as templates. The augmented training set contains 458,477 examples, with 21% being positive and 79% being negative. • Negative downsampling (- negative) We down-sampled negative examples in the original training set to get a balanced training set. The down-sampled set contains 269,165 examples, with 50% being positive and 50% being negative. • Positive and negative augmentation (+ positive & +negative) We augmented the original training data with additional positive and negative examples with the data distribution staying the same. The augmented dataset contains 458,477 examples, with 50% being positive and 50% being negative. Results are shown in Table 8. We first look at the first line, with all results obtained using the MLE objective. We can see that + positive outperforms original, and +negative underperforms original. This is in line with our expectation since + positive creates a balanced dataset while +negative creates a more imbalanced dataset. Despite the fact that -negative creates a balanced dataset, the number of training data decreases, resulting in inferior performances. DSC achieves the highest F1 score across all datasets. Specially, for +positive, DSC achieves minor improvements (+0.05 F1) over DL. In contrast, it significantly outperforms DL for +negative dataset. This is in line with our expectation since DSC helps more on more imbalanced datasets. The performance of FL and DL are not consistent across different datasets, while DSC consistently performs the best on all datasets. 5.2 Dice loss for accuracy-oriented tasks? We argue that the cross-entropy objective is actually accuracy-oriented, whereas the proposed losses perform as a soft version of F1 score. To SST-2 SST-5 Model Acc Acc BERT+CE 94.90 55.57 BERT+DL 94.37 54.63 BERT+DSC 94.84 55.19 Table 9: The effect of DL and DSC on sentiment classification tasks. BERT+CE refers to fine-tuning BERT and setting cross-entropy as the training objective. explore the effect of the dice loss on accuracyoriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank (SST) datasets including SST-2 and SST-5. We fine-tuned BERTLarge with different training objectives. Experimental results for SST are shown in Table 9. For SST-5, BERT with CE achieves 55.57 in terms of accuracy, while DL and DSC perform slightly worse (54.63 and 55.19, respectively). Similar phenomenon is observed for SST-2. These results verify that the proposed dice loss is not accuracy-oriented, and should not be used for accuracy-oriented tasks. 5.3 Hyper-parameters in Tversky Index As mentioned in Section 3.3, Tversky index (TI) offers the flexibility in controlling the tradeoff between false-negatives and false-positives. In this subsection, we explore the effect of hyperparameters (i.e., α and β) in TI to test how they manipulate the tradeoff. We conduct experiments on the Chinese OntoNotes4.0 NER dataset and English QuoRef MRC dataset. Experimental results are shown in Table 10. The highest F1 on Chinese OntoNotes4.0 is 84.67 when α is set to 0.6 while for QuoRef, the highest F1 is 68.44 when α is set to 0.4. In addition, we can observe that the performance varies a lot as α changes in distinct datasets, which shows that the hyperparameters α, β acturally play an important role in TI. 6 Conclusion In this paper, we propose the dice-based loss to narrow down the gap between training objective and evaluation metrics (F1 score). Experimental results show that the proposed loss function help 473 α Chinese Onto4.0 English QuoRef α = 0.1 80.13 63.23 α = 0.2 81.17 63.45 α = 0.3 84.22 65.88 α = 0.4 84.52 68.44 α = 0.5 84.47 67.52 α = 0.6 84.67 66.35 α = 0.7 81.81 65.09 α = 0.8 80.97 64.13 α = 0.9 80.21 64.84 Table 10: The effect of hyperparameters in Tversky Index. We set β = 1 −α and thus we only list α here. to achieve significant performance boost without changing model architectures. Acknowledgement We thank all anonymous reviewers, as well as Qinghong Han, Wei Wu and Jiawei Wu for their comments and suggestions. The work is supported by the National Natural Science Foundation of China (NSFC No. 61625107 and 61751209). References Bernd Bohnet, Ryan T. McDonald, Gonc¸alo Sim˜oes, Daniel Andor, Emily Pitler, and Joshua Maynez. 2018. Morphosyntactic tagging with a meta-bilstm model over context sensitive token encodings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2642–2652. Haw-Shiuan Chang, Erik G. Learned-Miller, and Andrew McCallum. 2017. Active bias: Training more accurate neural networks by emphasizing high variance samples. In NIPS. N. V. Chawla, K. W. Bowyer, Lawrence O. Hall, and W. P. Kegelmeyer. 2002. Smote: Synthetic minority over-sampling technique. J. Artif. Intell. Res., 16:321– 357. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. arXiv preprint arXiv:1704.00051. Kean Chen, Jianguo Li, Weiyao Lin, John See, Ji Wang, Lingyu Duan, Zhibo Chen, Changwei He, and Junni Zou. 2019. Towards accurate one-stage object detection with ap-loss. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 5119–5127. Shijuan Chen, Haibo He, and Edwardo A. Garcia. 2010. Ramoboost: Ranked minority oversampling in boosting. IEEE Transactions on Neural Networks, 21:1624– 1642. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc V. Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Procfessing, Brussels, Belgium, October 31 November 4, 2018, pages 1914–1925. Pradeep Dasigi, Nelson F Liu, Ana Marasovic, Noah A Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. arXiv preprint arXiv:1908.05803. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Lee R Dice. 1945. Measures of the amount of ecologic association between species. Ecology, 26(3):297–302. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Yang Fan, Fei Tian, Tao Qin, Xiuping Li, and Tie-Yan Liu. 2018. Learning to teach. ArXiv, abs/1805.03643. Ross B. Girshick. 2015. Fast r-cnn. 2015 IEEE International Conference on Computer Vision (ICCV), pages 1440–1448. Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2013. Rich feature hierarchies for accurate object detection and semantic segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587. Fr´ederic Godin. 2019. Improving and Interpreting Neural Networks for Word-Level Prediction Tasks in Natural Language Processing. Ph.D. thesis, Ghent University, Belgium. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2017. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML. H. Kahn and A. W. Marshall. 1953. Methods of reducing sample size in monte carlo computations. Operations Research, 1(5):263–278. Anil Kanduri, Mohammad Hashem Haghbayan, Amir M. Rahmani, Muhammad Shafique, Axel Jantsch, and Pasi Liljeberg. 2018. adboost: Thermal aware performance boosting through dark silicon patterning. IEEE Trans. Computers, 67(8):1062–1077. Angelos Katharopoulos and Franc¸ois Fleuret. 2018. Not all samples are created equal: Deep learning with importance sampling. In ICML. Tom´aˇs Koˇcisk`y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´aabor Melis, and Edward Grefenstette. 2018. The narrativeqa reading 474 comprehension challenge. Transactions of the Association of Computational Linguistics, 6:317–328. Oldrich Kodym, Michal Spanel, and Adam Herout. 2018. Segmentation of head and neck organs at risk using CNN with batch dice loss. In Pattern Recognition 40th German Conference, GCPR 2018, Stuttgart, Germany, October 9-12, 2018, Proceedings, pages 105– 114. M. Pawan Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada., pages 1189–1197. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360. Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108–117, Sydney, Australia. Association for Computational Linguistics. H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua. 2015. A convolutional neural network cascade for face detection. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5325–5334. Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2019. A unified MRC framework for named entity recognition. CoRR, abs/1910.11476. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. arXiv preprint arXiv:1603.01354. Tomasz Malisiewicz, Abhinav Gupta, and Alexei A. Efros. 2011. Ensemble of exemplar-svms for object detection and beyond. In IEEE International Conference on Computer Vision, ICCV 2011, Barcelona, Spain, November 6-13, 2011, pages 89–96. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Yuxian Meng, Muyu Li, Wei Wu, and Jiwei Li. 2019. Dsreg: Using distant supervision as a regularizer. arXiv preprint arXiv:1905.11658. Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. 2016. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565–571. IEEE. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1):3–26. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Jiangmiao Pang, Kai Chen, Jianping Shi, Huajun Feng, Wanli Ouyang, and Dahua Lin. 2019. Libra R-CNN: towards balanced learning for object detection. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 821–830. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Sameer Pradhan, Mitchell P. Marcus, Martha Palmer, Lance A. Ramshaw, Ralph M. Weischedel, and Nianwen Xue, editors. 2011. Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task. ACL. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143–152, Sofia, Bulgaria. Association for Computational Linguistics. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:1137–1149. Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An experimental study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1524–1534, Edinburgh, Scotland, UK. Association for Computational Linguistics. Erik F Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. arXiv preprint cs/0306050. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. In Proceed475 ings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooperation with HLTNAACL 2003, Edmonton, Canada, May 31 - June 1, 2003, pages 142–147. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Reuben R. Shamir, Yuval Duchin, Jinyoung Kim, Guillermo Sapiro, and Noam Harel. 2019. Continuous dice coefficient: a method for evaluating probabilistic segmentations. CoRR, abs/1906.11031. Yan Shao, Christian Hardmeier, J¨org Tiedemann, and Joakim Nivre. 2017. Character-based joint segmentation and pos tagging for chinese using bidirectional rnncrf. arXiv preprint arXiv:1704.01314. Chen Shen, Holger R. Roth, Hirohisa Oda, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, and Kensaku Mori. 2018. On the influence of dice loss function in multi-class organ segmentation of abdominal CT using 3d fully convolutional networks. CoRR, abs/1801.05912. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1047– 1055. ACM. Th A Sorensen. 1948. A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on danish commons. Biol. Skar., 5:1–34. Carole H. Sudre, Wenqi Li, Tom Vercauteren, S´ebastien Ourselin, and M. Jorge Cardoso. 2017. Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Qu´ebec City, QC, Canada, September 14, 2017, Proceedings, pages 240–248. Amos Tversky. 1977. Features of similarity. Psychological review, 84(4):327. Sergi Valverde, Mariano Cabezas, Eloy Roura, Sandra Gonz´alez-Vill`a, Deborah Pareto, Joan C Vilanova, Llu´ıs Rami´o-Torrent`a, `Alex Rovira, Arnau Oliver, and Xavier Llad´o. 2017. Improving automated multiple sclerosis lesion segmentation with a cascaded 3d convolutional neural network approach. NeuroImage, 155:159–168. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211. Wei Wu, Yuxian Meng, Qinghong Han, Muyu Li, Xiaoya Li, Jie Mei, Ping Nie, Xiaofei Sun, and Jiwei Li. 2019. Glyce: Glyph-vectors for chinese character representations. arXiv preprint arXiv:1901.10125. Naiwen Xue, Fei Xia, Fudong Choiu, and Marta Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018a. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018b. Qanet: Combining local convolution with global self-attention for reading comprehension. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm. arXiv preprint arXiv:1805.02023. A Dataset Details A.1 Part-of-Speech Tagging Datasets We conduct experiments on three widely used benchmark, i.e., Chinese Treebank 5.02/6.03 and UD1.44. • CTB5 is a Chinese dataset for tagging and parsing, which contains 507,222 words, 824,983 characters and 18,782 sentences extracted from newswire sources, including 698 articles from Xinhua (1994-1998), 55 articles from Information Services Department of HKSAR (1997) and 132 articles from Sinorama Magazine (1996-1998 & 2000-2001). • CTB6 is an extension of CTB5, containing 781,351 words, 1,285,149 characters and 28,295 sentences. • UD is the abbreviation of Universal Dependencies, which is a framework for consistent 2https://catalog.ldc.upenn.edu/ LDC2005T01 3https://catalog.ldc.upenn.edu/ LDC2007T36 4https://universaldependencies.org/ 476 annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. In this work, we use UD1.4 for Chinese POS tagging. A.2 Named Entity Recognition Datasets For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.05 and MSRA6 , and English datasets, i.e., CoNLL2003 7 and OntoNotes5.08. • CoNLL2003 is an English dataset with 4 entity types: Location, Organization, Person and Miscellaneous. We followed data processing protocols in (Ma and Hovy, 2016). • English OntoNotes5.0 consists of texts from a wide variety of sources and contains 18 entity types. We use the standard train/dev/test split of CoNLL2012 shared task. • Chinese MSRA performs as a Chinese benchmark dataset containing 3 entity types. Data in MSRA is collected from news domain. Since the development set is not provided in the original MSRA dataset, we randomly split the training set into training and development splits by 9:1. We use the official test set for evaluation. • Chinese OntoNotes4.0 is a Chinese dataset and consists of texts from news domain, which has 18 entity types. In this paper, we take the same data split as Wu et al. (2019) did. A.3 Machine Reading Comprephension Datasets For MRC task, we use three datasets: SQuADv1.1/v2.09 and Queref10 datasets. • SQuAD v1.1 and SQuAD v2.0 are the most widely used QA benchmarks. SQuAD1.1 is a collection of 100K crowdsourced question-answer pairs, and SQuAD2.0 extends SQuAD1.1 allowing no short answer exists in the provided passage. 5https://catalog.ldc.upenn.edu/ LDC2011T03 6http://sighan.cs.uchicago.edu/ bakeoff2006/ 7https://www.clips.uantwerpen.be/ conll2003/ner/ 8https://catalog.ldc.upenn.edu/ LDC2013T19 9https://rajpurkar.github.io/ SQuAD-explorer/ 10https://allennlp.org/quoref • Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems, containing 24K questions over 4.7K paragraphs from Wikipedia. A.4 Paraphrase Identification Datasets Experiments are conducted on two PI datasets: MRPC11 and QQP12. • MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations of whether the sentence pairs are semantically equivalent. The MRPC dataset has imbalanced classes (6800 pairs in total, and 68% for positive, 32% for negative). • QQP is a collection of question pairs from the community question-answering website Quora. The class distribution in QQP is also unbalanced (over 400,000 question pairs in total, and 37% for positive, 63% for negative). 11https://www.microsoft.com/en-us/ download/details.aspx?id=52398 12https://www.quora.com/q/quoradata/ First-Quora-Dataset-Release-Question-Pairs
2020
45
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008–5020 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5008 Asking and Answering Questions to Evaluate the Factual Consistency of Summaries Alex Wang∗ New York University [email protected] Kyunghyun Cho Facebook AI New York University CIFAR Associate Fellow [email protected] Mike Lewis Facebook AI [email protected] Abstract Practical applications of abstractive summarization models are limited by frequent factual inconsistencies with respect to their input. Existing automatic evaluation metrics for summarization are largely insensitive to such errors. We propose QAGS,1 an automatic evaluation protocol that is designed to identify factual inconsistencies in a generated summary. QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source. To evaluate QAGS, we collect human judgments of factual consistency on model-generated summaries for the CNN/DailyMail (Hermann et al., 2015) and XSUM (Narayan et al., 2018) summarization datasets. QAGS has substantially higher correlations with these judgments than other automatic evaluation metrics. Also, QAGS offers a natural form of interpretability: The answers and questions generated while computing QAGS indicate which tokens of a summary are inconsistent and why. We believe QAGS is a promising tool in automatically generating usable and factually consistent text. Code for QAGS will be available at https://github. com/W4ngatang/qags. 1 Introduction Automatic summarization aims to produce summaries that are succinct, coherent, relevant, and — crucially — factually correct. Recent progress in conditional text generation has led to models that can generate fluent, topical summaries (Lewis et al., 2019). However, model-generated summaries frequently contain factual inconsistencies, limiting their applicability (Kryscinski et al., 2019a). The problem of factual inconsistency is due in part to the lack of automatic evaluation metrics that can detect such errors. Standard metrics for 1Pronounced “kags”. evaluating generated text are predominantly based on counting n-grams, which weigh all n-grams equally and are insensitive to semantic errors. This inadequacy leaves human evaluation as the primary method for evaluating the factual consistencies, which has been noted to be challenging even for humans (Daume III and Marcu, 2005; Kryscinski et al., 2019b), in addition to being slow and costly. We argue that evaluation metrics that are able to capture subtle semantic errors are required to build better models. In this work, we introduce a general framework for evaluating conditional text generation that is designed to detect factual inconsistencies in generated text with respect to some input. Our framework consists of three steps: (1) Given a generated text, a question generation (QG) model generates a set of questions about the text. (2) We then use question answering (QA) models to answer these questions given both the input and the generated text. (3) A quality score is computed based on the similarity of corresponding answers. This approach leverages recent progress in QA and QG to ask and answer human readable, ontopic questions (Devlin et al., 2019; Song et al., 2019). It only assumes access to a question answering dataset to train the QG and QA models, and is applicable to any modality where a QA model is available, e.g. text, images, or knowledge graphs. We use this framework to develop QAGS (Question Answering and Generation for Summarization), a metric for evaluating the factual consistency of abstractive document summaries. Compared to commonly used automatic metrics such as ROUGE (Lin, 2004), QAGS shows dramatically higher correlations with human judgements of factuality, for example achieving a Pearson correlation coefficient of 54.52 on the CNN/DailyMail summarization task, compared to 17.72 for ROUGE-2. QAGS also achieves new state-of-the-art results on evaluating the factuality of summaries, outper5009 forming recently proposed NLI models for this task (Kryscinski et al., 2019b). Finally, we analyse the robustness of QAGS through an ablation study. QAGS shows robustness to the quality of the underlying QG and QA models, the domain of the models, and the number of questions asked. Even under the worst ablation settings, QAGS still has stronger correlation with human judgments than other automatic metrics. Overall, we contribute the following: (1) We introduce QAGS, an automatic model-based evaluation metric for measuring the factual consistency of model-generated text. (2) We collect a new set of human judgments of factual consistency of model-generated summaries for two summarization datasets. We demonstrate that QAGS correlates with these judgments significantly better than other automatic metrics. (3) We show via ablations that QAGS is robust to a number of factors including underlying model quality and domain mismatch. (4) We analyze the questions and answers produced in computing QAGS to illustrate which parts of summaries are inconsistent. (5) We will release models and code to compute QAGS. 2 Background: Automatically Evaluating Machine Generated Text Standard approaches to evaluating generated text are primarily based on counting n-gram overlap. These methods assume access to one or more reference texts, and score a generated summary based on the precision and recall of all reference n-grams in the generated summary. We briefly describe the most common metrics in this family, and refer readers to Liu et al. (2016) for further discussion. ROUGE (Lin, 2004) was developed specifically for evaluating automatic summarization, and its variants are the de facto standard for such. The most common variant is ROUGE-n (typically n ∈ {1, 2}), which computes the F1 score for all reference n-grams in the generated summary. ROUGEL, another commonly used variant, is the length of the longest common subsequence (possibly nonconsecutive) between a summary and references. BLEU (Papineni et al., 2002) is closely related to ROUGE but was developed for machine translation. BLEU computes the precision of the reference ngrams in the generated summary. METEOR (Lavie and Agarwal, 2007) extends BLEU by using an alignment between the generated text and a reference, as well as using stemming and synonym replacement for more flexible n-gram matching. We identify two key deficiencies when using these n-gram based evaluation metrics to detect factual inconsistencies in generated text. First, these metrics require one or more reference texts to compare against. Obtaining references can be expensive and challenging, and as such many text generation datasets contain only a single reference. This problem is exacerbated with highentropy generation tasks, such as summarization or dialogue, where there is a very large number of acceptable outputs. In these settings, comparing against a single reference is woefully inadequate. Second, given a reference to compare against, n-gram based approach weigh all portions of the text equally, even when only a small fraction of the n-grams carry most of the semantic content. Factual inconsistencies caused by minor changes may be drowned out by otherwise high n-gram overlap, making these metrics insensitive to these errors. For example, the sentences “I am writing my paper in Vancouver.” and “I am not writing my paper in Vancouver.” share nearly all unigrams and bigrams despite having the opposite meaning. 3 A Framework for Automatically Evaluating Factual Consistency We introduce a framework for automatically detecting factual inconsistencies in generated text while also addressing the deficiencies of current approaches. Let X and Y be sequences of tokens coming from a vocabulary V where X is a source text and Y is a summary of X. We define p(Q|Y ) as a distribution over all possible questions Q given summary Y , and p(A|Q, X) and p(A|Q, Y ) as distributions over all possible answers A to a particular question Q given either the source X or the summary Y . We constrain the questions Q and answers A to also be sequences of tokens from V . Then the factual consistency of the summary Y is EQ∼p(Q|Y )  D p(A|Q, X), p(A|Q, Y )  , (1) where D is some function measuring the similarity of the two answer distributions. This expression is maximized when Y contains a subset of the information in X such that it produces the same answer for any question from p(Q|Y ). This happens trivially when Y = X, i.e. we take X as its own summary, but in many cases this solution is unacceptable. 5010 Summarization Kevin Sinfield scored his first try of the season against Castleford. Leeds Rhino scored unbeaten run against Tigers to six matches. Ryan Hall was sent to Leeds Rhino for first time in his career . Leeds showed they are in good shape to cope with Kevin Sinfield’s retirement as they claimed a 26 - 12 derby victory over Castleford in front of a sell-out crowd at the Mend-a-Hose Jungle. [...] Ryan Hall was sent to the sin-bin for the first time in his career […] Joel Moon scored his first try of the season […] Leeds extended their unbeaten run against the Tigers to six matches Generated Questions Who scored their first try of the season? Joel Moon Kevin Sinfield Who was sent to Leeds Rhino for the first time? <unanswerable> Ryan Hall How many matches did they win? Six matches Six matches Summary Answers Source Answers Source Summary Figure 1: Overview of QAGS. A set of questions is generated based on the summary. The questions are then answered using both the source article and the summary. Corresponding answers are compared using a similarity function and averaged across questions to produce the final QAGS score. This framework addresses the two issues with ngram based approaches. Instead of requiring a reference to compare against, our framework asks questions based on the generation itself, and compares answers with the provided source text. Also, the use of questions focuses the metric on the semantically relevant parts of the generated text, rather than weighting all parts of the text equally. In practice, exactly computing the expectation in Equation 1 is intractable due to the large space of possible questions. One potential workaround is to randomly sample questions from p(Q|Y ), but this suffers from high variance and requires many samples to obtain a good estimate. Instead, we focus on producing highly probable questions, e.g. as produced by beam search, which may be biased in the limit, but will require fewer questions to estimate because of the higher quality of the questions. 4 QAGS Using this framework requires specifying the question distribution p(Q|Y ), the answer distributions p(A|Q, ∗), and the answer similarity function D. We apply this framework to summarization to develop QAGS and describe our instantiations of these components. Question Generation To instantiate p(Q|Y ), we draw on recent work on automatic question generation (QG), which models this distribution using neural seq2seq models (Du et al., 2017; Krishna and Iyyer, 2019). We over-sample questions, and then filter out low quality questions as follows. First, we train and generate from answerconditional QG models. During training, the model receives both the answer and the source article, and is trained to maximize the likelihood of the paired question. At test time, given a summary Y , we determine candidate answers. We condition on these answers and the summary to generate questions. Next, we filter out low-quality questions using a number of heuristics, such as duplicates and questions less than three tokens long. We also found it especially useful to run the QA model (see next section) on all of the candidate questions, and filter out questions for which the QA model predicted no answer or a different answer than expected. 5011 Question Answering We instantiate the answer distributions p(A|Q, ∗) as extractive QA models, for simplicity. In using extractive QA models, we assume the facts are represented as text spans in the article and summary. Future work should explore using abstractive QA models, which could match paraphrases of the same answer. Answer Similarity We use token-level F1 to compare answers, which is standard for extractive QA and equivalent to defining D as F1(arg max p(A|Q, X), arg max p(A|Q, Y )) The QAGS Score Given these components, we obtain the QAGS score of a generation by (1) generating K questions conditioned on the summary, (2) answering the questions using both the source article and the summary to get two sets of answers, (3) comparing corresponding answers using the answer similarity metric, and (4) averaging the answer similarity metric over all questions. We depict this process in Figure 1. 5 Experiments 5.1 Human Evaluation We test whether QAGS accurately measures the factual consistency of a summary with respect to a source article by computing correlations with human judgments of factual consistency. Datasets We focus on abstractive summarization, which is particularly interesting because factual consistency with the original text is crucial to usability, and a lack of such consistency has plagued abstractive neural summarization models (Cao et al., 2018; Falke et al., 2019; Kryscinski et al., 2019b, i.a.). To compare with prior work on evaluating summarization, we use two common abstractive summarization datasets, CNN/Daily Mail (CNNDM, Hermann et al., 2015; Nallapati et al., 2016) and XSUM (Narayan et al., 2018). CNN/DM is a standard dataset for summarization that consists of CNN and DailyMail articles. Each reference summary consists of the concatenation of three editor-written, bullet point highlights. For summaries, we use 235 test outputs from Gehrmann et al. (2018). XSUM was created by taking the first sentence of a news article as the summary, and using the rest of the article as the source. Consequently, XSUM summaries are significantly more abstractive than Metric CNN/DM XSUM ROUGE-1 28.74 13.22 ROUGE-2 17.72 8.95 ROUGE-L 24.09 8.86 METEOR 26.65 10.03 BLEU-1 29.68 11.76 BLEU-2 25.65 11.68 BLEU-3 23.96 8.41 BLEU-4 21.45 5.64 BERTScore 27.63 2.51 QAGS 54.53 17.49 Table 1: Summary-level Pearson correlation coefficients between various automatic metrics and human judgments of correctness for summarization datasets. All correlations are significant at p < .01 and p < .05 for CNN/DM and XSUM, respectively. QAGS obtains substantially higher correlations than all other automatic metrics. those of CNN/DM, and extractive summarization models perform poorly on this dataset. We found that while the XSUM summaries are more abstractive, frequently there are facts (e.g. first names) in the summary that are not available in the “article”. This quirk made it especially difficult for humans and QAGS to tell when factual errors were being made by the summarization model. To remedy this, for human evaluation and QAGS, we prepend the summary back to the “article”. We use a subset of 239 test outputs from BART fine-tuned on XSUM (Lewis et al., 2019). Annotation Protocol We collect human judgments on Amazon Mechanical Turk2 via ParlAI (Miller et al., 2017). We present summaries one sentence at a time, along with the entire article. For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article. Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent. Workers are paid $1 per full summary annotated. See Appendix A for further details. We collect 3 annotations per summary. To obtain a single consistency score per summary, we first take the majority vote for each sentence, then average the binary scores across summary sentences to produce a final score. Inter-annotator agreement as measured by Krip2https://www.mturk.com/ 5012 pendorff’s α is 0.51 and 0.34 for CNN/DM and XSUM, respectively indicating “moderate” and “fair” agreement (Ageeva et al., 2015). While not perfect, these agreement numbers are in-line with similar figures from previous work on summarization evaluation (Daume III and Marcu, 2005). 5.2 Experimental Details Question Generation We train answerconditional QG models by fine-tuning a pretrained BART language model (Lewis et al., 2019) on NewsQA (Trischler et al., 2017), a dataset consisting of CNN articles and crowdsourced questions. During training, the model receives the concatenation of the source article and an answer, and is trained to predict the question. The answer, source article, and question are concatenated with intervening special tokens to mark the boundaries. At test time, the model receives the concatentation of a summary and an expected answer, and outputs question candidates. For each summary, we extract 10 named entities and noun phrases as answer candidates using the en-web-sm spaCy model.3 For each summary-answer pair, we generate questions using beam search with width 10, for a total of 100 question candidates. We experimented with generating via top-k (Holtzman et al., 2019) and top-p (Fan et al., 2018) sampling, but the generated questions, while diverse, were noisy and frequently nongrammatical. After filtering, we use the K = 20 most probable questions. If a summary has too few filtered questions, we randomly sample questions to reach the required number. For additional filtering and training details, see Appendix B. We implement these models with fairseq (Ott et al., 2019). Question Answering We train extractive QA models by fine-tuning BERT (Devlin et al., 2019) on SQuAD2.0 (Rajpurkar et al., 2018). We use the large-uncased BERT variant via the transformers library (Wolf et al., 2019). We found that allowing the model to predict that a question is unanswerable, as is the case in SQuAD2.0, is particularly useful in filtering out bad questions, as questions based on hallucinated facts in the summary should be unanswerable using the source article. Baselines We compare against a number of automatic evaluation metrics: ROUGE (Lin, 2004), 3https://spacy.io/api/entityrecognizer METEOR (Lavie and Agarwal, 2007), BLEU (Papineni et al., 2002), and BERTScore (Zhang et al., 2019). The latter uses BERT representations to compute an alignment between generation and reference tokens, and which is then used to compute a soft version of unigram F1. We use the large-uncased BERT variant. 5.3 Results We present Pearson correlations between humanjudged consistency scores and various automatic metrics in Table 1. For CNN/DM, all results are significant with p < 0.01; for XSUM, all results are significant with p < .05. QAGS strongly outperforms other automatic evaluation metrics in terms of correlation with the summary-level human judgments of factual consistency. BLEU and ROUGE perform comparably, and lower order n-gram metrics work better. BERTScore matches the best ngram metrics on CNN/DM, but the worst overall on XSUM. On CNN/DM, QAGS obtains nearly twice the correlation of the next best automatic metric (BLEU-1). We speculate that this large increase is due to the sensitivity of the QA model to the sentence fusing behavior exhibited in many summarization models trained on CNN/DM (Lebanoff et al., 2019). When two sentences are fused to produce an incorrect summary statement, the QA model produces different answers when using the source article than when using the summary. On XSUM, all metrics correlate worse with human judgments than on CNN/DM, which reflects the fact that XSUM is more abstractive. QAGS still outperforms the next best automatic metric. 5.4 Ablations A potential issue with model-based evaluation is that the quality of the evaluation metric may depend heavily on specific hyperparameter settings. We explore the extent to which this is true with QAGS by performing ablations on several factors. Model Quality We first consider the degree to which the quality of the underlying models impacts their evaluation capabilities. For QA quality, we answer this question by training QA models of varying quality by finetuning different versions of BERT on SQuAD. We present results in Table 2. The QA models perform similarly despite substantially different performances on the SQuAD develop5013 QA model SQuAD CNN/DM XSUM (F1) (Pear.) (Pear.) bert-base 75.95 55.20 20.71 bert-large 81.57 54.53 17.49 bert-large-wwm 84.36 51.36 18.07 Table 2: Pearson correlations between human judgments of factual consistency and QAGS using QA models of different qualities, as measured by performance on the SQuAD2.0 development set (F1). The correlations are stable across QA model quality. NewsQA CNN/DM XSUM (ppl.) (Pear.) (Pear.) 5.48 54.53 17.49 9.50 50.09 19.93 18.56 47.92 16.38 Table 3: Pearson correlations between human judgments of factual consistency and QAGS with QG models of varying quality, as measured by perplexity on the NewsQA development set. We see some decrease in correlation on CNN/DM as QG perplexity increases, though we do not see a similar trend for XSUM. ment set. Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments. On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large. On XSUM, bert-base slightly outperforms the other two BERT variants. These results indicate that QAGS is fairly robust to the quality of the underlying QA model, though we note that BERT is a strong QA baseline, and using weaker QA models might lead to larger performance dropoffs. To ablate QG quality, we use models with increasing perplexity on the NewsQA development set. Results in Table 3 show that QAGS is robust to the QG model quality, with some decrease in correlation with human judgments as perplexity increases on CNN/DM, and no clear trend on XSUM. Even the weakest QG model still significantly outperforms all other automatic metrics in Table 1. Domain Effects Our approach relies on having a labeled dataset to train QG and QA models. However, for relatively niche domains, such a labeled QA/QG dataset may not exist. Instead, we may need to resort to using models trained on outof-domain data, leading to domain shift effects that negatively impact the quality of the QAGS scores. We simulate this setting by fine-tuning the # Questions CNN/DM XSUM 5 41.61 15.63 10 41.17 15.49 20 54.53 17.49 50 57.94 17.74 Table 4: Pearson correlation coefficients between QAGS scores with varying number of questions and human judgments of correctness for summarization datasets. The correlation increases with the number of questions used, but with decreasing marginal benefit. QG model on SQuAD, which is of similar size to NewsQA but drawn from Wikipedia articles rather than CNN articles, which exactly matches the genre of the summarization datasets. Evaluating with this QG model, we get correlations of 51.53 and 15.28 with human judgments on CNN/DM and XSUM respectively, versus 54.53 and 17.49 when using the NewsQA-tuned QG model. The drop in performance indicates a negative domain shift effect. However using the SQuAD-tuned QG model still substantially outperforms all other automatic metrics, again pointing to the robustness of QAGS. Number of Questions Next, we investigate the correlation with human judgments when varying the number of questions used. Results in Table 4 show that increasing the number of questions used improves correlations with human judgments. We observe a large increase when moving from 10 to 20 questions, and a smaller increase from 20 to 50 questions, indicating decreasing marginal benefit moving beyond 50 questions. However, we observe frequent clusters of generated questions that only differ by a few tokens. Encouraging greater diversity when generating questions might lead to better correlations when more questions are used. Still, With just 5 questions used QAGS substantially outperforms other automatic metrics, which indicates its robustness. Answer Similarity Metric Finally, we consider using exact match as an alternative answer similarity metric. Exact match is another common evaluation metric for extractive QA, and is more restrictive than F1. When using EM, we obtain Pearson correlations with human judgments of 45.97 and 18.10 on CNN/DM and XSUM, as opposed to 54.53 and 17.49 when using F1. 5014 Model/Metric % Correct (↑) Random 50.0% BERT NLI 64.1% ESIM 67.6% FactCC 70.0% QAGS 72.1% Table 5: Results on the sentence ranking task from Falke et al. (2019). Results using BERT NLI and ESIM are from Falke et al. (2019); FactCC is from Kryscinski et al. (2019b). QAGS outperforms previous work. 6 Re-ranking with QAGS Several works explore the use of natural language inference (NLI) models to detect factual consistency in generated text (Welleck et al., 2019; Falke et al., 2019). We compare against these methods by evaluating on the sentence ranking experiment from Falke et al. (2019). The experiment uses 373 triplets of source sentences from CNN/DM and two summary sentences generated from the model from Chen and Bansal (2018). One summary sentence is factually consistent with the source sentence, and the other is inconsistent. A metric (or model) is evaluated based on how often it ranks the consistent sentence higher than the inconsistent sentence. We present the results in Table 5. Results using two NLI models fine-tuned on MultiNLI (Williams et al., 2018), BERT NLI, and ESIM (Chen et al., 2017), are from Falke et al. (2019). FactCC (Kryscinski et al., 2019b) is an NLI-based factchecking model that is trained on a dataset tailor made for detecting factual inconsistencies in generated text. QAGS outperforms these methods, while requiring no special supervision for this task. 7 Qualitative Analysis Interpreting QAGS The questions and answers produced in computing QAGS are directly interpretable, and highlight errors in summaries. We present examples of articles, summaries, and the QAGS questions and answers in Table 6. On the first example (Table 6, top), QAGS detects several factual inconsistencies in the generated summary: The summary mistakes the first name of the attacker, the location of the attack, and the weapons used. Because the QG model focuses on these details, QAGS is able to correctly penalize the summary for its hallucinations. Because the answer candidates used are mostly named entities and noun phrases, QAGS is particularly effective at detecting errors of this kind. Using more diverse answer candidates may broaden the set of inconsistencies that QAGS is able to detect. The second example (Table 6, bottom), illustrates failure modes of QAGS. For example, the QA model incorrectly marks question 2 as unanswerable. On question 4, both answers produced are correct, but because they have no common tokens, they are marked inconsistent by QAGS. Error Analysis The interpretability of QAGS allows for error analysis on the metric. We manually annotate 400 triplets of generated questions, article answers, and summary answers that are produced in computing QAGS on the XSUM summaries, and label them by the quality of the generated questions, predicted answers, and answer similarity scores. Among the generated questions, 8.75% are nonsensical, while 3.00% are well-formed but unanswerable using the generated summary they were conditioned upon. These figures indicate that the vast majority of questions are understandable and on-topic. We frequently observe multiple questions with slightly different wordings, which is likely due to the low number of answer candidates in XSUM summaries (which are one sentence long) and due to beam search. 8.25% of questions are well-formed but unanswerable using the source, which is usually due to a hallucinated fact in the summary that the QG model turns into a question. Among predicted answers, 1.75% of questions are potentially answerable using the summary, but are incorrectly answered. This percentage increases to 32.50% for the article, which indicates that the transfer ability of the QA model is lacking. In a small number of cases, we found that while a question had a single answer in the summary, it could have multiple answers in the article. Finally, for 8.00% of the examples, the question is answered correctly using both the article and summary, but the answers have high lexical variation such that F1 score fails to detect their similarity. While this happens in a relatively small number of cases, exploring similarity metrics other than n-gram based approaches could be useful. Limitations We emphasize that QAGS and our overall framework are specifically designed to detect factual inconsistencies in generated summaries relative to the source article. QAGS does not measure other desirable properties of generated text, 5015 Article: On Friday, 28-year-old Usman Khan stabbed reportedly several people at Fishmongers’ Hall in London with a large knife, then fled up London Bridge. Members of the public confronted him; one man sprayed Khan with a fire extinguisher, others struck him with their fists and took his knife, and another, a Polish chef named ukasz, harried him with a five-foot narwhal tusk. [...] Summary : On Friday afternoon , a man named Faisal Khan entered a Cambridge University building and started attacking people with a knife and a fire extinguisher . Question 1: What did the attacker have ? Article answer: a large knife Summary answer: a knife and a fire extinguisher Question 2: When did the attack take place ? Article answer: Friday Summary answer: Friday afternoon Question 3: What is the attacker’s name ? Article answer: Usman Khan Summary answer: Faisal Khan Question 4: Where did the attack take place ? Article answer: Fishmongers’ Hall Summary answer: Cambridge University building Article: In findings published on Wednesday in the journal PLOS ONE, an international team of scientists report ancient Egyptians captured sacred ibises (Threskiornis aethiopicus) from the wild for use in ritual sacrifice rather than domesticating the birds. [. . . ] The team collected DNA samples from mummified birds collected from six separate catacombs including sites at Abydos, Saqqara, and Tuna el-Gebel with permission from the Egyptian Ministry of State for Antiquity, and several museums offered to send tissue samples from the mummified ibises in their collections. [...] Summary : Archaeologists have used DNA samples from ancient ibis birds to determine whether the birds were domesticated or sacrificed in ancient Egypt Question 1: Archaeologists have used what to determine whether the birds were domesticated ? Article Answer: hatchery structures Summary Answer: DNA samples Question 2: Who used DNA samples to determine whether the birds were domesticated ? Article Answer: [NO ANSWER] Summary Answer: Archaeologists Question 3: What are archeologists using to determine whether the birds were domesticated ? Article Answer: DNA samples Summary Answer: DNA samples Question 4: Where were the birds found? Article Answer: six separate catacombs Summary Answer: ancient Egypt Table 6: Example questions and answers generated when computing QAGS. The questions are overwhelmingly fluent and relevant. The answers indicate which tokens in the summary are factually consistent or inconsistent. The news articles are originally from https://en.wikinews.org/wiki/Bystanders_foil_knife-weilding_ man_on_London_Bridge_with_fire_extinguisher,_whale_tusk and https://en.wikinews.org/ wiki/Ancient_Egyptians_collected_wild_ibis_birds_for_sacrifice,_says_study. including fluency, readability, or factual recall. We therefore recommend using QAGS in conjunction with complementary evaluation metrics. The choices of QG and QA models in QAGS are particular to abstractive summarization and may require adaptation to be used for other conditional text generation tasks. For example, we expect that extractive summarization models may obtain nearly perfect QAGS scores because facts and statements are directly copied from the source article. 8 Related Work Automatic summarization and its evaluation are long-standing lines of work in NLP, dating at least as far back as the Document Understanding Conferences (Chali and Kolla, 2004). The primary evaluation metric then and now is ROUGE (Lin, 2004), though much work has demonstrated the limited ability of ROUGE and its relatives to evaluate summaries (Dorr et al., 2004; Liu and Liu, 2009; Kedzie et al., 2018, i.a.). Other metrics have focused on specific aspects of summarization quality, including content selection (Nenkova and Passonneau, 2004), relevance prediction (Daume III and Marcu, 2005), and many more. The idea of evaluating summaries by their ability to answer a set of questions is also long-standing (Mani et al., 1999). Like our work, Eyal et al. 5016 (2019) and Scialom et al. (2019) extend this line of work by incorporating neural network modules. We diverge from these works in two important ways. First, both works use Cloze-style questions, which are generated by masking entities in either the source document or the reference summary. We instead generate the questions with a model, allowing a much greater range of questions. Second, we produce questions conditioned on the generated summary, rather than the reference summary or source article. Producing questions from the generated summary is more appropriate for verifying the accuracy of the text, whereas using the reference or source measures content selection. There has been a recent resurgence of work leveraging NLU models for evaluating the factuality of generated text. Goodrich et al. (2019) use information extraction models to measure factual overlap, but facts are restricted to pre-defined schemas. Falke et al. (2019) investigate the use of NLI models to evaluate the factual correctness of CNN/DM summaries, and conclude that current NLI models are too brittle to be reliably used in this manner. Kryscinski et al. (2019b) train a NLI-based factchecking model by building a dataset of factual inconsistencies based on noise heuristics. Our QA approach allows a finer-grained analysis, because NLI operates on complete sentences, whereas QAGS can ask many different questions about the same sentence. 9 Conclusion We introduce a framework for automatically detecting factual inconsistencies in conditionally generated texts and use this framework to develop QAGS, a metric for measuring inconsistencies in abstractive summarization. QAGS correlates with human judgments of factuality significantly better than standard automatic evaluation metrics for summarization, and outperforms related NLI-based approaches to factual consistency checking. QAGS is naturally interpretable: The questions and answers produced in computing QAGS indicate which tokens in a generated summary are inconsistent and why. The framework we present is general, and extending it to other conditional text generation tasks such as image captioning or machine translation is a promising directions. Inspecting the generated questions and answers, we identify the transfer ability of QA models and the rigidity of F1 score as a measure of answer similarity as two key performance bottlenecks. We expect improvements in either would straightforwardly improve the quality of QAGS evaluation. Additionally, incorporating a content selection mechanism to focus the generated questions on salient facts is a promising direction. Overall, we believe QAGS demonstrates the potential of this framework to quantify and incentivize factually consistent text generation. Acknowledgments We thank Margaret Li and Jack Urbanek for help with Amazon Mechanical Turk. AW is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1342536. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. KC was partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). References Ekaterina Ageeva, Mikel L. Forcada, Francis M. Tyers, and Juan Antonio P´erez-Ortiz. 2015. Evaluating machine translation for assimilation via a gap-filling task. In Proceedings of the 18th Annual Conference of the European Association for Machine Translation, pages 137–144, Antalya, Turkey. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. In Thirty-Second AAAI Conference on Artificial Intelligence. Yllias Chali and Maheedhar Kolla. 2004. Summarization techniques at duc 2004. In In Proceedings of the Document Understanding Conference. Citeseer. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Hal Daume III and Daniel Marcu. 2005. Bayesian summarization at duc and a suggestion for extrinsic 5017 evaluation. In Proceedings of the Document Understanding Conference, DUC-2005, Vancouver, USA. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Bonnie Dorr, Christof Monz, Douglas Oard, David Zajic, and Richard Schwartz. 2004. Extrinsic evaluation of automatic metrics for summarization. Technical report, MARYLAND UNIV COLLEGE PARK INST FOR ADVANCED COMPUTER STUDIES. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342– 1352. Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation metric for news article summarization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3938–3948. Tobias Falke, Leonardo FR Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 2214–2220. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109. Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, pages 166– 175, New York, NY, USA. ACM. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693–1701. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018. Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818–1828. Kalpesh Krishna and Mohit Iyyer. 2019. Generating question-answer hierarchies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2321–2334, Florence, Italy. Association for Computational Linguistics. Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019a. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Volume 1 (Long and Short Papers). Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019b. Evaluating the factual consistency of abstractive text summarization. Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228–231. Association for Computational Linguistics. Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Analyzing sentence fusion in abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 104–110. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. arXiv preprint 1910.13461. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132. Feifan Liu and Yang Liu. 2009. Exploring correlation between rouge and human evaluation on meeting summaries. IEEE Transactions on Audio, Speech, and Language Processing, 18(1):187–196. 5018 Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. Inderjeet Mani, David House, Gary Klein, Lynette Hirschman, Therese Firmin, and Beth M Sundheim. 1999. The tipster summac text summarization evaluation. In Ninth Conference of the European Chapter of the Association for Computational Linguistics. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. Parlai: A dialog research software platform. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, C¸ a˘glar Gulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of the human language technology conference of the north american chapter of the association for computational linguistics: Hlt-naacl 2004, pages 145–152. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. Fairseq: A fast, extensible toolkit for sequence modeling. NAACL HLT 2019, page 48. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you dont know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789. Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summarization models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3237–3247. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. In International Conference on Machine Learning, pages 5926–5936. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. Newsqa: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200. Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731–3741, Florence, Italy. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, et al. 2019. Transformers: State-of-the-art natural language processing. arXiv preprint 1910.03771. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint 1904.09675. 5019 A Human Evaluation Task Design We restrict our pool of workers to US-based workers. Workeres are required to have at least 1000 approved HITs with an acceptance rate of at least 98%. The base reward for our task is $0.15. For each summary, we include automatic quality checks including • Time checks: workers who complete the task under 30s fail the check • Attention checks: we include exact copies of article sentences and corrupted mixtures of two article sentences as positive and negative control task. If a worker fails to answer both of these examples correctly, they fail the check • Explanation checks: For each sentence in the summary, the worker is required to provide a short explanation of their decision If a worker passes all checks, they are awarded a $0.85 bonus, totalling $1.00 per correct annotation. According to turkerview.com, workers of our HIT are paid well in excess of $15.00 on average. We show our annotation interfaces for the annotation task for CNN/DM and XSUM respectively in Figures 2 and 3. We use slightly different instructions to accommodate for the quirks of each dataset. For XSUM, we prepend the reference “summary” back onto the source article, as without it, workers were struggling to identify factual inconsistencies. B Model and Generation Details Question Generation We fine-tune BART for question generation using the same tuning hyperparameters as the original work. We optimize label smoothed cross entropy with smoothing parameter 0.1 (Pereyra et al., 2017) and a peak learning rate of 2e-5. We optimize for 100k steps with 5k warmup steps, and use the model with the best perplexity on the development set. To turn NewsQA into an answer conditional QG dataset, we concatenate the answer to the source article with a special marker token in between. We then concatenate another special marker token and the question. At test time, we get 10 named entities and noun phrases as answer candidates using the en-web-sm spaCy model. We randomly sample 10 if there are more than 10, and randomly duplicate some answers if there are fewer than 10. The model predicts the question after seeing an answer and the article. During decoding, we use beam search with beam size 10, length penalty 1.0, and trigram repetition blocking. Generations have minimum length 8 and max length 60. To filter the questions, we first use simple heuristics, including removing • everything after the first question mark in a question • exact duplicates • questions shorter than three tokens long For the remaining questions, we use our QA model to answer each question and we remove questions for which the QA model deems unanswerable. We then take the top 20 most probable questions, random sampling some of the filtered questions if there were too few. Question Answering We fine-tune BERT for question answering following the original work. Similar to the QG setting, we append the question and answer to the source article with intervening special marker tokens. We optimize using AdamW (Loshchilov and Hutter, 2018) with initial learning rate 5e-5. We train for 3 epochs, with a warmup ratio of 0.1. We use the model with the best development set performance. 5020 Figure 2: Annotation interface and instructions for CNN/DM factual consistency task. Figure 3: Annotation interface and instructions for XSUM factual consistency task.
2020
450
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5021–5031 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5021 Discourse-Aware Neural Extractive Text Summarization Jiacheng Xu∗1, Zhe Gan2, Yu Cheng2, Jingjing Liu2 1The University of Texas at Austin 2Microsoft Dynamics 365 AI Research [email protected]; {zhe.gan,yu.cheng,jingjl}@microsoft.com Abstract Recently BERT has been adopted for document encoding in state-of-the-art text summarization models. However, sentence-based extractive models often result in redundant or uninformative phrases in the extracted summaries. Also, long-range dependencies throughout a document are not well captured by BERT, which is pre-trained on sentence pairs instead of documents. To address these issues, we present a discourse-aware neural summarization model - DISCOBERT1. DISCOBERT extracts sub-sentential discourse units (instead of sentences) as candidates for extractive selection on a finer granularity. To capture the long-range dependencies among discourse units, structural discourse graphs are constructed based on RST trees and coreference mentions, encoded with Graph Convolutional Networks. Experiments show that the proposed model outperforms state-of-the-art methods by a significant margin on popular summarization benchmarks compared to other BERT-base models. 1 Introduction Neural networks have achieved great success in the task of text summarization (Nenkova et al., 2011; Yao et al., 2017). There are two main lines of research: abstractive and extractive. While the abstractive paradigm (Rush et al., 2015; See et al., 2017; Celikyilmaz et al., 2018; Sharma et al., 2019) focuses on generating a summary word-by-word after encoding the full document, the extractive approach (Cheng and Lapata, 2016; Zhou et al., 2018; Narayan et al., 2018) directly selects sentences from the document to assemble into a summary. The abstractive approach is more flexible ∗Most of this work was done when the first author was an intern at Microsoft. 1Code, illustration and datasets are available at: https://github.com/jiacheng-xu/DiscoBERT. 1. [It is one of the most prestigious honors]1 [bestowed upon journalists and people in the arts.]2 2. [And today, the Pulitzer prize for journalism went to The Post and Courier newspaper of Charleston, South Carolina,]1 [which has a tiny staff of just 80 and a daily circulation of 85,000.]2 …… 5. [Winner: ]1 [This iconic photo by New York Times photographer Daniel Berehulak, was part of a winning series,]2 [and shows James Dorbor, 8,]3 [suspected of being infected with Ebola,]4 [being carried by medical staff to an Ebola treatment center in Monrovia, Liberia.]5 …… 20. [The Pulitzer prizes,]1 [awarded annually by Columbia University,]2 [recognize extraordinary work in U.S. journalism, literature, drama, and other categories.]3 …… 22. [Other winners of the coveted award included the St. Louis Post-Dispatch.]1 …… Coref Graph RST Graph Document … … … … … … … … [EDU Selection] Mentions of ‘Pulitzer prizes’ 5. [Winner: ]1 [This iconic photo by New York Times photographer Daniel Berehulak, was part of a winning series,]2 [and shows James Dorbor, 8,]3 [suspected of being infected with Ebola,]4 [being carried by medical staff to an Ebola treatment center in Monrovia, Liberia.]5 1. [It is one of the most prestigious honors]1 [bestowed upon journalists and people in the arts.]2 2. [And today, the Pulitzer prize for journalism went to The Post and Courier newspaper of Charleston, South Carolina,]1 [which has a tiny staff of just 80 and a daily circulation of 85,000.]2 Sentence Selection Figure 1: Illustration of DISCOBERT for text summarization. Sentence-based BERT model (baseline) selects whole sentences 1, 2 and 5. The proposed discourse-aware model DISCOBERT selects EDUs {11, 2-1, 5-2, 20-1, 20-3, 22-1}. The right side of the figure illustrates the two discourse graphs we use: (i) Coref(erence) Graph (with the mentions of ‘Pulitzer prizes’ highlighted as examples); and (ii) RST Graph (induced by RST discourse trees). and generally produces less redundant summaries, while the extractive approach enjoys better factuality and efficiency (Cao et al., 2018). Recently, some hybrid methods have been proposed to take advantage of both, by designing a two-stage pipeline to first select and then rewrite (or compress) candidate sentences (Chen and Bansal, 2018; Gehrmann et al., 2018; Zhang et al., 2018; Xu and Durrett, 2019). Compression or rewriting aims to discard uninformative phrases in the selected sentences. However, most of these hybrid systems suffer from the inevitable disconnection between the two stages in the pipeline. Meanwhile, modeling long-range context for document summarization remains a challenge (Xu 5022 et al., 2016). Pre-trained language models (Devlin et al., 2019) are designed mostly for sentences or a short paragraph, thus poor at capturing longrange dependencies throughout a document. Empirical observations (Liu and Lapata, 2019) show that adding standard encoders such as LSTM or Transformer (Vaswani et al., 2017) on top of BERT to model inter-sentential relations does not bring in much performance gain. In this paper, we present DISCOBERT, a discourse-aware neural extractive summarization model built upon BERT. To perform compression with extraction simultaneously and reduce redundancy across sentences, we take Elementary Discourse Unit (EDU), a sub-sentence phrase unit originating from RST (Mann and Thompson, 1988; Carlson et al., 2001)2 as the minimal selection unit (instead of sentence) for extractive summarization. Figure 1 shows an example of discourse segmentation, with sentences broken down into EDUs (annotated with brackets). By operating on the discourse unit level, our model can discard redundant details in sub-sentences, therefore retaining additional capacity to include more concepts or events, leading to more concise and informative summaries. Furthermore, we finetune the representations of discourse units with the injection of prior knowledge to leverage intra-sentence discourse relations. More specifically, two discourse-oriented graphs are proposed: RST Graph GR and Coreference Graph GC. Over these discourse graphs, Graph Convolutional Network (GCN) (Kipf and Welling, 2017) is imposed to capture long-range interactions among EDUs. RST Graph is constructed from RST parse trees over EDUs of the document. On the other hand, Coreference Graph connects entities and their coreference clusters/mentions across the document. The path of coreference navigates the model from the core event to other occurrences of that event, and in parallel explores its interactions with other concepts or events. The main contribution is threefold: (i) We propose a discourse-aware extractive summarization model, DISCOBERT, which operates on a subsentential discourse unit level to generate concise and informative summary with low redundancy. (ii) We propose to structurally model 2We adopt RST as the discourse framework due to the availability of existing tools, the nature of the RST tree structure for compression, and the observations from Louis et al. (2010). Other alternatives includes Graph Bank (Wolf and Gibson, 2005) and PDTB (Miltsakaki et al., 2004). (b) RST Discourse Tree 1 2 3 4 5 [4-5] elaboration [2-5] span [1-5] span (c) Converted RST Discourse Tree 1 2 3 4 5 [1] Winner: [2] This iconic photo by New York Times photographer Daniel Berehulak, was part of a winning series, [3] and shows James Dorbor, 8, [4] suspected of being infected with Ebola, [5] being carried by medical staff to an Ebola treatment center in Monrovia, Liberia. [3-5] list (a) Conversion (Sec 2.2) Figure 2: Example of discourse segmentation and RST tree conversion. The original sentence is segmented into 5 EDUs in box (a), and then parsed into an RST discourse tree in box (b). The converted dependencybased RST discourse tree is shown in box (c). Nucleus nodes including [2], [3] and [5], and Satellite nodes including [2] and [4] are denoted by solid lines and dashed lines, respectively. Relations are in italic. The EDU [2] is the head of the whole tree (span [1-5]), while the EDU [3] is the head of the span [3-5]. inter-sentential context with two types of discourse graph. (iii) DISCOBERT achieves new state of the art on two popular newswire text summarization datasets, outperforming other BERT-base models. 2 Discourse Graph Construction In this section, we first introduce the Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), a linguistic theory for discourse analysis, and then explain how we construct discourse graphs used in DISCOBERT. Two types of discourse graph are considered: RST Graph and Coreference Graph. All edges are initialized as disconnected, and connections are later added for a subset of nodes based on RST discourse parse tree or coreference mentions. 2.1 Discourse Analysis Discourse analysis focuses on inter-sentential relations in a document or conversation. In the RST framework, the discourse structure of text can be represented in a tree format. The whole document can be segmented into contiguous, adjacent and non-overlapping text spans called Elementary Discourse Units (EDUs). Each EDU is tagged as either Nucleus or Satellite, which characterizes its nuclearity or saliency. Nucleus nodes are generally more central, and Satellite nodes are more peripheral and less important in terms of content and grammatical reliance. There are dependencies among EDUs that represent their rhetorical relations. In this work, we treat EDU as the minimal unit for content selection in text summarization. Fig5023 ure 2 shows an example of discourse segmentation and the parse tree of a sentence. Among these EDUs, rhetorical relations represent the functions of different discourse units. As observed in Louis et al. (2010), the RST tree structure already serves as a strong indicator for content selection. On the other hand, the agreement between rhetorical relations tends to be lower and more ambiguous. Thus, we do not encode rhetorical relations explicitly in our model. In content selection for text summarization, we expect the model to select the most concise and pivotal concept in the document, with low redundancy.3 However, in traditional extractive summarization methods, the model is required to select a whole sentence, even though some parts of the sentence are not necessary. Our proposed approach can select one or several fine-grained EDUs to render the generated summaries less redundant. This serves as the foundation of our DISCOBERT model. 2.2 RST Graph When selecting sentences as candidates for extractive summarization, we assume each sentence is grammatically self-contained. But for EDUs, some restrictions need to be considered to ensure grammaticality. For example, Figure 2 illustrates an RST discourse parse tree of a sentence, where “[2] This iconic ... series” is a grammatical sentence but “[3] and shows ... 8” is not. We need to understand the dependencies between EDUs to ensure the grammaticality of the selected combinations. The detail of the derivation of the dependencies could be found in Sec 4.3. The construction of the RST Graph aims to provide not only local paragraph-level but also longrange document-level connections among EDUs. We use the converted dependency version of the tree to build the RST Graph GR, by initializing an empty graph and treating every discourse dependency from the i-th EDU to the j-th EDU as a directed edge, i.e., GR[i][j] = 1. 2.3 Coreference Graph Text summarization, especially news summarization, usually suffers from the well-known ‘position bias’ issue (Kedzie et al., 2018), where most of the key information is described at the very beginning 3For example, in Figure 2, details such as the name of the suspected child in [3], the exact location of the photo in [5], and who was carrying the child in [4], are unlikely to be reflected in the final summary. Algorithm 1 Construction of the Coreference Graph GC. Require: Coreference clusters C = {C1, C2, · · · , Cn}; mentions for each cluster Ci = {Ei1, · · · , Eim}. Initialize the Graph GC without any edge GC[∗][∗] = 0. for i = 0 to n do Collect the location of all occurences {Ei1, · · · , Eim} to L = {l1, · · · , lm}. for j = 1 to m, k = 1 to m do GC[j][k] = 1 end for end for return Constructed Graph GC. of the document. However, there is still a decent amount of information spread in the middle or at the end of the document, which is often ignored by summarization models. We observe that around 25% of oracle sentences appear after the first 10 sentences in the CNNDM dataset. Besides, in long news articles, there are often multiple core characters and events throughout the whole document. However, existing neural models are poor at modeling such long-range context, especially when there are multiple ambiguous coreferences to resolve. To encourage and guide the model to capture the long-range context in the document, we propose a Coreference Graph built upon discourse units. Algorithm 1 describes how to construct the Coreference Graph. We first use Stanford CoreNLP (Manning et al., 2014) to detect all the coreference clusters in an article. For each coreference cluster, all the discourse units containing the mention of the same cluster will be connected. This process is iterated over all the coreference mention clusters to create the final Coreference Graph. Figure 1 provides an example, where ‘Pulitzer prizes’ is an important entity and has occurred multiple times in multiple discourse units. The constructed Coreference Graph is shown on the right side of the document4. When graph GC is constructed, edges among 1-1, 2-1, 20-1 and 22-1 are all connected due to the mentions of ‘Pulitzer prizes’. 3 DISCOBERT Model 3.1 Overview Figure 3 provides an overview of the proposed model, consisting of a Document Encoder and a Graph Encoder. For the Document Encoder, a pretrained BERT model is first used to encode the 4We intentionally ignore other entities and mentions in this example for simplicity. 5024 … … … It is one of the most prestigious honors <CLS> w11 <SEP> bestowed upon … the arts. And today, the Pulitzer Prize for … South Carolina, which has a tiny … 85,000. <CLS> w12 hS 1 Label <SEP> <CLS> w18 w21 w28 w31 … … … … … … … … … … … … … … … BERT SpanExt SpanExt SpanExt SpanExt hB 10 hB 11 hB 12 hB 18 hB 21 hB 28 hB 2* hB 30 hB 31 hB 32 hB 33 w32 w33 hB 3|18 w3|18 w41 hB 41 hB 4|14 w4|14 hB 4* hB 50 hS 2 hS 3 hS 4 Stacked Discourse Graph Encoders hG 1 hG 2 hG 3 hG 4 MLP MLP MLP MLP ̂y2 ̂y3 ̂y4 ̂y1 1 0 1 0 Prediction … … … DGE(k) FFNN h(k) LN Dropout Dropout LN h(k+1) Graph Convolutional Network Figure 3: (Left) Model architecture of DISCOBERT. The Stacked Discourse Graph Encoders contain k stacked DGE blocks. (Right) The architecture of each Discourse Graph Encoder (DGE) block. whole document on the token level. Then, a selfattentive span extractor is used to obtain the EDU representations from the corresponding text spans. The Graph Encoder takes the output of the Document Encoder as input and updates the EDU representations with Graph Convolutional Network based on the constructed discourse graphs, which are then used to predict the oracle labels. Assume that document D is segmented into n EDUs in total, i.e., D = {d1, d2, · · · , dn}, where di denotes the i-th EDU. Following Liu and Lapata (2019), we formulate extractive summarization as a sequential labeling task, where each EDU di is scored by neural networks, and decisions are made based on the scores of all EDUs. The oracle labels are a sequence of binary labels, where 1 stands for being selected and 0 for not. We denote the labels as Y = {y∗ 1, y∗ 2, · · · , y∗ n}. During training, we aim to predict the sequence of labels Y given the document D. During inference, we need to further consider discourse dependency to ensure the coherence and grammaticality of the output summary. 3.2 Document Encoder BERT is a pre-trained deep bidirectional Transformer encoder (Vaswani et al., 2017; Devlin et al., 2019). Following Liu and Lapata (2019), we encode the whole document with BERT and finetune the BERT model for summarization. BERT is originally trained to encode a single sentence or sentence pair. However, a news article typically contains more than 500 words, hence we need to make some adaptation to apply BERT for document encoding. Specifically, we insert ⟨CLS⟩ and ⟨SEP⟩tokens at the beginning and the end of each sentence, respectively.5 In order to encode long documents such as news articles, we also extend the maximum sequence length that BERT can take from 512 to 768 in all our experiments. The input document after tokenization is denoted as D = {d1, · · · , dn}, and di = {wi1, · · · , wiℓi}, where ℓi is the number of BPE tokens in the i-th EDU. If di is the first EDU in a sentence, there is also a ⟨CLS⟩token prepended to di; if dj is the last EDU in a sentence, there is a ⟨SEP⟩token appended to dj (see Figure 3). The schema of insertion of ⟨CLS⟩and ⟨SEP⟩is an approach used in Liu and Lapata (2019). For simplicity, these two tokens are not shown in the equations. BERT model is then used to encode the document: {hB 11, · · · , hB nℓn} = BERT({w11, · · · , wnℓn}) , where {hB 11, · · · , hB nℓn} is the BERT output of the whole document in the same length as the input. After the BERT encoder, the representation of the ⟨CLS⟩token can be used as sentence representation. However, this approach does not work in our setting, since we need to extract the representation for EDUs instead. Therefore, we adopt a 5We also tried inserting ⟨CLS⟩and ⟨SEP⟩at the beginning and the end of every EDU, and treating the corresponding ⟨CLS⟩representation as the representation for each EDU, but the performance drops drastically. 5025 Self-Attentive Span Extractor (SpanExt), proposed in Lee et al. (2017), to learn EDU representation. For the i-th EDU with ℓi words, with the output from the BERT encoder {hB i1, hB i2, · · · , hB iℓi}, we obtain EDU representation as follows: αij = W2 · ReLU(W1hB ij + b1) + b2 aij = exp(αij) Pℓi k=1 exp(αik) , hS i = ℓi X j=1 aij · hB ij , where αij is the score of the j-th word in the EDU, aij is the normalized attention of the j-th word w.r.t. all the words in the span. hS i is a weighted sum of the BERT output hidden states. Throughout the paper, all the W matrices and b vectors are parameters to learn. We abstract the above Self-Attentive Span Extractor as hS i = SpanExt(hB i1, · · · , hB iℓi). After the span extraction step, the whole document is represented as a sequence of EDU representations: hS = {hS 1 , · · · , hS n} ∈Rdh×n, which will be sent to the graph encoder. 3.3 Graph Encoder Given the constructed graph G = (V, E), nodes V correspond to the EDUs in a document, and edges E correspond to either RST discourse relations or coreference mentions. We then use Graph Convolutional Network to update the representations of all the EDUs, to capture long-range dependencies missed by BERT for better summarization. To modularize architecture design, we present a single Discourse Graph Encoder (DGE) layer. Multiple DGE layers are stacked in our experiments. Assume that the input for the k-th DGE layer is denoted as h(k) = {h(k) 1 , . . . , h(k) n } ∈Rdh×n, and the corresponding output is denoted as h(k+1) = {h(k+1) 1 , . . . , h(k+1) n } ∈Rdh×n. The k-th DGE layer is designed as follows: u(k) i = W(k) 4 ReLU(W(k) 3 h(k) i + b(k) 3 ) + b(k) 4 v(k) i = LN(h(k) i + Dropout(u(k) i )) w(k) i = ReLU  X j∈Ni 1 |Ni|W(k) 5 v(k) j + b(k) 5  h(k+1) i = LN(Dropout(w(k) i ) + v(k) i ) , where LN(·) represents Layer Normalization, Ni denotes the neighorhood of the i-th EDU node. h(k+1) i is the output of the i-th EDU in the k-th DGE layer, and h(1) = hS, which is the output from the Document Encoder. After K layers of Dataset Document Sum. # E in Graph # sent. # EDU # tok. # tok. GR GC CNNDM 24 67 541 54 66 233 NYT 22 66 591 87 65 143 Table 1: Statistics of the datasets. The first block shows the average number of sentences, EDUs and tokens in the documents. The second block shows the average number of tokens in the reference summaries. The third block shows the average number of edges in the constructed RST Graphs (GR) and Coreference Graphs (GC), respectively. graph propagation, we obtain hG = h(K+1) ∈ Rdh×n, which is the final representation of all the EDUs after the stacked DGE layers. For different graphs, the parameter of DGEs are not shared. If we use both graphs, their output are concatenated: hG = ReLU(W6[hG C; hG R] + b6) . 3.4 Training & Inference During training, hG is used for predicting the oracle labels. Specifically, ˆyi = σ(W7hG i + b7) where σ(·) represents the logistic function, and ˆyi is the prediction probability ranging from 0 to 1. The training loss of the model is binary cross-entropy loss given the predictions and oracles: L = −Pn i=1(y∗ i log(ˆyi) + (1 −y∗ i ) log(1 −ˆyi)) . For DISCOBERT without graphs, the output from Document Encoder hS is used for prediction instead. The creation of oracle is operated on EDU level. We greedily pick up EDUs with their necessary dependencies until R-1 F1 drops. During inference, given an input document, after obtaining the prediction probabilities of all the EDUs, i.e., ˆy = {ˆy1, · · · , ˆyn}, we sort ˆy in descending order, and select EDUs accordingly. Note that the dependencies between EDUs are also enforced in prediction to ensure grammacality of generated summaries. 4 Experiments In this section, we present experimental results on two popular news summarization datasets. We compare our proposed model with state-of-the-art baselines and conduct detailed analysis to validate the effectiveness of DISCOBERT. 4.1 Datasets We evaluate the models on two datasets: New York Times (NYT) (Sandhaus, 2008), CNN and Dailymail (CNNDM) (Hermann et al., 2015). We use the 5026 script from See et al. (2017) to extract summaries from raw data, and Stanford CoreNLP for sentence boundary detection, tokenization and parsing (Manning et al., 2014). Due to the limitation of BERT, we only encode up to 768 BERT BPEs. Table 1 provides statistics of the datasets. The edges in GC are undirected, while those in GR are directional. For CNNDM, there are 287,226, 13,368 and 11,490 samples for training, validation and test, respectively. We use the un-anonymized version as in previous summarization work. NYT is licensed by LDC6. Following previous work (Zhang et al., 2019; Xu and Durrett, 2019), we use 137,778, 17,222 and 17,223 samples for training, validation, and test, respectively. 4.2 State-of-the-art Baselines We compare our model with the following state-ofthe-art neural text summarization models. Extractive Models: BanditSum treats extractive summarization as a contextual bandit problem, trained with policy gradient methods (Dong et al., 2018). NeuSum is an extractive model with seq2seq architecture, where the attention mechanism scores the document and emits the index as the selection (Zhou et al., 2018). Compressive Models: JECS is a neural textcompression-based summarization model using BLSTM as the encoder (Xu and Durrett, 2019). The first stage is selecting sentences, and the second stage is sentence compression by pruning constituency parsing tree. BERT-based Models: BERT-based models have achieved significant improvement on CNNDM and NYT, when compared with LSTM counterparts. BertSum is the first BERT-based extractive summarization model (Liu and Lapata, 2019). Our baseline model BERT is the re-implementation of BertSum. PNBert proposed a BERT-based model with various training strategies, including reinforcement learning and Pointer Networks (Zhong et al., 2019). HiBert is a hierarchical BERT-based model for document encoding, which is further pretrained with unlabeled data (Zhang et al., 2019). 4.3 Implementation Details We use AllenNLP (Gardner et al., 2018) as the code framework. The implementation of graph 6https://catalog.ldc.upenn.edu/ LDC2008T19 Model R-1 R-2 R-L Lead3 40.42 17.62 36.67 Oracle (Sentence) 55.61 32.84 51.88 Oracle (Discourse) 61.61 37.82 59.27 NeuSum (Zhou et al., 2018) 41.59 19.01 37.98 BanditSum (Dong et al., 2018) 41.50 18.70 37.60 JECS (Xu and Durrett, 2019) 41.70 18.50 37.90 PNBERT (Zhong et al., 2019) 42.39 19.51 38.69 PNBERT w. RL 42.69 19.60 38.85 BERT (Zhang et al., 2019) 41.82 19.48 38.30 HIBERTS 42.10 19.70 38.53 HIBERT∗ S 42.31 19.87 38.78 HIBERT∗ M 42.37 19.95 38.83 BERTSUM (Liu and Lapata, 2019) 43.25 20.24 39.63 T5-Base (Raffel et al., 2019) 42.05 20.34 39.40 BERT 43.07 19.94 39.44 DISCOBERT 43.38 20.44 40.21 DISCOBERT w. GC 43.58 20.64 40.42 DISCOBERT w. GR 43.68 20.71 40.54 DISCOBERT w. GR & GC 43.77 20.85 40.67 Table 2: Results on the test set of the CNNDM dataset. ROUGE-1, -2 and -L F1 are reported. Models with the asterisk symbol (*) used extra data for pre-training. R1 and R-2 are shorthands for unigram and bigram overlap; R-L is the longest common subsequence. encoding is based on DGL (Wang et al., 2019). Experiments are conducted on a single NVIDIA P100 card, and the mini-batch size is set to 6 due to GPU memory capacity. The length of each document is truncated to 768 BPEs. We use the pre-trained ‘bert-base-uncased’ model and fine tune it for all experiments. We train all our models for up to 80,000 steps. ROUGE (Lin, 2004) is used as the evaluation metrics, and ‘R-2’ is used as the validation criteria. The realization of discourse units and structure is a critical part of EDU pre-processing, which requires two steps: discourse segmentation and RST parsing. In the segmentation phase, we use a neural discourse segmenter based on the BiLSTM CRF framework (Wang et al., 2018)7. The segmenter achieved 94.3 F1 score on the RST-DT test set, in which the human performance is 98.3. In the parsing phase, we use a shift-reduce discourse parser to extract relations and identify nuclearity (Ji and Eisenstein, 2014)8. The dependencies among EDUs are crucial to the grammaticality of selected EDUs. Here are the two steps to learn the derivation of dependencies: head inheritance and tree conversion. Head inheritance defines the head node for each valid non-terminal tree node. For each leaf node, the 7https://github.com/PKU-TANGENT/ NeuralEDUSeg 8https://github.com/jiyfeng/DPLP 5027 head is itself. We determine the head node(s) of non-terminal nodes based on their nuclearity.9 For example, in Figure 2, the heads of text spans [1-5], [2-5], [3-5] and [4-5] need to be grounded to a single EDU. We propose a simple yet effective schema to convert RST discourse tree to a dependencybased discourse tree.10 We always consider the dependency restriction such as the reliance of Satellite on Nucleus, when we create oracle during preprocessing and when the model makes the prediction. For the example in Figure 2, if the model selects “[5] being carried ... Liberia.” as a candidate span, we will enforce the model to select “[3] and shows ... 8,” and “[2] This ... series,” as well. The number of chosen EDUs depends on the average length of the reference summaries, dependencies across EDUs as mentioned above, and the length of the existing content. The optimal average number of EDUs selected is tuned on the development set. 4.4 Experimental Results Results on CNNDM Table 2 shows results on CNNDM. The first section includes Lead3 baseline, sentence-based oracle, and discourse-based oracle. The second section lists the performance of baseline models, including non-BERT-based and BERTbased variants. The performance of our proposed model is listed in the third section. BERT is our implementation of sentence-based BERT model. DISCOBERT is our discourse-based BERT model without Discourse Graph Encoder. DISCOBERT w. GC and DISCOBERT w. GR are the discoursebased BERT model with Coreference Graph and RST Graph, respectively. DISCOBERT w. GR & GC is the fusion model encoding both graphs. The proposed DISCOBERT beats the sentencebased counterpart and all the competitor models. With the help of Discourse Graph Encoder, the graph-based DISCOBERT beats the stateof-the-art BERT model by a significant margin (0.52/0.61/1.04 on R-1/-2/-L on F1). Ablation study with individual graphs shows that the RST Graph is slightly more helpful than the Coreference 9If both children are N(ucleus), then the head of the current node inherits the head of the left child. Otherwise, when one child is N and the other is S, the head of the current node inherits the head of the N child. 10If one child node is N and the other is S, the head of the S node depends on the head of the N node. If both children are N and the right child does not contain a subject in the discourse, the head of the right N node depends on the head of the left N node. Model R-1 R-2 R-L Lead3 41.80 22.60 35.00 Oracle (Sentence) 64.22 44.57 57.27 Oracle (Discourse) 67.76 48.05 62.40 JECS (Xu and Durrett, 2019) 45.50 25.30 38.20 BERT (Zhang et al., 2019) 48.38 29.04 40.53 HIBERTS 48.92 29.58 41.10 HIBERTM 49.06 29.70 41.23 HIBERT∗ S 49.25 29.92 41.43 HIBERT∗ M 49.47 30.11 41.63 BERT 48.48 29.01 40.62 DISCOBERT 49.78 30.30 42.44 DISCOBERT w. GC 49.79 30.18 42.48 DISCOBERT w. GR 49.86 30.25 42.55 DISCOBERT w. GR & GC 50.00 30.38 42.70 Table 3: Results on the test set of the NYT dataset. Models with the asterisk symbol (*) used extra data for pre-training. Graph, while the combination of both achieves better performance overall. Results on NYT Results are summarized in Table 3. The proposed model surpasses previous state-of-the-art BERT-based model by a significant margin. HIBERT∗ S and HIBERT∗ M used extra data for pre-training the model. We notice that in the NYT dataset, most of the improvement comes from the use of EDUs as minimal selection units. DISCOBERT provides 1.30/1.29/1.82 gain on R-1/-2/-L over the BERT baseline. However, the use of discourse graphs does not help much in this case. 4.5 Grammaticality Due to segmentation and partial selection of sentence, the output of our model might not be as grammatical as the original sentence. We manually examined and automatically evaluated model output, and observed that overall, the generated summaries are still grammatical, given the RST dependency tree constraining the rhetorical relations among EDUs. A set of simple yet effective post-processing rules helps to complete the EDUs in some cases. Automatic Grammar Checking We followed Xu and Durrett (2019) to perform automatic grammar checking using Grammarly. Table 4 shows the grammar checking results, where the average number of errors in every 10,000 characters on CNNDM and NYT datasets is reported. We compare DISCOBERT with sentence-based BERT model. ‘All’ shows the summation of the number of errors in all categories. As shown in the table, the 5028 Source M All CR PV PT O CNNDM Sent 33.0 18.7 9.0 2.3 3.0 Disco 34.0 18.3 8.4 2.6 4.7 NYT Sent 23.3 13.5 5.9 0.8 3.1 Disco 23.8 13.9 5.7 0.8 3.4 Table 4: Number of errors per 10,000 characters based on automatic grammaticality checking with Grammarly on CNNDM and NYT. Lower values are better. Detailed error categories, including correctness (CR), passive voice (PV) misuse, punctuation (PT) in compound/complex sentences and others (O), are listed from left to right. Model All Coherence Grammaticality Sent 3.45 ± 0.87 3.30 ± 0.90 3.45 ± 1.06 Disco 3.24 ± 0.84 3.15 ± 0.95 3.25 ± 1.02 Ref 3.28 ± 0.99 3.12 ± 0.94 3.29 ± 1.06 Table 5: Human evaluation results. We ask Turkers to grade the overall preference, coherence and grammaticality from 1 to 5. Mean values along with standard deviations are reported. summaries generated by our model have retained the quality of the original text. Human Evaluation We sampled 200 documents from the test set of CNNDM and for each sample, we asked two Turkers to grade three summaries from 1 to 5. Results are shown in Table 5. SentBERT model (the original BERTSum model) selects sentences from the document, hence providing the best overall readability, coherence, and grammaticality. In some cases, reference summaries are just long phrases, so the scores are slightly lower than those from the sentence model. DISCOBERT model is slightly worse than Sent-BERT model but is fully comparable to the other two variants. Examples & Analysis We show some examples of model output in Table 6. We notice that a decent amount of irrelevant details are removed from the extracted summary. Despite the success, we further conducted error analysis and found that the errors mostly originated from the RST dependency resolution and the upstream parsing error of the discourse parser. The misclassification of RST dependencies and the hand-crafted rules for dependency resolution hurted the grammaticality and coherence of the ‘generated’ outputs. Common punctuation issues include extra or missing commas, as well as missing quotation marks. Some of the coherence issue Clare Hines , who lives in Brisbane, was diagnosed with a brain tumour after suffering epileptic seizures. After a number of tests doctors discovered she had a benign tumour that had wrapped itself around her acoustic, facial and balance nerve – and told her she had have it surgically removed or she risked the tumour turning malignant. One week before brain surgery she found out she was pregnant. Jordan Henderson, in action against Aston Villa at Wembley on Sunday, has agreed a new Liverpool deal. The club’s vice captain puts pen to paper on a deal which will keep him at Liverpool until 2020. Rodgers will consider Henderson for the role of club captain after Steven Gerrard moves to LA Galaxy at the end of the campaign but, for now, the England international is delighted to have agreed terms on a contract that will take him through the peak years of his career. Table 6: Example outputs from CNNDM by DISCOBERT. Strikethrough indicates discarded EDUs. originates from missing or improper or missing anaphora resolution. In this example “[‘Johnny is believed to have drowned,]1 [but actually he is fine,’]2 [the police say.]3”, only selecting the second EDU yields a sentence “actually he is fine”, which is not clear who is ‘he’ mentioned here. 5 Related Work Neural Extractive Summarization Neural networks have been widely used in extractive summarization. Various decoding approaches, including ranking (Narayan et al., 2018), index prediction (Zhou et al., 2018) and sequential labelling (Nallapati et al., 2017; Zhang et al., 2018; Dong et al., 2018), have been applied to content selection. Our model uses a similar configuration to encode the document with BERT as Liu and Lapata (2019) did, but we use discourse graph structure and graph encoder to handle the long-range dependency issue. Neural Compressive Summarization Text summarization with compression and deletion has been explored in some recent work. Xu and Durrett (2019) presented a two-stage neural model for selection and compression based on constituency tree pruning. Dong et al. (2019) presented a neural sentence compression model with discrete operations including deletion and addition. Different from these studies, as we use EDUs as minimal selection basis, sentence compression is achieved automatically in our model. Discourse & Summarization The use of discourse theory for text summarization has been explored before. Louis et al. (2010) examined the 5029 benefit of graph structure provided by discourse relations for text summarization. Hirao et al. (2013); Yoshida et al. (2014) formulated the summarization problem as the trimming of the document discourse tree. Durrett et al. (2016) presented a system of sentence extraction and compression with ILP methods using discourse structure. Li et al. (2016) demonstrated that using EDUs as units of content selection leads to stronger summarization performance. Compared with them, our proposed method is the first neural end-to-end summarization model using EDUs as the selection basis. Graph-based Summarization Graph approach has been explored in text summarization over decades. LexRank introduced a stochastic graphbased method for computing relative importance of textual units (Erkan and Radev, 2004). Yasunaga et al. (2017) employed a GCN on the relation graphs with sentence embeddings obtained from RNN. Tan et al. (2017) also proposed graphbased attention in abstractive summarization model. Fernandes et al. (2018) developed a framework to reason long-distance relationships for text summarization. 6 Conclusion In this paper, we present DISCOBERT, which uses discourse unit as the minimal selection basis to reduce summarization redundancy and leverages two types of discourse graphs as inductive bias to capture long-range dependencies among discourse units. We validate the proposed approach on two popular summarization datasets, and observe consistent improvement over baseline models. For future work, we will explore better graph encoding methods, and apply discourse graphs to other tasks that require long document encoding. Acknowledgement Thanks to Junyi Jessy Li, Greg Durrett, Yen-Chun Chen, and to the other members of the Microsoft Dynamics 365 AI Research team for the proofreading, feedback and suggestions. References Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the Original: Fact Aware Neural Abstractive Summarization. In AAAI Conference on Artificial Intelligence. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurovsky. 2001. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662–1675, New Orleans, Louisiana. Association for Computational Linguistics. Yen-Chun Chen and Mohit Bansal. 2018. Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Association for Computational Linguistics. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393–3402, Florence, Italy. Association for Computational Linguistics. Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, and Jackie Chi Kit Cheung. 2018. BanditSum: Extractive Summarization as a Contextual Bandit. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3739–3748. Association for Computational Linguistics. Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-Based Single-Document Summarization with Compression and Anaphoricity Constraints. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1998–2008. Association for Computational Linguistics. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text 5030 summarization. Journal of artificial intelligence research, 22:457–479. Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2018. Structured neural summarization. arXiv preprint arXiv:1811.01824. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1– 6, Melbourne, Australia. Association for Computational Linguistics. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-Up Abstractive Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc. Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Singledocument summarization as a tree knapsack problem. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA. Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13–24, Baltimore, Maryland. Association for Computational Linguistics. Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018. Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818–1828. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proceedings of ICLR. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics. Junyi Jessy Li, Kapil Thadani, and Amanda Stent. 2016. The role of discourse units in near-extractive summarization. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 137–147. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3728–3738, Hong Kong, China. Association for Computational Linguistics. Annie Louis, Aravind Joshi, and Ani Nenkova. 2010. Discourse indicators for content selection in summarization. In Proceedings of the SIGDIAL 2010 Conference, pages 147–156, Tokyo, Japan. Association for Computational Linguistics. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Association for Computational Linguistics. Eleni Miltsakaki, Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2004. The penn discourse treebank. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04). Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents. In AAAI Conference on Artificial Intelligence. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759. Association for Computational Linguistics. Ani Nenkova, Kathleen McKeown, et al. 2011. Automatic summarization. Foundations and Trends R⃝in Information Retrieval, 5(2–3):103–233. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. 5031 Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Association for Computational Linguistics. Evan Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083. Association for Computational Linguistics. Eva Sharma, Luyang Huang, Zhe Hu, and Lu Wang. 2019. An entity-driven framework for abstractive summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3271–3282. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1181, Vancouver, Canada. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin, Junbo Zhao, Jinyang Li, Alexander J Smola, and Zheng Zhang. 2019. Deep graph library: Towards efficient and scalable deep learning on graphs. ICLR Workshop on Representation Learning on Graphs and Manifolds. Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018. Toward fast and accurate neural discourse segmentation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 962–967, Brussels, Belgium. Association for Computational Linguistics. Florian Wolf and Edward Gibson. 2005. Representing discourse coherence: A corpus-based study. Computational linguistics, 31(2):249–287. Jiacheng Xu, Danlu Chen, Xipeng Qiu, and Xuanjing Huang. 2016. Cached long short-term memory neural networks for document-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1660–1669, Austin, Texas. Association for Computational Linguistics. Jiacheng Xu and Greg Durrett. 2019. Neural extractive text summarization with syntactic compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China. Association for Computational Linguistics. Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2017. Recent advances in document summarization. Knowledge and Information Systems, 53(2):297–336. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 452–462, Vancouver, Canada. Association for Computational Linguistics. Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based discourse parser for single-document summarization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1834–1839, Doha, Qatar. Association for Computational Linguistics. Xingxing Zhang, Mirella Lapata, Furu Wei, and Ming Zhou. 2018. Neural Latent Extractive Document Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 779–784. Association for Computational Linguistics. Xingxing Zhang, Furu Wei, and Ming Zhou. 2019. HIBERT: Document level pre-training of hierarchical bidirectional transformers for document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5059–5069, Florence, Italy. Association for Computational Linguistics. Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2019. Searching for effective neural extractive summarization: What works and what’s next. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1049–1058, Florence, Italy. Association for Computational Linguistics. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural Document Summarization by Jointly Learning to Score and Select Sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654– 663. Association for Computational Linguistics.
2020
451
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5032–5042 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5032 Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction Raphael Schumann1, Lili Mou2, Yao Lu3, Olga Vechtomova3, Katja Markert1 1Institute of Computational Linguistics, Heidelberg University, Germany {rschuman, markert}@cl.uni-heidelberg.de 2University of Alberta, Canada; Alberta Machine Intelligence Institute (Amii) [email protected] 3University of Waterloo, Canada {yao.lu, ovechtom}@uwaterloo.ca Abstract Automatic sentence summarization produces a shorter version of a sentence, while preserving its most important information. A good summary is characterized by language fluency and high information overlap with the source sentence. We model these two aspects in an unsupervised objective function, consisting of language modeling and semantic similarity metrics. We search for a high-scoring summary by discrete optimization. Our proposed method achieves a new state-of-the art for unsupervised sentence summarization according to ROUGE scores. Additionally, we demonstrate that the commonly reported ROUGE F1 metric is sensitive to summary length. Since this is unwillingly exploited in recent work, we emphasize that future evaluation should explicitly group summarization systems by output length brackets.1 1 Introduction Sentence summarization transforms a long source sentence into a short summary, while preserving key information (Rush et al., 2015). Sentence summarization has wide applications, for example, news headline generation and text simplification. State-of-the-art sentence summarization systems are based on sequence-to-sequence neural networks (Rush et al., 2015; Nallapati et al., 2016; Wang et al., 2019), which require massive parallel data for training. Therefore, unsupervised sentence summarization has recently attracted increasing interest. Cycle-consistency approaches treat the summary as a discrete latent variable and use it to reconstruct the source sentence (Wang and Lee, 2018; Baziotis et al., 2019). Such latent-space generation fails to explicitly model the resemblance between the source sentence and the target summary. 1Our code and system outputs are available at: https://github.com/raphael-sch/HC_ Sentence_Summarization the world 's biggest miner bhp billiton announced tuesday it was dropping its controversial hostile takeover bid for rival rio tinto due to the state of the global economy bhp billiton dropping hostile bid for rio tinto bhp billiton drops rio tinto takeover bid summary: reference: Figure 1: Summarizing a sentence x by hill climbing. Each row is a Boolean vector at at a search step t . A black cell indicates a word is selected, and vice versa. Randomly swapping two values in the Boolean vector yields a new summary that is scored by an objective function that measures language fluency and semantic similarity. If the new summary increases the objective, this summary is accepted as the current best solution. Rejected solutions are not depicted. Zhou and Rush (2019) propose a left-to-right beam search approach based on a heuristically defined scoring function. However, beam search is biased towards the first few words of the source. In this paper, we propose a hill-climbing approach to unsupervised sentence summarization, directly extracting words from the source sentence. This is motivated by the observation that humanwritten reference summaries exhibit high word overlap with the source sentence, even preserving word order to a large extent. To perform word extraction for summarization, we define a scoring function — similar to Miao et al. (2019) and Zhou and Rush (2019) — that evaluates the quality of a candidate summary by language fluency, semantic similarity to the source, and a hard constraint on output length. We search towards our scoring function by first choice hill-climbing (FCHC), shown in Figure 1. We start from a random subset of words of the required output length. For each search step, a new candidate is sampled by randomly swapping 5033 a selected word and a non-selected word. We accept the new candidate if its score is higher than the current one. In contrast to beam search (Zhou and Rush, 2019), our summary is not generated sequentially from the beginning of a sentence, and therefore not biased towards the first few words. Due to the nature of the search action, our approach is able to explicitly control the length of a summary as a hard constraint. In all previous work, the summary length is weakly controlled by length embeddings or a soft length penalty (Zhou and Rush, 2019; Wang and Lee, 2018; Fevry and Phang, 2018; Baziotis et al., 2019). Thus, the generated summaries by different systems vary considerably in average length, for example, ranging from 9 to 15 on a headline corpus (Section 4.1). Previous work uses ROUGE F1 to compare summaries that might differ in length. We show that ROUGE F1 is unfortunately sensitive to summary output length, in general favoring models that produce longer summaries. Therefore, we argue that controlling the output length should be an integral part of the summarization task and that a fair system comparison can only be conducted between summaries in the same length bracket. Our model establishes a new state-of-the-art for unsupervised sentence summarization across all commonly-used length brackets and different ROUGE metrics on the Gigaword dataset for headline generation (Rush et al., 2015) and on DUC2004 (Over and Yen, 2004). The main contributions of this paper are: • We propose a novel method for unsupervised sentence summarization by hill climbing with word-level extraction. • We outperform current unsupervised sentence summarization systems, including more complex sentence reconstruction models. • We show that ROUGE F1 is sensitive to summary length and thus emphasize the importance of explicitly controlling summary length for a fair comparison among different summarization systems. 2 Related Work Text Summarization. The task can be categorized by source text types, such as multi-document summarization (Erkan and Radev, 2004; Radev et al., 2000; Haghighi and Vanderwende, 2009) and single-document summarization (Mihalcea and Tarau, 2004; Zhou and Hovy, 2004; Zheng and Lapata, 2019). Traditional approaches are mostly extractive, i.e., they extract entire sentences from a document. Recently, sequence-to-sequence (Seq2Seq) models have been used for abstractive summaries, where the system is able to synthesize new sentences (Nallapati et al., 2016, 2017; Gehrmann et al., 2018; Lewis et al., 2019; Fabbri et al., 2019). The copy mechanism (Gu et al., 2016) in a Seq2Seq model can be viewed as word-level extraction in abstractive summarization (See et al., 2017; Paulus et al., 2018). Both state-of-the-art extractive and abstractive approaches are usually supervised. Sentence summarization yields a short summary for a long sentence. Hori and Furui (2004) and Clarke and Lapata (2006) extract single words from the source sentence based on language model fluency and linguistic constraints. They search via dynamic programming with a trigram language model, which restricts the model capacity. The Hedge Trimmer method (Dorr et al., 2003) also uses hand-crafted linguistic rules to remove constituents from a parse tree until a certain length is reached. Rush et al. (2015) propose a supervised abstractive sentence summarization system with an attention mechanism (Bahdanau et al., 2015), and they also introduce a dataset for headline generation derived from Gigaword.2 Subsequent models for this dataset were also supervised and mostly based on Seq2seq architectures (Nallapati et al., 2016; Chopra et al., 2016; Wang et al., 2019). Recently, unsupervised approaches for sentence summarization have attracted increasing attention. Fevry and Phang (2018) learn a denoising autoencoder and control the summary length by a length embedding. Wang and Lee (2018) and Baziotis et al. (2019) use cycle-consistency (He et al., 2016) to learn the reconstruction of the source sentence and return the intermediate discrete representation as a summary. Zhou and Rush (2019) use beam search to optimize a scoring function, which considers language fluency and contextual matching. Our work can be categorized under unsupervised sentence summarization. We accomplish this by word-level extraction from the source sentence. Constrained Sentence Generation. Neural sentence generation is usually accomplished in an autoregressive way, for example, by recurrent neu2https://catalog.ldc.upenn.edu/ LDC2003T05 5034 ral networks generating words left-to-right. This is often enhanced by beam search (Sutskever et al., 2014), which keeps a beam of candidates in a partially greedy fashion. A few studies allow hard constraints on this decoding procedure. Hokamp and Liu (2017) use grid-beam search to impose lexical constraints during decoding. Anderson et al. (2017) propose constrained beam search to predict fixed image tags in an image transcription task. Miao et al. (2019) propose a Metropolis–Hastings sampler for sentence generation, where hard constraints can be incorporated into the target distribution. This is further extended to simulated annealing (Liu et al., 2020), or applied to the text simplification task (Kumar et al., 2020). Different from the above concurrent work, this paper applies the stochastic search framework to text summarization, and design our specific search space and search actions for word extraction. In previous work on text summarization, length embeddings (Kikuchi et al., 2016; Fan et al., 2018) have been used to indicate the desired summary length. However, these are not hard constraints, because the model may learn to ignore such information. 3 Proposed Model Given a source sentence x = (x1, x2, . . . , xn) as input, our goal is to generate a shorter sentence y = (y1, y2, . . . , ym) as a summary of x. We perform word-level extraction, in addition keeping the original word order intact. Thus, y is a subsequence of x. Our word-level extraction optimizes a manually defined objective function f(y; x, s), where the summary length s is predefined (s < n) and not subject to optimization. In the remainder of this section, we will describe the objective function, search space, and the search algorithm in detail. 3.1 Search Objective We define an objective function f(y; x, s), which our algorithm maximizes. It evaluates the fitness of a candidate sentence y as the summary of an input x, involving three aspects, namely, language fluency f←→ LM(y), semantic similarity fSIM(y; x), and a length constraint fLEN(y, s). This is given by f(y; x, s) = f←→ LM(y) · fSIM(y; x)γ · fLEN(y; s), (1) where the relative weight γ balances f←→ LM(y) and fSIM(y; x). We treat the summary length as a hard constraint, and therefore we do not need a weighting hyperparameter for fLEN. Language Fluency. The language fluency scorer quantifies how grammatical and idiomatic a candidate summary y is. Our model generates a candidate summary in a non-autoregressive fashion, in contrast to the beam search in Zhou and Rush (2019). Thus, we are able to simultaneously consider forward and backward language models, using the geometric average of their perplexities. Using both forward and backward language models is less biased towards sentence beginnings or endings. ←−→ PPL(y) = 2|y| v u u t |y| Y i 1 p−→ LM(yi|y<i) |y| Y i 1 p←− LM(yi|y>i). Our fluency scorer is the inverse perplexity. f←→ LM(y) = ←−→ PPL(y) −1. (2) Depending on applications, the language models could be pretrained on a target corpus.3 In this case, the fluency scorer also measures whether the summary style is consistent with the target language. This could be important in certain applications, e.g., headline generation, where the summary language differs from the input in style. Semantic Similarity. A semantic similarity scorer ensures that the summary keeps the key information of the input sentence. We adopt the cosine similarity between sentence embeddings as fSIM(y; x) = cos(e(x), e(y)), (3) where e is a sentence embedding method. In our work, we use unigram word embeddings learned by the sent2vec model (Pagliardini et al., 2018). Then, e(x) is computed as the average of these unigram embeddings, weighted by the inverse-document frequency (idf) of the words. We use sent2vec because it is trained in an unsupervised way on individual sentences. By contrast, other unsupervised methods like SiameseCBOW (Kenter et al., 2016) or BERT (Devlin et al., 2019) use adjacent sentences as part of the training signal. Length Constraint. Our discrete searching approach is able to impose the output length as a hard constraint, allowing the model to generate summaries of any given length. Suppose the desired output length is s, then our length scorer is 3We use the terminology unsupervised summarization, following Zhou and Rush (2019). While we train the language models on the desired target language, we do not need parallel source-target pairs, i.e., sentences together with their groundtruth summaries. 5035 fLEN(y; s) = ( 1, if |y| = s, −∞, otherwise. (4) In other words, a candidate summary y is infeasible if it does not satisfy the length constraint. In practice, we implement this hard constraint by searching among feasible solutions only. 3.2 Search Space Most sentence generation models choose a word from the vocabulary at each time step, such as autoregressive generation that predicts the next word (Sutskever et al., 2014; Rush et al., 2015), and edit-based generation with deletion or insertion operations (Miao et al., 2019; Dong et al., 2019). In these cases, the search space is |V|s, given a vocabulary V and a summary length s. However, reference summaries are highly extractive. In the headline generation dataset (Rush et al., 2015), for example, 45% of the words in the reference summary also appear in the source sentence. This yields a ceiling of 45 ROUGE-1 F1 points4 for a purely extractive method, which is higher than the current state-of-the-art supervised abstractive result of 39 points (Wang et al., 2019). We are thus motivated to propose our word-extraction approach that extracts a subsequence of the input as the summary. Additionally, we arrange the words in the same order as the input, motivated by the monotonicity assumption in summarization (Yu et al., 2016; Raffel et al., 2017). Formally, we define the search space as a = (a1, . . . , an) ∈{0, 1}n, where n is the length of the input sentence x. The vector a is a Boolean filter over the source words x. The summary sequence can then be represented by y = xa, i.e., we sequentially extract words from the source sequence x by the Boolean vector a. If ai = 1, then xi is extracted for the summary, and vice versa. Further, we only consider the search space of all feasible solutions {a : f(xa; x, s) > −∞}. That is to say, the candidate summary has to satisfy the length constraint in Section 3.1. Equivalently, the output length can be expressed by a constraint on the search space such that P i ai = s. The above restrictions reduce the search space to n s  solutions. In a realistic setting, our search 4We assume an extracted summary has the same length as the reference, and 45% words of the reference are in the original sentence. This gives us a ceiling of 45% precision and recall. Algorithm 1 First-Choice Hill Climbing input objective function f(y; x, s), source sentence x, summary length s, number of steps T, initial random solution a0, neighbor function q(a′|a) for t = 1 to T do yt−1 = xat−1 a′ ∼q(·|at−1) y′ = xa′ if f(y′; x, s) ≥f(yt−1; x, s) then at = a′ else at = at−1 return y∗←−xaT space is much smaller than that of generating words from the entire vocabulary. 3.3 Search Algorithm We optimize our objective function f(y; x, s) by first-choice hill climbing (FCHC, Russell and Norvig, 2016). This is a stochastic optimization algorithm that proposes a candidate solution by local change at every search step. The candidate is accepted if it is better than the current solution. Otherwise, the algorithm keeps the current solution. FCHC maximizes the objective function in a greedy fashion and yields a (possibly local) optimum. Algorithm 1 shows the optimization procedure of our FCHC. For each search step, a new candidate is sampled from the neighbor function q(a′|a). This is accomplished by randomly swapping two actions ai and aj for ai ̸= aj, i.e., replacing a word in the summary with a word from the source sentence that is not in the current summary. The order of selected words is kept as in the source sentence. If the candidate solution achieves a higher score, then it is accepted. Otherwise, the candidate is rejected and the algorithm proceeds with the current solution. Our search terminates if it exceeds a predefined budget. The last solution is returned as the summary, as it is also the best-scored candidate due to our greedy algorithm. One main potential drawback of hill climbing algorithms is that they may get stuck in a local optimum. To alleviate this problem, we restart the algorithm with multiple random initial word selections a0 and return the overall best solution. We set the number of restarts as βR · ns2 and number of search steps as βT · ns2, where βR and βT are controlling hyperparameters. We design the formula to encourage more search for longer input sentences, but only with a tractable growth: linear for input length and quadratic for summary length. As the summary length is usually much smaller 5036 than the input length, quadratic search is possible. Increasing the number of restarts (and search steps) monotonically improves the scoring function, and thus in practice can be set according to the available search budget. Other discrete optimization algorithms can be explored for sentence generation, such as simulated annealing (Liu et al., 2020) and genetic algorithms. Our analysis on short sentences (where exhaustive search is tractable) showed that hill climbing with restarts achieves ROUGE scores similar to exhaustive search (Section 5.4). 4 Evaluation Framework In this section, we will describe the datasets, evaluation metrics, and a widely used baseline (called Lead). Additionally, we report the observation that the commonly used evaluation metric, ROUGE F1, is sensitive to summary length, preferring longer summaries. Thus, we propose to group models with similar output length during evaluation for fair comparison. 4.1 Datasets We evaluate our models on the dataset provided for DUC2004 Task 1 (Over and Yen, 2004) and a headline generation corpus5 (Rush et al., 2015), both widely adopted in the summarization literature. The DUC2004 dataset is designed and used for testing only. It consists of 500 news articles, each paired with four human written summaries. We follow Rush et al. (2015) and adopt DUC2004 for sentence summarization by using only the first sentence of an article as input. The reference summaries are around 10 words long on average. The headline generation dataset (Rush et al., 2015) is derived from the Gigaword news corpus. Each headline/title is viewed as the reference summary of the first sentence of an article. The dataset contains 3.8M training instances and 1951 test instances. The average headline contains ∼8 words; the average source sentence contains ∼30 words. We use 500 held-out validation instances for hyperparameter tuning. Note that the training set is only used to train a language model and sent2vec embeddings. The summarization process itself is not trained in our approach. 5https://github.com/harvardnlp/NAMAS 4.2 Lead Baselines Lead baselines are a strong competitor that extracts the first few characters or words of the input sentence. The DUC2004 shared task includes a Lead baseline, which extracts the first 75 characters as the summary. We call it Lead-C-75. For the Gigaword dataset, the reference has 8 words on average, and it is common to compare with a Lead variant that chooses the first 8 words. We call this baseline Lead-N-n when we choose n words. For fair comparison with previous work (Baziotis et al., 2019; Fevry and Phang, 2018) in Section 5.2, we further introduce a new variant that returns the first p percent of source words as the summary. We denote this baseline by Lead-P-p. 4.3 ROUGE Scores Summarization systems are commonly evaluated by ROUGE scores (Lin, 2004). The ROUGE-1 (or ROUGE-2) score computes the unigram (or bigram) overlap of a generated summary and the reference. ROUGE-L calculates the longest common subsequence. Depending on the dataset, either ROUGE Recall or ROUGE F1 variant is adopted. Since the ROUGE Recall metric is not normalized with regard to length, DUC2004 standard evaluation truncates the summary at 75 characters. This procedure was also adopted by Rush et al. (2015) for the headline generation task, but later Chopra et al. (2016) proposed to report the “more balanced” ROUGE F1 metric for the Gigaword headline generation dataset and abandoned truncation. We follow previous work and use ROUGE F1 for headline generation and truncated ROUGE Recall for DUC2004. 4.4 Summary Length As mentioned, ROUGE F1 was introduced to the evaluation of sentence summarization to better compare models with different output lengths (Chopra et al., 2016; Nallapati et al., 2016). To investigate the effect of summary length on ROUGE F1, we calculate ROUGE F1 scores for the Lead-N-n and Lead-P-p baselines with different length parameters. Figure 2 shows that ROUGE F1 peaks at n ≈18 or p ≈50. The difference between the maximum performance at n ≈18 and the widely adopted baseline (Lead-N-8) is large: 4.2 ROUGE-1 F1 points. A similar effect is observed by Sun et al. (2019) for document summarization. This shows that ROUGE F1 is still sensitive to summary length, and this effect should be 5037 10 20 30 40 50 60 70 Lead-N n 0 5 10 15 20 25 30 Rouge F1 score Rouge-1 Rouge-2 Rouge-L 20 40 60 80 100 Lead-P p 0 5 10 15 20 25 30 Rouge F1 score Rouge-1 Rouge-2 Rouge-L Figure 2: ROUGE F1 scores on the test set of headline generation for Lead-N and Lead-P baselines with different number n and percentage p of leading words. considered during evaluation. We propose to report the average output length of a model and only compare models in the same length bracket. 5 Experiments 5.1 Setup We conduct experiments with two settings, dependent on how the scorers f←→ LM and fSIM are trained. In the first setting, we train the language model and sent2vec embeddings on the source (article) side of the Gigaword headline generation dataset. This complies with Fevry and Phang (2018) and Baziotis et al. (2019). In the second setting, we train the language model and sent2vec embeddings on the target (title) side like Zhou and Rush (2019). In both settings, we do not need parallel source-target pairs. For output length, our headline generation experiment sets the desired target length as 8 words, 10 words, and 50% of the input, as these mirror either the average reference summary length or the average output lengths of our competitors (Wang and Lee, 2018; Zhou and Rush, 2019; Fevry and Phang, 2018; Baziotis et al., 2019). For DUC2004, the desired summary length is set to 13 words, because the standard evaluation script truncates after the first 75 characters (roughly 13 words) in the summary. Our forward and backward language models use long short term memory units (Hochreiter and Schmidhuber, 1997) and are optimized for 50 epochs by stochastic gradient descent. Embeddings and hidden sizes are set to 1024 dimensions. We tune hyperparameters on the development data of the headline corpus, and set the weighting parameter γ to 12 for all models. The search steps and restarts are set to βT = 0.1 and βR = 0.035, respectively. We see a sharp performance improvement when we do more searching. Thus, we choose βT and βR at the critical values due to efficiency concerns. 5.2 Competing Models Besides the Lead baselines discussed in Section 4.2, we compare our models with state-of-the-art unsupervised sentence summarization systems. Wang and Lee (2018)6 use cycle-consistency to reconstruct source sentences from the headline generation corpus (Rush et al., 2015). The latent discrete representation, learned to be similar to (non-parallel) headlines, is used as the summary. Zhou and Rush (2019) optimize an objective function involving language fluency and contextual matching. Their language modeling scorer is trained on headlines of the Gigaword training set; their contextual matching scorer is based on ELMo embeddings (Peters et al., 2018) trained with the Billion Word corpus (Chelba et al., 2013). Their summary length is controlled by a soft length penalty during beam search. Fevry and Phang (2018)7 learn a denoising autoencoder (Vincent et al., 2008) to reconstruct source sentences of the Gigaword training set. Summary length is set to 50% of the input length and is controlled by length embeddings in the decoder. Baziotis et al. (2019)8 propose SEQ3 that uses cycle-consistency to reconstruct source sentences from the Gigaword training set. The length is also set to 50% of the input length, controlled by length embeddings in the intermediate decoder. For the DUC2004 dataset, TOPIARY (Zajic et al., 2004) is the winning system in the competition. They shorten the sentence by rule-based syntaxtree trimming (Dorr et al., 2003), but enhance the resulting summary with topics that are learned on 6Generated summaries are obtained via E-Mail correspondence. Scores differ because of evaluation setup. 7Retrained with official code (https://github.com/ zphang/usc_dae) because the authors use a private test set. 8Retrained with official code (https://github.com/ cbaziotis/seq3), because of different test data. The authors remove 54 noisy instances. Our replication thus achieves slightly lower scores than theirs. 5038 Model Data Len D ROUGE F1 Len O article title external R-1 R-2 R-L A Lead-N-8 ✓ 8 21.39 7.42 20.03 7.9 HC article 8 ✓ 8 23.09 7.50 21.29 7.9 HC title 8 ✓ 8 26.32 9.63 24.19 7.9 B Lead-N-10 ✓ 10 23.03 7.95 21.29 9.8 Wang and Lee (2018) ✓ ✓ 27.29 10.01 24.59 10.8 Zhou and Rush (2019) ✓ billion 26.48 10.05 24.41 9.3 HC article 10 ✓ 10 24.44 8.01 22.21 9.8 HC title 10 ✓ 10 27.52 10.27 24.91 9.8 HC title+twitter 10 ✓ twitter 10 28.26 10.42 25.43 9.8 HC title+billion 10 ✓ billion 10 28.80 10.66 25.82 9.8 C Lead-P-50 ✓ 50% 24.97 8.65 22.43 14.6 Fevry and Phang (2018) ✓ SNLI 50% 23.16 5.93 20.11 14.8 Baziotis et al. (2019) ✓ 50% 24.70 7.97 22.14 15.1 HC article 50p ✓ 50% 25.58 8.44 22.66 14.9 HC title 50p ✓ 50% 27.05 9.75 23.89 14.9 Table 1: Results for headline generation on the Gigaword test set. Data: data used during training (source article, target titles, external corpus). billion: the Billion Word Corpus (Chelba et al., 2013); twitter: the Twitter corpus (Pagliardini et al., 2018); SNLI: the Stanford Natural Language Inference dataset (Bowman et al., 2015). Len D: desired summary length. ROUGE F1 (R-1, R-2, R-L): ROUGE-1, ROUGE-2, ROUGE-L F1 scores. Len O: averaged output length. Best results in bold. Second best results underlined. A: Models with output length around 8 words. B: Models with output length around 10 words. C: Models with output length around 50% of the input. Our hill-climbing (HC) approaches are named in the format of HC data outputLength. Model ROUGE Recall R-1 R-2 R-L Lead-C-75 22.50 6.49 19.72 SEQ3 (Baziotis et al., 2019) 22.13 6.18 19.3 TOPIARY (Zajic et al., 2004) 25.12 6.46 20.12 BOTTLESUM EX (West et al., 2019) 22.85 5.71 19.87 HC article 13 24.21 6.63 21.24 HC title 13 26.04 8.06 22.90 HC title+twitter 13 27.41 8.76 23.89 Table 2: Results on the DUC2004 dataset. full articles. BOTTLESUM EX (West et al., 2019) uses the information bottleneck principle to predict the next sentence in an article. Their method employs a pretrained small GPT-2 model (Radford et al., 2019). 5.3 Results Results for Headline Generation. We first compare with Lead-N-8 (Group A, Table 1). This is a standard baseline in previous work, because the average reference summary contains eight words. Unfortunately, none of the previous papers consider output length during evaluation, making comparisons between their (longer) output summaries and the Lead-N-8 baseline unfair, as discussed in Section 4.4. Our approach, which explicitly controls summary length, considerably outperforms the Lead-N-8 baseline in a fair setting. Next, we compare with state-of-the-art unsupervised methods, whose output summary has roughly 10 words on average (Group B). In this case, we set our hard length constraint as 10 and include the Lead-N-10 baseline for comparison. Trained on the title side only, our HC title 10 model outperforms these competing methods in all ROUGE F1 scores. In particular, Zhou and Rush (2019) use the target side to train the language model, plus the Billion Word Corpus to pretrain embeddings used in the contextual matching scorer. With the same extra corpus to pretrain our sent2vec embeddings, our HC title+billion 10 variant achieves even better performance, outperforming Zhou and Rush (2019) by 2.32 ROUGE-1 and 1.41 ROUGE-L points. The Billion Word Corpus, however, includes complete articles, which implicitly yields unaligned parallel data. This could be inappropriate for an unsupervised method. Thus, we further train sent2vec embeddings on the Twitter corpus by Pagliardini et al. (2018). The HC title+twitter 10 also performs better than HC title 10 and other competitors. In Group C, we compare with the models whose summaries have an average length of 50% of the input sentence. We set our desired target length to 50% as well, and include the Lead-P-50 baseline. Previous studies report a performance improvement over the Lead-N-8 baseline, but in fact, Table 1 shows that they do not outperform the appropriate Lead baseline Lead-P-50. Our model is the only unsupervised summarization system that outperforms the Lead-P-50 baseline on this dataset, 5039 even though it is trained solely on the article side. It is noted that our models trained on the title side (HC title) consistently outperform those trained on the article side (HC article). This is not surprising because the former can generate headlines from the learned target distribution. This shows the importance of learning a summary language model even if we do not have supervision of parallel sourcetarget data. Results for DUC2004. Table 2 shows the results on the DUC2004 data. As this dataset is for test only, we directly transfer the models HC article and HC title from the headline generation corpus with the same hyperparameters (except for length). As shown in the table, we outperform all previous methods and the Lead-C-75 baseline. The results are consistent with Table 1, showing the generalizability of our approach. Human Evaluation. We conduct human evaluation via pairwise comparison of system outputs, in the same vein as (West et al., 2019). The annotator sees the source sentence along with the headline generated by our system and a competing method, presented in random order. The annotator is asked to compare the fidelity and fluency of the two systems, choosing among the three options (i) the first headline is better (ii) the second headline is better, and (iii) both headlines are equally good/bad. This task is repeated for 100 instances with 5 annotators each. The final label is selected by majority voting. The inter-annotator agreement (Krippendorff’s alpha) is 0.25 when our model is compared with Wang and Lee (2018) and 0.17 with Zhou and Rush (2019). We report the aggregated score of our system in Table 3. For each sample, we count 1 point if our model wins, 0 points if it ties, -1 point if it loses. The points are normalized by the number of samples. The results show an advantage of our model over Wang and Lee (2018), especially in fluency. Our model is also on par with Zhou and Rush (2019). Note again that we achieve this with fewer data. 5.4 Analysis In this section, we conduct an in-depth analysis of our model, based on HC title 10 for headline generation. Search Objective. Table 4 provides an ablation study on our objective function. It shows that both language fluency and semantic similarity play a Models Score (#wins/#ties/#loses) Fidelity Fluency HC vs. WL +0.18 (44/30/26) +0.30 (45/40/15) HC vs. ZR +0.05 (35/35/30) -0.03 (24/49/27) Table 3: Human evaluation in a pairwise comparison setting on 100 headline generation instances. We show the scores of our model (HC title 10) when it is compared with WL (Wang and Lee, 2018) and ZR (Zhou and Rush, 2019), in terms of average score of fidelity and fluency: 1 (wins), 0 (ties), and -1 (loses). Objective ROUGE F1 scores f = R-1 R-2 R-L f←→ LM · fSIM (full model) 27.52 10.27 24.91 f−→ LM · fSIM 27.50 10.15 24.79 f←→ LM 25.24 8.87 23.09 f−→ LM 25.18 8.72 22.93 fSIM 20.31 4.08 18.19 Table 4: Ablation study of the search objective. Model HC title 10 on the headline generation test set. Length constraint term omitted from notation. role in measuring the quality of a summary. The bi-directional language model is also slightly better than a uni-directional language model. Search Algorithm. In Figure 3, we compare our FCHC with the theoretical optimum on short sentences where exhaustive search is tractable. For only 3% of the instances with source sentence length between 25 and 30 words, our FCHC algorithm does not find the global optimum. In 21% of those cases, the better objective score leads to a higher ROUGE-L score. This shows that FCHC with restarts is a powerful enough search algorithm for word extraction-based sentence summarization. Positional Bias. We analyze the positional bias of each algorithm by plotting the normalized frequency of extracted words within four different areas of the source sentence. As shown in Figure 4, the extraction positions of words in the reference headlines are slightly skewed towards the beginning of the source sentence. Our hill-climbing algorithm performs distributed edits over the sentence, which is reflected in the flatter graph across the source sentence areas. By contrast, beam search (Zhou and Rush, 2019) is more biased towards the first quarter of the source sentence. Cycle consistency models (Wang and Lee, 2018; Baziotis et al., 2019) show a strong bias towards the first half of the source sentence. We suspect that the reconstruction decoder is easily satisfied with the beginning of the source sentence as the discrete latent variable, 5040 10 15 20 25 30 source sentence length 30 20 10 0 10 20 difference in Rouge-L score Rouge-L 0.0 0.2 0.4 0.6 0.8 1.0 difference in objective score objective score Figure 3: Orange crosses show the objective score optimized by exhaustive search minus the objective score optimized by FCHC. Blue pluses show the ROUGE-L difference between exhaustive search and FCHC. Plotted for the 1135 instances in the headline generation test set, where the source sentence has 30 words or fewer. 0% 25% 50% 75% 100% source sentence area 0.0 0.1 0.2 0.3 0.4 normalized extraction frequency reference HC_title_10 Wang and Lee (2018) Zhou and Rush (2019) Baziotis et al. (2019) Figure 4: Positional bias for different systems, calculated for the headline generation test set. The source sentence is divided into four areas: 0–25%, 25–50%, 50–75%, and 75-100% of the sentence. The y-axis shows the normalized frequency of how often a word in the summary is extracted from one of the four source sentence areas. because of its autoregressive decoding. Case Study. We show example summaries generated by our system in Figure 5. We see that the HC title models indeed learn the style of headlines, known as headlinese. As shown, HC title often uses simple tense and drops articles (e.g., “a” and “the”). The summaries generated by HC article tend to waste word slots by including an uninformative determiner. It is also seen that we can control the length in an explicit way. Comparing HC title with desired lengths of 8 and 10, we see that the additional two words are used to include more information, such as the day of the meeting in Example 2 or the gender of the injured person in Example 3. 1. Input: a german registered container ship ran aground at the entrance to the french port of le havre early tuesday , but authorities said there were no casualties . Reference: container ship runs aground in french port HC article 10: a container ship ran aground but there were no casualties HC title 10: container ship ran aground at french port but no casualties HC title 8: ship ran aground at french port no casualties 2. Input: fidel castro , cuba’s president of the council of state , met with a chinese delegation here tuesday . Reference: castro meets chinese official HC article 10: fidel castro cuba ’s president met with a chinese delegation HC title 10: fidel castro cuba ’s president met with chinese delegation tuesday HC title 8: fidel castro ’s president met with chinese delegation 3. Input: two grenades exploded near a national police station monday , slightly injuring one woman , news reports said . Reference: two grenades explode near spanish police station HC article 10: two grenades exploded near a police station injuring one woman HC title 10: two grenades exploded near a police station injuring one woman HC title 8: two grenades exploded near police station injuring one Table 5: Example summaries for headline generation test set. 6 Conclusion We proposed a novel word-extraction model for sentence summarization that generates summaries by optimizing an objective function of language fluency and semantic similarity. A hard length constraint is also imposed in our objective function. In a controlled experiment, our model achieves better performance than strong baselines on headline generation and DUC2004 datasets. Acknowledgments We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), under grant Nos. RGPIN-2019-04897, and RGPIN-2020-04465. Lili Mou is also supported by AltaML, the Amii Fellow Program, and the Canadian CIFAR AI Chair Program. This research was enabled in part by the support of Compute Canada (www.computecanada.ca). References Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary image captioning with constrained beam search. In EMNLP, pages 936–945. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben5041 gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, and Alexandros Potamianos. 2019. SEQ3: Differentiable sequence-to-sequence-to-sequence autoencoder for unsupervised abstractive sentence compression. In NAACL-HLT, pages 673–681. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP, pages 632–642. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In NAACL-HLT, pages 93–98. James Clarke and Mirella Lapata. 2006. Constraintbased sentence compression an integer programming approach. In COLING-ACL, volume 2, pages 144–151. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171–4186. Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In ACL, pages 3393– 3402. Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: A parse-and-trim approach to headline generation. In HLT-NAACL Workshop on Text Summarization, pages 1–8. G¨unes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. JAIR, 22(1):457–479. Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-news: A largescale multi-document summarization dataset and abstractive hierarchical model. In ACL, pages 1074– 1084. Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proc. 2nd Workshop on Neural Machine Translation and Generation, pages 45–54. Thibault Fevry and Jason Phang. 2018. Unsupervised sentence compression using denoising autoencoders. In CoNLL, pages 413–422. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In EMNLP, pages 4098–4109. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In ACL, pages 1631–1640. Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In HLT-NAACL, pages 362–370. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In NIPS, pages 820–828. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In ACL, pages 1535–1546. Chiori Hori and Sadaoki Furui. 2004. Speech summarization: An approach through word extraction and a method for evaluation. IEICE Trans. Inf. & Syst., 87(1):15–25. Tom Kenter, Alexey Borisov, and Maarten de Rijke. 2016. Siamese CBOW: Optimizing word embeddings for sentence representations. In ACL, pages 941–951. Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In EMNLP, pages 1328–1338. Dhruv Kumar, Lili Mou, Lukasz Golab, and Olga Vechtomova. 2020. Iterative edit-based unsupervised sentence simplification. In ACL. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In ACL Workshop: Text Summarization Branches Out, pages 74–81. Xianggen Liu, Lili Mou, Fandong Meng, Hao Zhou, Jie Zhou, and Sen Song. 2020. Unsupervised paraphrasing by simulated annealing. In ACL. Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. CGMH: Constrained sentence generation by Metropolis-Hastings sampling. In AAAI, pages 6834–6842. Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In EMNLP, pages 404– 411. 5042 Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In AAAI, pages 3075–3081. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In CoNLL, pages 280–290. P Over and J Yen. 2004. An introduction to DUC-2004: Intrinsic evaluation of generic news text summarization systems. In Proc. Document Understanding Conference. Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In NAACL-HLT, pages 528–540. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In ICLR. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT, pages 2227–2237. Dragomir R. Radev, Hongyan Jing, and Malgorzata Budzikowska. 2000. Centroid-based summarization of multiple documents: sentence extraction, utilitybased evaluation, and user studies. In NAACL-ANLP 2000 Workshop: Automatic Summarization. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog. Colin Raffel, Minh-Thang Luong, Peter J Liu, Ron J Weiss, and Douglas Eck. 2017. Online and lineartime attention by enforcing monotonic alignments. In ICML, pages 2837–2846. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In EMNLP, pages 379–389. Stuart J Russell and Peter Norvig. 2016. Artificial Intelligence: A Modern Approach. Pearson Education Limited. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In ACL, pages 1073–1083. Simeng Sun, Ori Shapira, Ido Dagan, and Ani Nenkova. 2019. How to compare summarizers without target length? Pitfalls, solutions and re-examination of the neural summarization literature. In Proc. Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 21–29. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In ICML, pages 1096–1103. Kai Wang, Xiaojun Quan, and Rui Wang. 2019. BiSET: Bi-directional selective encoding with template for abstractive summarization. In ACL, pages 2153–2162. Yaushian Wang and Hung-yi Lee. 2018. Learning to encode text as human-readable summaries using generative adversarial networks. In EMNLP, pages 4187–4195. Peter West, Ari Holtzman, Jan Buys, and Yejin Choi. 2019. BottleSum: Unsupervised and selfsupervised sentence summarization using the information bottleneck principle. In EMNLP-IJCNLP, pages 3750–3759. Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online segment to segment neural transduction. In EMNLP, pages 1307–1316. David Zajic, Bonnie Dorr, and Richard Schwartz. 2004. BBN/UMD at DUC-2004: Topiary. In HLT-NAACL Document Understanding Workshop, pages 112– 119. Hao Zheng and Mirella Lapata. 2019. Sentence centrality revisited for unsupervised summarization. In ACL, pages 6236–6247. Jiawei Zhou and Alexander M Rush. 2019. Simple unsupervised summarization by contextual matching. In ACL, pages 5101–5106. Liang Zhou and Eduard Hovy. 2004. Template-filtered headline summarization. In ACL Workshop: Text Summarization Branches Out, pages 56–60.
2020
452
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5043–5054 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5043 Exploring Content Selection in Summarization of Novel Chapters Faisal Ladhak1∗, Bryan Li2∗, Yaser Al-Onaizan3, Kathleen McKeown1,3 1Columbia University, 2University of Pennsylvania, 3Amazon AI [email protected], [email protected], [email protected], [email protected] Abstract We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides. This is a harder task than the news summarization task, given the chapter length as well as the extreme paraphrasing and generalization found in the summaries. We focus on extractive summarization, which requires the creation of a gold-standard set of extractive summaries. We present a new metric for aligning reference summary sentences with chapter sentences to create gold extracts and also experiment with different alignment methods. Our experiments demonstrate significant improvement over prior alignment approaches for our task as shown through automatic metrics and a crowd-sourced pyramid analysis. 1 Introduction When picking up a novel one is reading, it would be helpful to be reminded of what happened last. To address this need, we develop an approach to generate extractive summaries of novel chapters. This is much harder than the news summarization tasks on which most of the summarization field (e.g., (Cheng and Lapata, 2016; Grusky et al., 2018; Paulus et al., 2017)) focuses; chapters are on average seven times longer than news articles. There is no one-to-one correspondence between summary and chapter sentences, and the summaries in our dataset use extensive paraphrasing, while news summaries copy most of their information from the words used in the article. We focus on the task of content selection, taking an initial, extractive summarization approach given the task difficulty.1 As the reference sum∗Equal contribution. Work done while at Amazon. 1We tried two abstractive models (Chen and Bansal, 2018; Liu and Lapata, 2019) but ROUGE was low and the output was poor with many repetitions and hallucinations. maries are abstractive, training our model requires creating a gold-standard set of extractive summaries. We present a new approach for aligning chapter sentences with the abstractive summary sentences, incorporating weighting to ROUGE (Lin, 2004) and METEOR (Lavie and Denkowski, 2009) metrics to enable the alignment of salient words between them. We also experiment with BERT (Devlin et al., 2018) alignment. We use a stable matching algorithm to select the best alignments, and show that enforcing one-toone alignments between reference summary sentences and chapter sentences is the best alignment method of those used in earlier work. We obtain a dataset of summaries from five study guide websites paired with chapter text from Project Gutenberg. Our dataset consists of 4,383 unique chapters, each of which is paired with two to five human-written summaries. We experiment with generating summaries using our new alignment method within three models that have been developed for single document news summarization (Chen and Bansal, 2018; Kedzie et al., 2018; Nallapati et al., 2017). Our evaluation using automated metrics as well as a crowd-sourced pyramid evaluation shows that using the new alignment method produces significantly better results than prior work. We also experiment with extraction at different levels of granularity, hypothesizing that extracting constituents will work better than extracting sentences, since summary sentences often combine information from several different chapter sentences. Here, our results are mixed and we offer an explanation for why this might be the case. Our contributions include a new, challenging summarization task, experimentation that reveals potential problems with previous methods for creating extracts, and an improved method for creating gold standard extracts. 5044 2 Related Work Relatively little work has been done in summarization of novels, but early work (Mihalcea and Ceylan, 2007) provided a dataset of novel/summary pairs drawn from CliffsNotes and GradeSaver and developed an unsupervised system based on Meade (Radev et al., 2001) and TextRank (Mihalcea and Tarau, 2004) that showed promise. More recently, Zhang et al. (2019) developed an approach for summarizing characters within a novel. We hypothesize that our proposed task is more feasible than summarizing the full novel. Previous work has summarized documents using Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) to extract elementary discourse units (EDUs) for compression and more contentpacked summaries (Daum´e III and Marcu, 2002; Li et al., 2016; Arumae et al., 2019). Some abstractive neural methods propose attention to focus on phrases within a sentence to extract (Gehrmann et al., 2018). Fully abstractive methods are not yet appropriate for our task due to extensive paraphrasing and generalization. While previous work on semantic textual similarity is relevant to the problem of finding alignments between chapter and summary text, the data available (Cer et al., 2017; Dolan and Brockett, 2005) is not suitable for our domain, and the alignments we generated from this data were of a poorer quality than the other methods in our paper. 3 Data We collect summary-chapter pairs from five online study guides: BarronsBookNotes (BB), BookWolf (BW), CliffsNotes (CN), GradeSaver (GS) and NovelGuide (NG).2 We select summaries from these sources for which the complete novel text can be found on Project Gutenberg. Our initial dataset, for summaries with two or more sources, includes 9,560 chapter/summary pairs for 4,383 chapters drawn from 79 unique books. As our analysis shows a very long tail, two rounds of filtering were applied. First, we remove reference texts with >700 sentences, as these are too large to fit into mini-batches (∼10% of data). Second, we remove summaries with a compres2We do not have the rights to redistribute the data. To allow others to replicate the dataset, we provide a list of novel chapters we used at https://github.com/ manestay/novel-chapter-dataset Summary Src Mean (stdev) Median Total # CN 442 (369) 347 1,053 BB 517 (388) 429 1,000 GS 312 (311) 230 1,983 BW 276 (232) 214 182 NG 334 (302) 244 2,070 All Sources 373 (339) 279 6,288 Chapter Text 5,165 (3,737) 4,122 6,288 Table 1: Train Split Statistics: World count statistics with total number for summaries and chapter text. sion ratio of <2.0, as such wordy summaries often contain a lot of commentary (i.e. phrases that have no correspondence in the chapter, ∼5%). This results in 8,088 chapter/summary pairs, and we randomly assign each book to train, development and test splits (6,288/938/862 pairs respectively). After filtering, chapters are on average seven times longer than news articles from CNN/Dailymail (5,165 vs 761 words), and chapter summaries are eight times longer than news summaries (372 vs 46 words). Train split statistics are given in Table 1. These statistics reveal the large variation in length. Furthermore, we calculate word overlap, the proportion of vocabulary that overlaps between the summary and chapter. For novels, this is 33.7%; for CNN/DailyMail news, this is 68.7%. This indicates the large amount of paraphrasing in the chapter summaries in relation to the original chapter. In Figure 1, we show the first three sentences of a reference summary for Chapter 11, The Awakening which is paraphrased from several, nonconsecutive chapter sentences shown near the bottom of the figure. We also show a portion of the summaries from two other sources which convey the same content and illustrate the extreme level of paraphrasing as well as differences in detail. We show the full chapter and three full reference summaries in Appendix A.2. 4 Alignment Experiments To train models for content selection, we need saliency labels for each chapter segment that serve as proxy extract labels, since there are no gold extracts. In news summarization, these are typically produced by aligning reference summaries to the best matching sentences from the news article. Here, we align the reference summary sentences with sentences from the chapter. We address two questions for aligning chapter 5045 GS: In this chapter Mr. and Mrs. Pontellier participate in a battle of wills. When Mr. Pontellier gets back from the beach, he asks his wife to come inside. She tells him not to wait for her, at which point he becomes irritable and more forcefully tells her to come inside. NG: Mr. Pontellier is surprised to find Edna still outside when he returns from escorting Madame Lebrun home. ... although he asks her to come in to the house with him, she refuses, and remains outside, exercising her own will. BW: Leonce urges Edna to go to bed, but she is still exhilarated and decides to stay outside in the hammock... Chapter sentences: He had walked up with Madame Lebrun and left her at the house. ”Do you know it is past one o’clock? Come on,” and he mounted the steps and went into their room. “Don’t wait for me,” she answered. “You will take cold out there,” he said, irritably. “What folly is this? Why don’t you come in?” Figure 1: Portions of three reference summaries for The Awakening, Chapter 11 by Kate Chopin, along with chapter sentences they summarize. and summary sentences to generate gold standard extracts: 1) Which similarity metric works best for alignment (Section 4.1)? and 2) Which alignment method works best (Section 4.2)? 4.1 Similarity Metrics ROUGE is commonly used as a similarity metric to align the input document and the gold standard summary to produce gold extracts (Chen and Bansal, 2018; Nallapati et al., 2017; Kedzie et al., 2018). One drawback to using ROUGE as a similarity metric is that it weights all words equally. We want to, instead, assign a higher weight for the salient words of a particular sentence. To achieve this, we incorporate a smooth inverse frequency weighting scheme (Arora et al., 2017) to compute word weights. The weight of a given word is computed as follows: W(wi) = α α+p(wi) (1) where p(wi) is estimated from the chapter text and α is a smoothing parameter (here α = 1e−3). Ngram and Longest Common Subsequence (LCS) weights are derived by summing the weights of each of the individual words in the N-gram/LCS. We take the average of ROUGE-1, 2, L using this weighting scheme as the metric for generating extracts, R-wtd, incorporating a stemmer to match morphological variants (Porter, 1980). Similarity Metrics Results: We compare Rwtd against ROUGE-L (Chen and Bansal, 2018) (R-L), and ROUGE-1, with stop-word removal and stemming (Kedzie et al., 2018) (R-1), for sentence alignment. To incorporate paraphrasing, we average METEOR (Banerjee and Lavie, 2005) scores with ROUGE-1,2,L for both un-weighted (RM) and weighted scores (RM-wtd). Given the recent success of large, pre-trained language models for downstream NLP tasks, we also experiment with BERT (Devlin et al., 2019) to compute alignment, using cosine similarity between averaged chapter segment and summary segment vectors. We compare the generated gold extracts using RL F1 against reference summaries, to determine a shortlist for human evaluation (to save costs). For the human evaluation, we ask crowd workers to measure content overlap between the generated alignments, and the reference summary, on a subset of the validation data. For each summary reference, they are shown a generated alignment and asked to indicate whether it conveys each of up to 12 summary reference sentences. An example task is shown in Appendix Figure 7. We then compute precision and recall based on the number of summary sentences conveyed in the extract. Table 2 shows that humans prefer alignments generated using R-wtd by a significant margin.3 Sample alignments generated by R-wtd in comparison to the baseline are shown in Figure 2. Method RM R-wtd RM-wtd R-1 R-L BERT R-L F1 41.2 40.6 39.3 37.1 35.1 35.4 H-F1 33.7 44.8 38.8 – – – Table 2: ROUGE-L F1, and crowd-sourced F1 scores (H-F1) for content overlap. 4.2 Alignment Methods Some previous work in news summarization has focused on iteratively picking the best article sentence with respect to the summary, in order to get the gold extracts (Nallapati et al., 2017; Kedzie et al., 2018), using ROUGE between the set of selected sentences and the target summary. In contrast, others have focused on picking the best article sentence with respect to each sentence in the summary (Chen and Bansal, 2018). We investigate which approach yields better alignments. We refer 3We suspect incorporating METEOR by averaging didn’t work because the scale is different from ROUGE scores. 5046 to the former method as summary-level alignment and the latter method as sentence-level alignment. For sentence-level alignment, we note that the problem of finding optimal alignments is similar to a stable matching problem. We wish to find a set of alignments such that there exists no chapter segment a and summary segment x where both a and x would prefer to be aligned with each other over their current alignment match. We compute alignments based on the Gale-Shapley algorithm (1962) for stable matching and compare it with the greedy approach from prior work (Chen and Bansal, 2018). For summary-level alignment (Nallapati et al., 2017; Kedzie et al., 2018), we compare two variants: selecting sentences until we reach the reference word count (WL summary), and selecting sentences until the ROUGE score no longer increases (WS summary). Crowd-sourced evaluation results (Table 3) show that sentence-level stable matching is significantly better. We use this in the remainder of this work. These differences in alignments affect earlier claims about the performance of summarization systems, as they were not measured, yet have a significant impact.4 Method P R F1 Greedy Sent 48.4 48.7 48.5 Stable Sent 52.8 52.6 52.7 WL summary 34.5 36.6 36.7 WS summary 42.7 36.6 38.0 Table 3: Crowd sourced evaluation on content overlap for summary vs. sentence level on validation set. Ref summary: He says he will, as soon as he has finished his last cigar. R-L greedy: “You will take cold out there,” he said, irritably. R-L stable: He drew up the rocker, hoisted his slippered feet on the rail, and proceeded to smoke a cigar. R-wtd stable: “Just as soon as I have finished my cigar.” Figure 2: A reference summary sentence and its alignments. R-L greedy and R-L stable are incorrect because they weight words equally (e.g. said, cigar, ‘.’). 4Bold text indicates statistical significance with p < 0.05. 5 Summarization Experiments In order to assess how alignments impact summarization, we train three extractive systems – hierarchical CNN-LSTM extractor (Chen and Bansal, 2018) (CB), seq2seq with attention (Kedzie et al., 2018) (K), and RNN (Nallapati et al., 2017) (N). The target word length of generated summaries is based on the average summary length of similarly long chapters from the training set.5 We also experiment with aligning and extracting at the constituent level,6 given our observation during data analysis that summary sentences are often drawn from two different chapter sentences. We create syntactic constituents by taking sub-trees from constituent parse trees for each sentence (Manning et al., 2014) rooted with S-tags. To ensure that constituents are long enough to be meaningful, we take the longest S-tag when one Stag is embedded within others (see Appendix A.5). Summary quality is evaluated on F1 scores for R-{1,2,L}, and METEOR. Each chapter has 2-5 reference summaries and we evaluate the generated summary against all the reference summaries. Part of a generated summary of extracted constituents for Chapter 11, The Awakening, is shown in Figure 3. The full generated summaries for this chapter (both extracted constituents and extracted sentences) are shown in Appendix A.2. Generated Summary: |I thought I should find you in bed , ” ||said her husband , |when he discovered her |lying there . |He had walked up with Madame Lebrun and left her at the house . ||She heard him moving about the room ; |every sound indicating impatience and irritation .| Figure 3: System generated summary, extracted constituents in teal, and separated by |. 5.1 Results We compare our method for generating extractive targets (ROUGE weighted, with stable matching at the sentence level) against the baseline method for generating extractive targets for each of the systems. Table 4 shows three rows for each summarization system: using the original target summary labels, and using either constituent or sentence segments. We see our proposed alignment method performs significantly better for all mod5We do so by binning chapters into 10 quantiles by length. 6Prior work has used EDUs, but automated parsers such as (Ji and Eisenstein, 2014) perform poorly in this domain. 5047 Model Seg Method R-1 R-2 R-L METEOR CB sent baseline 33.1 5.5 30.0 13.9 sent R-wtd 35.8 6.9 33.4 15.2 const R-wtd 36.2 6.9 35.4 15.2 K sent baseline 34.3 6.4 31.6 14.6 sent R-wtd 35.6 6.9 33.2 15.0 const R-wtd 36.2 6.9 35.2 15.1 N sent baseline 34.6 6.4 31.9 14.6 sent R-wtd 35.7 7.0 33.3 15.1 const R-wtd 35.9 7.0 35.2 15.0 Table 4: ROUGE-F1, METEOR for generated summaries. ”Baseline” is the method used for that model. els. ROUGE-L in particular increases 10% to 18% relatively over the baselines. Moreover, it would seem at first glance that the K and N baseline models perform better than the CB baseline, however this difference has nothing to do with the architecture choice. When we use our extractive targets, all three models perform similarly, suggesting that the differences are mainly due to small, but important, differences in their methods for generating extractive targets. Human Evaluation: Given questions about the reliability of ROUGE (Novikova et al., 2017; Chaganty et al., 2018), we perform human evaluation to assess which system is best at content selection. We use a lightweight, sampling based approach for pyramid analysis that relies on crowd-sourcing, proposed by Shapira et al. (2019), and correlates well with the original pyramid method (Nenkova et al., 2007). We ask the crowd workers to indicate which of the sampled reference summary content units are conveyed in the generated summary.7 We evaluated our best system + alignment on extraction of sentences and of constituents (CB R-wtd), along with a baseline system (CB Kalign),8 using the crowd-sourced pyramid evaluation method. To produce readable summaries for extracted constituents, each extracted constituent is included along with the context of the containing sentence (black text in Figure 3). We find that CB Sent R-wtd has significantly higher content overlap with reference summaries in Table 5. 6 Discussion and Conclusion We present a new challenging task for summarization of novel chapters. We show that sentence7See the screen shot in Appendix A.4 8We use the best baseline alignment, Kedzie et al. (2018) with the CB model to keep model choice consistent. System Pyramid Score CB K-align 17.9 CB Sent R-wtd 18.9 CB Const R-wtd 18.1 Table 5: Crowd-sourced Pyramid Evaluation. level, stable-matched alignment is better than the summary-level alignment used in previous work and our proposed R-wtd method for creating gold extracts is shown to be better than other similarity metrics. The resulting system is the first step towards addressing this task. While both human evaluation and automated metrics concur that summaries produced with our new alignment approach outperform previous approaches, they contradict on the question of whether extraction is better at the constituent or the sentence level. We hypothesize that because we use ROUGE to score summaries of extracted constituents without context, the selected content is packed into the word budget; there is no potentially irrelevant context to count against the system. In contrast, we do include sentence context in the pyramid evaluation in order to make the summaries readable for humans and thus, fewer constituents make it into the generated summary for the human evaluation. This could account for the increased score on automated metrics. It is also possible that smaller constituents can be matched to phrases within the summary with metrics such as ROUGE, when they actually should not have counted. In future work, we plan to experiment more with this, examining how we can combine constituents to make fluent sentences without including potentially irrelevant context. We would also like to further experiment with abstractive summarization to re-examine whether large, pre-trained language models (Liu and Lapata, 2019) can be improved for our domain. We suspect these models are problematic for our documents because they are, on average, an order of magnitude larger than what was used for pretraining the language model (512 tokens). Another issue is that the pre-trained language models are very large and take up a substantial amount of GPU memory, which limits how long the input document can be. While truncation of a document may not hurt performance in the news domain due to the heavy lede bias, in our domain, truncation can hurt the performance of the summarizer. 5048 References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Kristjan Arumae, Parminder Bhatia, and Fei Liu. 2019. Towards annotating and creating summary highlights at sub-sentence level. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 64–69, Hong Kong, China. Association for Computational Linguistics. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Daniel Cer, Mona Diab, Eneko Agirre, I˜nigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evalaution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 643–653, Melbourne, Australia. Association for Computational Linguistics. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. Association for Computational Linguistics. Hal Daum´e III and Daniel Marcu. 2002. A noisychannel model for document compression. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 449–456. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). David Gale and Lloyd S Shapley. 1962. College admissions and the stability of marriage. The American Mathematical Monthly, 69(1):9–15. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719, New Orleans, Louisiana. Association for Computational Linguistics. Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 13–24. Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018. Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818–1828. Alon Lavie and Michael J Denkowski. 2009. The meteor metric for automatic evaluation of machine translation. Machine translation, 23(2-3):105–115. Junyi Jessy Li, Kapil Thadani, and Amanda Stent. 2016. The role of discourse units in near-extractive summarization. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 137–147, Los Angeles. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of 5049 the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3728–3738, Hong Kong, China. Association for Computational Linguistics. William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: toward a functional theory of text organization. Text: Interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenn˜y Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. R. Mihalcea and P. Tarau. 2004. TextRank: Bringing order into texts. In Proceedings of EMNLP-04and the 2004 Conference on Empirical Methods in Natural Language Processing. Rada Mihalcea and Hakan Ceylan. 2007. Explorations in automatic book summarization. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 380–389, Prague, Czech Republic. Association for Computational Linguistics. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-First AAAI Conference on Artificial Intelligence. Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. 2007. The pyramid method: Incorporating human content selection variation in summarization evaluation. ACM Trans. Speech Lang. Process., 4(2). Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Martin F Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130–137. Dragomir Radev, Sasha Blair-Goldensohn, and Zhu Zhang. 2001. Experiments in single and multidocument summarization using MEAD. In First Document Understanding Conference, New Orleans, LA. Ori Shapira, David Gabay, Yang Gao, Hadar Ronen, Ramakanth Pasunuru, Mohit Bansal, Yael Amsterdamer, and Ido Dagan. 2019. Crowdsourcing lightweight pyramids for manual summary evaluation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 682–687, Minneapolis, Minnesota. Association for Computational Linguistics. Weiwei Zhang, Jackie Chi Kit Cheung, and Joel Oren. 2019. Generating character descriptions for automatic summarization of fiction. In The ThirtyThird AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7476–7483. 5050 A Appendix A.1 Acknowledgments We would like to thank Spandana Gella for her contributions to the project. We would like to thank Jonathan Steuck, Alessandra Brusadin, and the rest of the AWS AI Data team for their invaluable feedback in the data annotation process. We would finally like to thank Christopher Hidey, Christopher Kedzie, Emily Allaway, Esin Durmus, Fei-Tzin Lee, Feng Nan, Miguel Ballesteros, Ramesh Nallapati, and the anonymous reviewers for their valuable feedback on this paper. A.2 Example Chapter and Summaries We show the full text of Chapter 11, The Awakening by Kate Chopin in Figure 4. We show three reference summaries in Figure 5, and two generated summaries using our best alignment method in Figure 6. While there are differences in length and level of detail, there are also clearly similarities in covered content. A.3 Target Word Length for Summaries The target word length for generated summaries is a function of the input chapter word count (wcchapter). We divide the train set into 10 quantiles, and in each quantile (or bin), associate it to the mean compression ratio (CR): CR = wcchapter wcref summ (2) CRquantile = 1 n n X i=1 CRi (3) Where wcrefsumm is the word count of the reference summary, and CRi is the compression ratio of the i-th quantile item. The target word length for the generated summary (wcgen summ) is given by: wcgen summ = 1 CRquantile ∗wcchapter (4) Generated summaries are created by extracting segments with the highest model probability until this budget is reached (without truncation). Oracle summaries also use this target word length, but may be shorter if the original summary had few segments (as we extract one chapter segment for each summary segment). Quantile Min wc Max wc CR 1 44 1,232 6.67 2 1,233 1,711 9.09 3 1,712 2,174 9.09 4 2,175 2,758 10.00 5 2,579 3,361 11.11 6 3,362 4,165 12.5 7 4,166 5,374 14.29 8 5,375 7,762 14.29 9 7,763 13,028 16.67 10 13,029 70,436 20 Table 6: Quantiles: For each quantile (bin), we show its max and min word words, and its compression ratio. A.4 SCU Evaluation Task Setup To obtain the distractors, we sample 2 SCUs from different chapters from the same book. We insert one of them, the positive distractor, into the generated summary, as well as into the list of statements, so it will always be correct. We insert the other, the negative distractor, only into the list of statements, so it will always be incorrect. A.5 Constituent Extraction algorithm Algorithm 1 extracts subtrees from a constituent parse tree. These subtrees are constituents, and break down sentences into meaningful spans of text. Constituents are one of 1. A relative clause 2. The highest level S or SBAR node in its subtree with (NP, VP) children 3. The highest level VP node above 2) 4. The remaining nodes in the tree that were not extracted with 1), 2) or 3) 5051 “What are you doing out here, Edna? I thought I should find you in bed,” said her husband, when he discovered her lying there. He had walked up with Madame Lebrun and left her at the house. His wife did not reply. “Are you asleep?” he asked, bending down close to look at her. “No.” Her eyes gleamed bright and intense, with no sleepy shadows, as they looked into his. “Do you know it is past one o’clock? Come on,” and he mounted the steps and went into their room. “Edna!” called Mr. Pontellier from within, after a few moments had gone by. “Don’t wait for me,” she answered. He thrust his head through the door. “You will take cold out there,” he said, irritably. “What folly is this? Why don’t you come in?” “It isn’t cold; I have my shawl.” “The mosquitoes will devour you.” “There are no mosquitoes.” She heard him moving about the room; every sound indicating impatience and irritation. Another time she would have gone in at his request. She would, through habit, have yielded to his desire; not with any sense of submission or obedience to his compelling wishes, but unthinkingly, as we walk, move, sit, stand, go through the daily treadmill of the life which has been portioned out to us. “Edna, dear, are you not coming in soon?” he asked again, this time fondly, with a note of entreaty. “No; I am going to stay out here.” “This is more than folly,” he blurted out. “I can’t permit you to stay out there all night. You must come in the house instantly.” With a writhing motion she settled herself more securely in the hammock. She perceived that her will had blazed up, stubborn and resistant. She could not at that moment have done other than denied and resisted. She wondered if her husband had ever spoken to her like that before, and if she had submitted to his command. Of course she had; she remembered that she had. But she could not realize why or how she should have yielded, feeling as she then did. “Leonce, go to bed,” she said, “I mean to stay out here. I don’t wish to go in, and I don’t intend to. Don’t speak to me like that again; I shall not answer you.” Mr. Pontellier had prepared for bed, but he slipped on an extra garment. He opened a bottle of wine, of which he kept a small and select supply in a buffet of his own. He drank a glass of the wine and went out on the gallery and offered a glass to his wife. She did not wish any. He drew up the rocker, hoisted his slippered feet on the rail, and proceeded to smoke a cigar. He smoked two cigars; then he went inside and drank another glass of wine. Mrs. Pontellier again declined to accept a glass when it was offered to her. Mr. Pontellier once more seated himself with elevated feet, and after a reasonable interval of time smoked some more cigars. Edna began to feel like one who awakens gradually out of a dream, a delicious, grotesque, impossible dream, to feel again the realities pressing into her soul. The physical need for sleep began to overtake her; the exuberance which had sustained and exalted her spirit left her helpless and yielding to the conditions which crowded her in. The stillest hour of the night had come, the hour before dawn, when the world seems to hold its breath. The moon hung low, and had turned from silver to copper in the sleeping sky. The old owl no longer hooted, and the water-oaks had ceased to moan as they bent their heads. Edna arose, cramped from lying so long and still in the hammock. She tottered up the steps, clutching feebly at the post before passing into the house. “Are you coming in, Leonce?” she asked, turning her face toward her husband. “Yes, dear,” he answered, with a glance following a misty puff of smoke. “Just as soon as I have finished my cigar.” Figure 4: Full chapter text. Note that this is short at 847 words, as the median chapter length is 3168 words. 5052 BookWolf summary: L´eonce urges Edna to go to bed, but she is still exhilarated and decides to stay outside in the hammock. L´eonce stays up with her and smokes his cigars. Edna feels defiant towards her husband and resents his control over her life. Eventually tiredness overcomes Edna and she goes to bed. GradeSaver summary: In this chapter Mr. and Mrs. Pontellier participate in a battle of wills. When Mr. Pontellier gets back from the beach, he asks his wife to come inside. She tells him not to wait for her, at which point he becomes irritable and more forcefully tells her to come inside. Mrs. Pontellier resolves not to go in and thinks about how, on another occasion, she would have just done what her husband asked, simply because of inertia. Feeling stubborn and strong, she realizes that she had never taken such a stand against her husband before. Mr. Pontellier then decides to join her outside. He drinks glasses of wine and smokes a number of cigars. After awhile, Mrs. Pontellier feels like she is being awakened from a dream and realizes that she is quite fatigued. It is almost dawn. Finally getting up from the hammock, Mrs. Pontellier asks her husband if he’s going to join her. He replies that he will, after he finishes his cigar. NovelGuide summary: Mr. Pontellier is surprised to find Edna still outside when he returns from escorting Madame Lebrun home. In a small but no doubt significant exchange-considering the events of the evening, and the novel’s title-her distant and unperceiving husband asks her, ”Are you asleep?” Edna, with eyes ”bright and intense,” definitively replies, ”No.” Although he asks her to come in to the house with him, she refuses, and remains outside, exercising her own will. As if trying to outlast his wife, Mr. Pontellier smokes cigar after cigar next to her. Gradually, Edna succumbs to her need for sleep. She feels ”like one who awakens gradually out of a . . . delicious, grotesque, impossible dream . . . .” As described in Chapter VII, then, Edna is once again undergoing what might be called a ”negative” ”awakening”-an ”awakening” to the realities of her present life-as opposed to the ”positive” awakening to new possibilities and her own self-direction, to which the nighttime swim began to expose her. As if to underscore her failure to ”awaken” to herself, the chapter ends with a scene of tables being turned: as Edna goes in, she asks her husband if he will be joining her. He says he will, as soon as he has finished his last cigar. While the narrator does not record Mr. Pontellier’s tone of voice, the comments seem almost scornful, mockingly echoing Edna’s earlier self-assertion. Figure 5: Two reference summaries. Constituent R-wtd: |I thought I should find you in bed , ” ||said her husband , |when he discovered her |lying there . |He had walked up with Madame Lebrun and left her at the house . ||She heard him moving about the room ; |every sound indicating impatience and irritation . |“ This is more than folly , ” |he blurted out . ‘ I ca n’t |permit you to stay out there all night .||But she could not realize |why or how she should have yielded , feeling as she then did . |He smoked two cigars ; |then he went inside and drank another glass of wine . She tottered up the steps , |clutching feebly at the post before passing into the house . |she asked , |turning her face toward her husband . | Sentence R-wtd: | I thought I should find you in bed , ” said her husband , when he discovered her lying there . | | He had walked up with Madame Lebrun and left her at the house . | | His wife did not reply . | | “ This is more than folly , ” he blurted out . | | You must come in the house instantly . ” | | Edna began to feel like one who awakens gradually out of a dream , a delicious , grotesque , impossible dream , to feel again the realities pressing into her soul . | | She tottered up the steps , clutching feebly at the post before passing into the house . | | she asked , turning her face toward her husband . | | “ Just as soon as I have finished my cigar . ” | Figure 6: Two generated summaries. Extracted segments are highlighted in teal, and delineated with |. Constituents are presented with context, whereas sentences extract all text. 5053 Algorithm 1: CONSTITUENTSEGMENTS Input: sentence parse tree PT 1 const subtrees := [ ] // Store constituent subtrees here 2 PT, punct idxs := REMOVEPUNCT(PT) 3 foreach subtree ST in PT do // Find all constituent subtrees. 4 if {NP, VP} in ST.children then 5 STAG = ST /* Ascend as far as possible in tree, before root S tag */ 6 while STAG.parent in {SBAR, S, VP} and STAG.parent != PT.root do 7 STAG := STAG.parent 8 if STAG in {S, SBAR} and not STAG.children.intersection({VP, NP}) then 9 break /* If STAG is a VP, no need to break */ 10 const subtrees := const subtrees + [STAG] 11 else if ISRELATIVECLAUSE(ST) then 12 const subtrees := const subtrees + [STAG] /* Create words list for each constituent subtree. Avoid duplicating words by removing subtrees that we add from the original parse tree. */ 13 foreach subtree ST in const subtrees do 14 WORDS := [ ] // constituent word lists 15 foreach left in ST.left siblings do // Break up clauses of conjunctions. 16 if left = CC then 17 WORDS := WORDS + [left.words] 18 REMOVESUBTREE(PT, left) 19 WORDS := WORDS + [ST.words] 20 REMOVESUBTREE(PT, ST) 21 if PT.words then // Add any remaining words to another segment 22 WORDS := WORDS + [PT.words] 23 WORDS := SPLITNONCONTIGUOUS(WORDS) 24 WORDS := SORTBYINDEX(WORDS) 25 WORDS := INSERTPUNCTUATION(WORDS, punct idxs) 26 WORDS := CONCATENATESHORTSEGMENTS(WORDS) 27 constituents := JOINWORDLISTS(WORDS) Output: constituents c1, ..., cn 5054 Figure 7: An example HIT showing a segmented oracle summary, and two questions. Reading the summary, we see that we should answer ”Present” for both questions. There can be up to 12 questions – we omit here for brevity. Note that in our evaluation, we counted both ”Present” and ”Partially Present” as a match.
2020
453
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5055–5070 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5055 FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization Esin Durmus∗ Cornell University [email protected] He He New York University [email protected] Mona Diab The George Washington University [email protected] Abstract Neural abstractive summarization models are prone to generate content inconsistent with the source document, i.e. unfaithful. Existing automatic metrics do not capture such mistakes effectively. We tackle the problem of evaluating faithfulness of a generated summary given its source document. We first collected human annotations of faithfulness for outputs from numerous models on two datasets. We find that current models exhibit a trade-off between abstractiveness and faithfulness: outputs with less word overlap with the source document are more likely to be unfaithful. Next, we propose an automatic question answering (QA) based metric for faithfulness, FEQA,1 which leverages recent advances in reading comprehension. Given questionanswer pairs generated from the summary, a QA model extracts answers from the document; non-matched answers indicate unfaithful information in the summary. Among metrics based on word overlap, embedding similarity, and learned language understanding models, our QA-based metric has significantly higher correlation with human faithfulness scores, especially on highly abstractive summaries. 1 Introduction Abstractive summarization models must aggregate salient content from the source document(s) and remain faithful, i.e. being factually consistent with information in the source documents. Neural abstractive models are effective at identifying salient content and producing fluent summaries (See et al., 2017; Chen and Bansal, 2018; Gehrmann et al., 2018). However, the generated summary may not always contain faithful information, which is vital for real-world applications. ∗Most of the work is done while the authors were at Amazon Web Services AI. 1Faithfulness Evaluation with Question Answering. Source. The world’s oldest person has died a few weeks after celebrating her 117th birthday. Born on March 5, 1898, the greatgrandmother had lived through two world wars, the invention of the television and the first successful powered aeroplane flight by the wright brothers... Output sentence. The world ’s oldest person has died on March 5, 1898. Table 1: An example of unfaithful output (highlighted in red); generated by Gehrmann et al. (2018). Table 1 shows an example of unfaithful generation. Recent studies have shown that around 30% of generated summaries contain unfaithful information (Cao et al., 2018; Falke et al., 2019a; Kry´sci´nski et al., 2019), especially when the sentence combines content from multiple source sentences (Lebanoff et al., 2019). In this paper, we address the problem of evaluating faithfulness of generated summaries given their source documents. Our key insight is that current models are limited by a trade-off between abstractiveness and faithfulness (Section 2). On a wide range of systems and two datasets with varying levels of abstractiveness (CNN/DM and XSum), we show that the number of unfaithful sentences (annotated by humans) increases as the summary becomes more abstractive (i.e. less overlap with the source document). Next, we investigate a diverse set of existing automatic evaluation metrics such as ROUGE, BERTScore (Zhang et al., 2019a), and learned entailment models. We find that their correlations with human scores of faithfulness drop significantly on highly abstractive summaries, where deeper text understanding beyond surface similarity is needed. Recently, question answering (QA) based automatic metrics have been proposed for evaluating 5056 content selection in summarization (Eyal et al., 2019; Scialom et al., 2019; Chen et al., 2018). Specifically, cloze-style QA is used to evaluate whether important information in the source is recovered from the summary. Inspired by prior work, we use automatically generated QA pairs to represent information in the summary and validate it against the source. Concretely, we generate a set of “groundtruth” QA pairs from the summary, using a learned model that converts a declarative sentence and an answer span to a question (Section 3). Then, off-the-shelf reading comprehension models are evaluated on this set by extracting answer spans from the source documents. High accuracy means that the summary and the source document tend to produce the same answers, thus they are factually consistent with respect to the questions. Compared to prior approaches using cloze tests, our question generation approach enables evaluation with a broader range of QA models and answer types (e.g. extractive and generative), thus maximally taking advantage of progress in QA. Among automatic metrics based on n-gram overlap, word embeddings, and language understanding models (relation extraction and entailment), FEQA has significantly higher correlation with human scores of faithfulness and is the only metric that correlates with human scores on highly abstractive summaries from XSum. 2 The Abstractiveness-Faithfulness Tradeoff While extractive summarizers are largely faithful (since they copy sentences from the source document), current abstractive models struggle to produce faithful summaries without copying. Similar to Lebanoff et al. (2019), we observe that factual errors occur more frequently as models generate more abstractive summary sentences, i.e. less overlap with the source document. In this section, we analyze generated summaries along two dimensions: abstractiveness and faithfulness. Specifically, we aim to answer the following questions: (1) How to quantify abstractiveness of a summary? (2) Is abstractiveness encouraged more by the data or the model? (3) How does being abstractive affect faithfulness? 2.1 Characterizing Abstractiveness of a Summary Abstractive summarization involves rephrasing important content into brief statements, ranging from minor editing of a source sentence to condensing multiple sentences in new words. Given a source document and a summary, we want to measure the level of abstractiveness of the summary. Prior work measures abstractiveness by overlapped text spans between the summary and the document (Grusky et al., 2018; Zhang et al., 2018), or indirectly by the effectiveness of extractive baselines such as LEAD-3 (Nallapati et al., 2016a). While metrics such as extractive fragment coverage and density (Grusky et al., 2018) provide a continuous measure of the level of abstractiveness, we define a more fine-grained categorization of abstractiveness by analyzing how each sentence in the summary is formed. A more abstractive summary sentence aggregates content over a larger chunk of source text; consequently it must copy fewer words to maintain brevity. Therefore, we define the following abstractiveness types based on the amount of copying, e.g. copying a source sentence, one or more partial fragments from the source sentence, and individual words. 1. Sentence extraction: the summary sentence is exactly the same as one of the source sentences. 2. Span extraction: the summary sentence is a substring of one of the source sentences, e.g. “the plane was coming back from the NCAA final” is a span extracted from “the plane was coming back from the NCAA final, according to spokesman John Twork”. 3. Word extraction: the summary sentence is formed by a subset of the tokens in a source sentence, e.g. “Capybara Joejoe has almost 60,000 followers” is a result of deleting words in “Capybara Joejoe who lives in Las Vegas has almost 60,000 followers on Instagram”. 4. Perfect fusionk: the summary sentence is constructed by piecing together the substrings from k (k > 1) source sentences in their original order, e.g. “Capybara Joejoe has almost 60,000 followers” is a perfect fusion of the sentences “Capybara Joejoe lives 5057 in Las vegas.” and “He has almost 60,000 followers on Instagram.” To quantify the amount of abstractiveness of a set of summaries, we label each sentence with the first qualified type in the order above if it fits to one of these categories. We then define the score of each type as the percentage of sentences labeled by that category. The types are ordered by increasing levels of abstractiveness. For example, a summary with higher fusion scores and lower extraction scores is considered more abstractive. In addition, we compute the percentage of novel n-grams that do not appear in the source document as another metric for abstractiveness. 2.2 Is abstractiveness from the model or the data? Equipped with the metrics for abstractiveness above, we want to further understand how abstractive the generated summaries are, and whether the amount of abstractiveness is a result of the training data or the model. Therefore, we compute abstractiveness scores for both the reference summaries and summaries generated from a diverse set of models on two datasets. Datasets. We use the CNN/DailyMail (Hermann et al., 2015; Nallapati et al., 2016b) (CNN/DM) and the XSum (Narayan et al., 2018) datasets, which are both used for single-document news summarization tasks. CNN/DM consists of articles from the CNN and Daily Mail websites, where the summaries comprise highlights in bullet points. XSum consists of BBC articles, where the summaries comprise a single-sentence summary that is written as the opening introductory sentence for the article. XSum was released in particular to promote research on highly abstractive summarization systems. Appendix A provides statistics on CNN/DM and XSum datasets: they contain around 288k and 204k training examples, respectively; CNN/DM includes longer documents and summaries on average. Models. Most neural abstractive summarization models are based on sequence-to-sequence models. They differ in how summarization-specific operations such as copying/extraction are instantiated. We consider 5 prominent models and sumSystems Extractor Encoder Decoder PGC − LSTM LSTM+copy FASTRL sentences LSTM LSTM+copy BOTTOMUP words LSTM LSTM+copy TCONV − CNN+topic CNN BERTSUM − BERT-based Transformer Table 2: Comparison of summarization systems in terms of model architecture. marize their characteristics in Table 2.2 Details of each model can be found in Appendix B. PGC (See et al., 2017) uses the copy mechanism during decoding to allow extraction. FASTRL (Chen and Bansal, 2018) and BOTTOMUP (Gehrmann et al., 2018) decouple extraction and abstractive generation by learning to select sentences and words respectively in the first step; this model has been shown to generate more abstractive summaries compared to PGC. TCONV (Narayan et al., 2018) is initially designed for XSum, thus it does not include any explicit copying/extraction components and focuses on long text representation using convolutional neural networks. BERTSUM (Liu and Lapata, 2019) consists of a BERT-based encoder and a 6-layer Transformer decoder. It incorporates extraction implicitly by first fine-tuning the encoder on the extractive summarization task.3 Results. Our goal is to understand the level of abstractiveness of summaries generated by different models, and the influence on abstractiveness from the training data. Therefore, we analyzed summaries generated by the above models on CNN/DM and XSum. We computed the metrics described in Section 2.1 for both the generated summaries and the reference summaries on the test sets. The results are shown in Table 3. First, CNN/DM is more extractive than XSum. Extraction scores of the reference summaries in CNN/DM shows that almost half of the sentences are formed by deleting words in one of the source sentences. This shows that sentence compression (Knight and Marcu, 2002) is the main technique used for this dataset. In contrast, none of the summary sentences in XSum are formed by copying from a single source sentence. They are generated mostly by paraphrasing the input content, indicated by the large fraction of novel n-grams. 2We use state-of-the-art models proposed for each dataset at the time of writing. 3We use the BERTSUMEXTABS variation. 5058 Dataset Model Extraction Perfect fusion Novel n-grams Sentence Span Word k = 2 k ≥2 n = 1 n = 2 n = 3 CNN/DM Ref 1.39 2.14 9.27 12.92 14.87 12.40 51.03 71.22 PGC 35.45 34.18 15.45 10.90 1.61 0.62 3.33 7.42 FASTRL 8.94 40.06 39.64 4.22 0.84 0.82 10.89 20.74 BOTTOMUP 7.65 17.98 36.75 21.86 6.77 0.86 11.44 22.40 BERTSUM − 13.73 53.40 16.18 4.39 5.23 14.55 23.09 XSum Ref − − − 0.87 0.77 39.20 84.98 96.05 PGC − − − 0.41 3.47 30.08 74.27 91.27 TCONV − − − 0.35 2.31 34.07 80.62 95.12 BERTSUM − − − 0.33 3.15 28.93 75.85 91.41 Table 3: Abstractiveness measures of the models on CNN/DM and XSum datasets. The numbers for Extraction and Perfect fusion indicate % of sentences generated with these strategies. Numbers for novel n-grams indicate % of n-grams that are present in the output sentence but is not present in the source. Second, training data has a larger influence on the abstractiveness of model outputs. Similar to Zhang et al. (2018), we find that models trained on CNN/DM are near-extractive. However, the same models trained on XSum are significantly more abstractive. In fact, none of the models produced any sentence that copies words/phrases from a single source sentence, which is consistent with characteristics of the reference summaries in XSum. The content is more often rephrased in novel words/phrases. However, on both datasets, current models struggle to achieve the same level of abstractiveness as the reference summaries, indicating that additional inductive bias is needed to condense multiple sentences by rephrasing. Third, different models have different ways of doing extraction. When trained on CNN/DM, PGC generates the majority of sentences by copying complete source sentences, whereas FASTRL, BOTTOMUP and BERTSUM do simple compression by deletion more often. In addition, BOTTOMUP does more fusion compared to PGC, FASTRL and BERTSUM. 2.3 Annotating Summary Faithfulness4 To understand faithfulness of current systems and its relation to abstractiveness, we crowd-sourced human annotations on the output of each modeldataset pair described in Section 2.2. Since a nearextractive sentence is very likely to be grammatical and faithful, we focus on more abstractive cases by excluding output sentences that are either an exact copy or a substring of one of the source sentences. A key challenge to reliable human annotation is that the inter-annotator agreement on faithfulness is relatively low (Lebanoff et al., 2019). Our pi4We make our data and code available for reproducibility at: https://github.com/esdurmus/summary-faithfulness. lot study shows that workers often do not agree on incoherent sentences, e.g. whether “Chelsea beat Chelsea 5 −3 in the Premier League on Saturday.” is faithful or not. To standardize the annotation process, we design hierarchical questions to distinguish among failed generation that render a sentence meaningless, low-level grammatical errors that hardly affect semantic understanding, and faithfulness errors that convey incorrect (yet meaningful) information. Figure 1 shows the decision tree of our human annotation steps. We first evaluate the grammaticality of generated sentences (independent from the source document). We show annotators a summary sentence and ask them to choose whether the given sentence is meaningful or nonsensical to determine if the given sentence is structurally and semantically sound. If the annotator can make sense of the sentence, we then ask whether it is grammatical or has minor grammaticality problems which a person can easily correct. Next, for sentences labeled as meaningful in the first step, we ask workers whether they are faithful to the provided source document. In case the worker labels a sentence as unfaithful, we conduct a simple error analysis by asking them to indicate if the sentence contains information that is absent from or conflicting with the source document, which corresponds to hallucination and contradiction errors, respectively. More details about the annotation schema and guidelines are included in the Appendix C. Next, we describe our human evaluation results. 2.3.1 Human Annotation Results For each dataset-model pair described in Section 2.2, we randomly sampled 1000 sentencesource pairs eliminating output sentences that are either an exact copy or substring of a source sen5059 S1: S2: Chelsea and Manchester City are interested in signing Chelsea. A man has died after his car left the road and hit a tree in Surrey, police said. Source for S1: The man, in his 20s, was the only person in the BMW convertible, when the accident happened on the Aldershot road in Guildford. He was traveling east when his car left the road. Police closed the road while investigators were at the scene. Is it meaningful? Is it grammatical? Yes Is it faithful? Contradiction or Hallucination? Yes No Disregard Contradiction Yes Faithful Hallucination Both No Unfaithful Has Minor Issues Figure 1: The decision diagram of our human annotation process. Decision nodes are rectangular and outcome nodes are circular. We show the annotation path of two summary sentences, S1 (green arrows) and S2 (red arrows). S2 is annotated as nonsensical thus is not considered for faithfulness. S1 is annotated as unfaithful due to hallucinated content. Dataset Model Grammaticality Faithfulness Score Agreement Abstractiveness Score Agreement Abstractiveness CNN/DM PGC 93.34 94.04 10.05 70.05 77.28 13.35 FASTRL 83.06 88.05 44.46 68.27 77.45 49.74 BOTTOMUP 85.83 89.19 29.62 64.17 76.04 42.36 BERTSUM 97.53 97.65 29.44 95.03 95.14 39.16 XSum PGC 65.85 81.03 91.10 40.33 71.63 97.06 TCONV 70.85 85.03 94.94 38.96 69.90 98.81 BERTSUM 90.44 91.80 91.50 60.54 70.00 97.60 Table 4: Grammaticality and faithfulness results of human annotations. Score is computed by taking the percentage of annotators that selected “meaningful” and “faithful” for grammaticality and faithfulness annotation tasks, respectively, and then averaging these values across all the examples for the given annotation task. Agreement is computed by taking the percentage of the workers that annotate the majority class for the given example. Abstractiveness is measured by the percentage of novel trigrams in a given sentence. tence. We collected grammaticality annotations for these sentences from 5 annotators. We consider a sentence meaningful if at least 4 out of 5 annotators label it as meaningful in the first stage. We sampled 200 meaningful sentences randomly to collect annotations for faithfulness. Table 4 shows the results of the grammaticality and faithfulness human evaluations. Grammaticality. Overall, outputs from all models are scored high on grammaticality with high inter-annotator agreement. However, on more abstractive summaries (i.e. when trained on XSum), the grammaticality scores drop significantly. One exception is BERTSUM, which maintains good performance on XSum and achieves the highest grammaticality score on both datasets.5 Faithfulness. Near-extractive summaries generated from models trained on CNN/DM have significantly higher faithfulness scores than highly 5Majority of the sentences (> 70%) identified as “meaningful” are annotated as “perfectly grammatical” for each model-dataset pair. abstractive summaries from models trained on XSum. We find that PGC and TCONV has faithfulness errors in more than half of the sentences they generate when trained on XSum. Although BERTSUM generates fewer unfaithful sentences, it still suffers from performance drop on XSum. Interestingly, human agreement on faithfulness is also lower for abstractive summaries from XSum. This suggests that faithfulness errors are harder to catch for humans as well in more abstractive settings. We further observe conflicting information is more common among models trained on CNN/DM while hallucination is more common among models trained on XSum. Table 5 shows examples of meaningful but unfaithful sentences. 3 FEQA: Faithfulness Evaluation with Question Answering Our analysis above shows that the number of unfaithful sentences increases significantly as more abstractive summaries are generated. Thus the key challenge to faithfulness evaluation is to verify highly abstractive sentences against the source document, where surface similarity match5060 Source Output Sentence Domain Category ...However, Winger Ross Wallace (knee) and right-back Steven Reid (calf) could return for the Barclays premier league contest... Dean Marney and Steven Reid could return for the Barclays Premier League match. CNN/DM IC ....Odom also played for the US in the 2004 Athens Olympics, winning the bronze medal. His condition is unknown but well-wishers tweeted their support following the news... NBA basketball player Odom has been found dead in a helicopter crash in the US state of Nevada. XSum H Table 5: Examples of meaningful but unfaithful sentences. Category corresponds to the faithfulness error type for the output sentence. IC: Incorrect Concatenation, H: Hallucination. More examples are provided in Table 11. Summary sentence The home was built for inspection. Masked summary sentence The home was built for [MASK]. [MASK] was built for inspection. 1. Mask key information Generated questions Q1: What was the home built for? Q2: What was built for inspection 2. Generate QA examples from the summary Source …The home which was built for former australian prime minister malcolm fraser and his wife tamie has been opened for inspection just a day after his sudden passing… QA model 3. Evaluate the QA model given the document Answers from the document A1’: former australian prime minister malcolm fraser and his wife A2’: the home Answers from the summary A1: inspection A2: the home Faithfulness = F1 = 0.5 Figure 2: Overview of FEQA. Given a summary sentence and its corresponding source document, we first mask important text spans (e.g. noun phrases, entities) in the summary. Then, we consider each span as the “gold” answer and generate its corresponding question using a learned model. Lastly, a QA model finds answers to these questions in the documents; its performance (e.g. F1 score) against the “gold” answers from the summary is taken as the faithfulness score. ing would fail. If we have a good semantic representation of the sentence abstracting away its surface form (e.g. a list of facts about who did what to whom), we can simply compare the sentence representation to the document representation (e.g. check whether the fact list from the summary is a subset of the list from the document). Ideally, the representation should be domain-general and interpretable for easy error analysis. Motivated by the fast progress in reading comprehension (Chen, 2018; Gao et al., 2018) we propose to use QA pairs as a generic meaning representation of sentences for faithfulness evaluation. Given a summary sentence, we produce a list of questions asking about key information in the sentence and their corresponding answers. To verify this information against the source, we use a QA model to predict answers from the document. The questions and the QA model thus extract comparable information from two pieces of text. More matched answers from the document implies a more faithful summary since the information addressing these questions are consistent between the summary and the source document. Figure 2 shows the workflow of FEQA. Question generation. Prior work (Eyal et al., 2019; Scialom et al., 2019) uses cloze tests as questions by masking entities. To go beyond cloze-style QA and leverage more recent extractive (Rajpurkar et al., 2016) or even generative (Alec et al., 2019) QA models, we generate natural language questions from the summary sentence automatically. Specifically, we mask important text spans in a sentence, including noun phrases extracted by a constituency parser (Kitaev and Klein, 2018) and named entities extracted by the Stanford CoreNLP NER model (Finkel et al., 2005; Manning et al., 2014). We consider each span as the gold answer and generate its corresponding question by fine-tuning a pretrained BART language model (Lewis et al., 2019). To train the question generator, we adapt the QA2D dataset Demszky et al. (2018). The input is a declarative sentence with masked answers and the output is a question. A training example might look like: Input: Sally was born in <m> 1958 </m> Output: When was Sally born ? Since the transformation from declarative sen5061 tences to questions is almost rule-based without much paraphrasing, we expect the model to generalize to various domains. Answer verification. Given the QA pairs generated from a summary sentence, we run off-theshelf QA models to get answers to these questions from the source document. We then measure the average F1 score against the “gold” answers from the summary, which is our faithfulness score for the given sentence. This step does not have any constraint on the QA model. We experiment with the pretrained BERT-base model (Devlin et al., 2019) fine-tuned on SQuAD-1.1 (Rajpurkar et al., 2016) and SQuAD-2.0 (Rajpurkar et al., 2018). Note that in the case of SQuAD-2.0, the model may be able to hypothesize that a question is unanswerable. This case is equivalent to getting an answer incorrect (i.e. unfaithful). 4 Experiments We aim to understand to what extent the proposed QA-based metric and existing metrics capture faithfulness of a summary. Given pairs of documents and summary sentences without reference summaries, we measure correlations between human-annotated faithfulness scores (Section 2.3) and scores computed using each metric described below. 4.1 Automated Metrics for Faithfulness Word overlap-based metrics. A straightforward metric for faithfulness is the word overlap between the summary sentence and the document. We compute ROUGE (R), BLEU (B),6 between the output sentence and each of the source sentences (i.e. taking the source sentence as the reference). We then take the average scores and maximum score across all the source sentences. Since according to our analysis taking the average score consistently has higher correlation, we report only the correlation for the average. Embedding-based metrics. Word embeddings extend word overlap-based metrics beyond exact match. Recently, BERTScore (Zhang et al., 2019b) was proposed to compute the similarity between two sentences using contextual word embeddings from BERT. It has higher correlation 6We report only BLUE-4 since it performed the best for CNN/DM and no variation of BLEU has significant correlation with faithfulness for XSum. with human judgements on image captioning and machine translation than word overlap based metrics. We compute BERTScore (BERTSc) between each source sentence and the summary sentence.7 To get the final score, we experiment with both the average and the maximum scores computed from each source sentence and the summary sentence. We report results using the maximum score since it has better performance. Model-based metrics. In addition to QA, recent work has used relation extraction and textual entailment models for faithfulness evaluation (Falke et al., 2019a; Goodrich et al., 2019). For the relation extraction metric (RE), we compute the precision for the relation triplets extracted from the summary sentence and the source document using an off-the-shelf model (Angeli et al., 2015) from Stanford Open IE. For the textual entailment metric (ENT), we measure whether the summary sentence is entailed by the source using the pretrained ESIM model (Chen et al., 2017) from AllenNLP (Gardner et al., 2018). 4.2 Results Metric Comparison. We first compute scores for each metric on document and output sentence pairs on both CNN/DM and XSum datasets (748 and 286 pairs respectively). We then compute Pearson and Spearman correlation coefficients between scores given by each metric and humanannotated scores. Table 7 includes correlation coefficients for the examples from CNN/DM and XSum, respectively. We observe that for both CNN/DM and XSum, the score of QA-based evaluation has a higher correlation with faithfulness than other metrics. Although word-overlap based metrics are correlated with the faithfulness in more extractive settings (i.e. for CNN/DM), these metrics have no correlation with faithfulness in more abstractive settings (i.e. for XSum). We further notice that all the metrics have significantly lower correlation with human scores for XSum, suggesting that evaluating faithfulness is more difficult in highly abstractive settings; deeper understanding of the source and the summary sentence is necessary here. Consistent with the findings of Falke et al. (2019b), the entailment metric does not have a significant correlation with faithfulness in most cases. These models fail to distinguish entailed (faithful) 7https://github.com/Tiiiger/bert score. 5062 Source Sentence Output Sentence Metric Score Health Inspectorate Wales said Wrexham Maelor Hospital staff were under “considerable pressure” for long periods as ambulances waited outside. A hospital ward in Wrexham has been rated “inadequate” by inspectors after inspectors found patients at risk of harm. Entailment 72.83% The Black Poplar is one of the rarest native trees in the UK, with only 2,500 thought to be left. Northern Ireland’s first trees are among those recognised in the Welsh Architecture Trust’s list of the year’s best trees. BertScore 83.06% Table 6: Unfaithful examples missed by Entailment and BertScore. Score: Output score of the metrics; higher score indicates stronger entailment and similarity respectively. CNN/DM XSum Metric P S P S Word overlap-based R-1 12.02∗∗ 15.86∗∗ −2.57 0.07 R-2 13.25∗∗ 15.99∗∗ −5.78 −8.47 R-L 12.58∗∗ 16.49∗∗ −6.37 −9.68 B-4 12.09∗∗ 11.68∗∗ −6.76 −10.02 Embedding-based BERTSc 11.07∗ 10.70∗ 10.06 10.69 Model-based RE 8.58∗ 5.52 1.62 2.32 ENT 2.80 3.65 −5.62 −3.85 FEQA 32.01∗∗ 28.23∗∗ 26.31∗∗ 21.34∗∗ Table 7: Pearson (P) and Spearman (S) correlation between human-annotated faithfulness scores and the metric scores. *,** indicates p-values < 0.05,< 0.001, respectively. FEQA has the highest correlation with human scores for both CNN/DM and XSum. and non-entailed (unfaithful) summary sentences when both overlap largely with the source document, because models trained on current entailment datasets may rely on simple heuristics such as lexical overlap (McCoy et al., 2019). Similarly, BERTScore tends to give higher scores when there are overlapping concepts between the sentences even though the content is not the same. See Table 6 for examples. Content selection and faithfulness. Current evaluation metrics for summarization produce a single measure of the overall quality of the summary. Typically, the output summary is compared against the reference summary in terms of n-gram overlap. These metrics mainly evaluate content selection, i.e. whether the content of the output is similar to the content of the reference. In contrast, to evaluate faithfulness, we compare the output summary against the source document. One natural question that follows is whether high content matching sufficient for faithfulness. We compute the correlation coefficients between humanannotated faithfulness scores and ROUGE scores computed from the reference and the output sentence. As shown in Table 8, while there is a weak CNN/DM XSum Metric P S P S ROUGE-1 15.31∗∗ 14.92∗∗ 5.44 5.79 ROUGE-2 15.10∗∗ 16.39∗∗ 8.25 6.79 ROUGE-L 13.33∗∗ 13.35∗∗ 4.61 3.97 Table 8: Pearson (P) and Spearman (S) correlation between human-annotated faithfulness scores and ROUGE scores of content selection (computed between the reference and the output sentence). High content selection scores (typical ROUGE score for summarization) do not necessarily imply faithfulness of the summary. correlation between ROUGE scores of content selection and faithfulness on CNN/DM, the correlation is significantly lower than ROUGE scores of faithfulness (i.e. computed between the source and the output sentence). For XSum, there is no significant correlation between the content selection metrics and faithfulness. We provide unfaithful examples with high content selection scores in Appendix D.3. This suggests that content selection and faithfulness should be measured separately as opposed to using a unified score. Analysis and limitations of QA-based evaluation. Table 9 shows examples for a faithful and an unfaithful output sentence and the corresponding QA pairs. Note that the QA system is able to capture common errors such as conflicting information in the output sentence. To measure the reliability of FEQA, we further perform a manual error analysis using 100 randomly sampled QA pairs. We observe that around 94% of generated questions are mostly grammatical and correct given the mask. For 78% of the questions, the QA system has the correct behaviour: it answers the question correctly if the sentence is faithful to the article, otherwise it produces “unanswerable” or an incorrect answer. Majority of the errors of the QA system are because it either didn’t detect unanswerable questions or produces “unanswerable” when there exists an answer (14%). More5063 Source Output Sentence Question OA SA ...However, Winger Ross Wallace (knee) and right-back Steven Reid (calf) could return for the Barclays premier league contest... Dean Marney and Steven Reid could return for the Barclays Premier League match. Who and Steven Reid could return for the premier league match? Dean Marney Ross Wallace ...Miss Bruck, 22, from maybe has not been seen since the early hours of October 26, 2014. She has not been seen for six months... Miss Bruck, 22, from maybe has not been seen for six months. How long has Miss Bruck, 22 from not been seen for? six months six months Table 9: Examples detection results from FEQA. OA:Output Answer, SA:Source Answer. The output sentence in the first example is unfaithful, whereas the one for the second example is faithful. Bold text indicates the span that was masked to generate the question. over, when the article is long, QA system tends to make more mistakes. Especially for more abstractive settings, F1-score penalizes the correct answers when the answer from the article does not exactly match with the gold answer (i.e. “Donald Trump” vs. “the President of the United States Donald Trump”) (16%). 5 Related Work Problems in current neural generation models. Since the beginning of neural text generation, problems with repetition and generic responses have received lots of attention (Sordoni et al., 2015; Li et al., 2016; Holtzman et al., 2019). Recently, more work has focused on semantic errors in model outputs, such as adequacy in machine translation (Tu et al., 2017), faithfulness in summarization (Cao et al., 2018), and consistency in dialogue (Li et al., 2019). Our analysis on the abstractiveness-faithfulness tradeoff reveals additional limitation of current models, and suggests that we need new inductive bias on how to summarize beyond copying. QA as a proxy. Question answering is a broad format that subsumes many tasks (Gardner et al., 2019). To the best of our knowledge, Mani et al. (1999) first use QA as an extrinsic evaluation for summarization: A good summary should answer key questions a reader might have about an article. Later, QA is incorporated in human evaluation where one person writes questions and another person answers them based on the summary (Clarke and Lapata, 2010; Liu and Lapata, 2019). The closest to our work are recent efforts in automating this protocol, including rule-based approaches (Chen et al., 2018) and cloze-test QA (Eyal et al., 2019; Scialom et al., 2019). Our work is the first to apply automated question generation. While we focus on faithfulness, our QAbased metric is applicable to semantic comparison between any two pieces of text. Automated evaluation for NLG. Automated NLG evaluation is challenging as it often requires deep understanding of the text. Although metrics based on word overlap with the reference text are commonly used, it is widely known that they do not correlate well with human judgments (Novikova et al., 2017; Liu et al., 2016). Recently, more work has focused on model-based evaluation using discriminators (Lowe et al., 2017; Hashimoto et al., 2019), entailment models (Falke et al., 2019a), information extraction (Wiseman et al., 2017; Goodrich et al., 2019), and question answering (Chen et al., 2018; Eyal et al., 2019). 6 Conclusion We investigate the faithfulness problem in neural abstractive summarization and propose a QAbased metric for evaluating summary faithfulness. We show that current models suffer from an inherent trade-off between abstractiveness and faithfulness. They are good at copying important source content, but tend to concatenate unrelated spans and hallucinate details when generating more abstractive sentences. A new inductive bias or additional supervision is needed for learning reliable models. While our QA-based metric correlates better with human judgment and is useful for model development, it is limited by the quality of the QA model. The final evaluation should still rely on human annotation or human-in-the-loop methods (Chaganty et al., 2018). Acknowledgement We would like to thank Faisal Ladhak, the Lex and Comprehend groups at Amazon Web Services AI, and the anonymous reviewers for their feedback on this work. 5064 References R. Alec, W. Jeff, C. Rewon, L. David, A. Dario, and S. Ilya. 2019. Language models are unsupervised multitask learners. Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 344–354, Beijing, China. Association for Computational Linguistics. Z. Cao, F. Wei, W. Li, and S. Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. In Association for the Advancement of Artificial Intelligence (AAAI). A. Chaganty, S. Mussmann, and P. Liang. 2018. The price of debiasing automatic metrics in natural language evaluation. In Association for Computational Linguistics (ACL). Danqi Chen. 2018. Neural Reading Comprehension and Beyond. Ph.D. thesis, Stanford University. P. Chen, F. Wu, T. Wang, and W. Ding. 2018. A semantic QA-based approach for text summarization evaluation. In Association for the Advancement of Artificial Intelligence (AAAI). Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668, Vancouver, Canada. Association for Computational Linguistics. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In ACL. J. Clarke and M. Lapata. 2010. Discourse constraints for document compression. Computational Linguistics, 36. Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. ArXiv, abs/1809.02922. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. M. Eyal, T. Baumel, and M. Elhadad. 2019. Question answering as an automatic evaluation metric for news article summarization. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL). T. Falke, L. F. R. Ribeiro, P. A. Utama, I. Dagan, and I. Gurevych. 2019a. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Association for Computational Linguistics (ACL). Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019b. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214–2220, Florence, Italy. Association for Computational Linguistics. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 363–370, Stroudsburg, PA, USA. Association for Computational Linguistics. Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 2–7, Melbourne, Australia. Association for Computational Linguistics. M. Gardner, J. Berant, H. Hajishirzi, A. Talmor, and S. Min. 2019. Question answering is a format; when is it useful? arXiv preprint arXiv:1909.11291. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1– 6, Melbourne, Australia. Association for Computational Linguistics. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics. B. Goodrich, V. Rao, P. J. Liu, and M. Saleh. 2019. Assessing the factual accuracy of generated text. In International Conference on Knowledge Discovery and Data Mining (KDD). M. Grusky, M. Naaman, , and Y. Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In North American Association for Computational Linguistics (NAACL). T. Hashimoto, H. Zhang, and P. Liang. 2019. Unifying human and statistical evaluation for natural language generation. In North American Association for Computational Linguistics (NAACL). 5065 Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 1693–1701, Cambridge, MA, USA. MIT Press. A. Holtzman, J. Buys, M. Forbes, and Y. Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics. K. Knight and D. Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artifical Intelligence, 139:91– 107. W. Kry´sci´nski, N. S. Keskar, B. McCann, C. Xiong, and R. Socher. 2019. Neural text summarization: A critical evaluation. In Empirical Methods in Natural Language Processing (EMNLP). L. Lebanoff, J. Muchovej, F. Dernoncourt, D. S. Kim, S. Kim, W. Chang, and F. Liu. 2019. Analyzing sentence fusion in abstractive summarization. arXiv preprint arXiv:1910.00203. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. ArXiv, abs/1910.13461. J. Li, M. Galley, C. Brockett, J. Gao, and W. B. Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL), pages 110–119. M. Li, S. Roller, I. Kulikov, S. Welleck, Y. Boureau, K. Cho, and J. Weston. 2019. Don’t say that! making inconsistent dialogue unlikely with unlikelihood training. arXiv preprint arXiv:1911.03860. C. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Empirical Methods in Natural Language Processing (EMNLP). Y. Liu and M. Lapata. 2019. Text summarization with pretrained encoders. In Empirical Methods in Natural Language Processing (EMNLP). R. Lowe, M. Noseworthy, I. V. Serban, N. AngelardGontier, Y. Bengio, and J. Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In Association for Computational Linguistics (ACL). I. Mani, G. Klein, L. Hirschman, T. Firmin, D. House, and B. Sundheim. 1999. The TIPSTER SUMMAC text summarization evaluation. In European Association for Computational Linguistics (EACL). Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, Baltimore, Maryland. Association for Computational Linguistics. R. T. McCoy, E. Pavlick, and T. Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2016a. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In AAAI. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, C¸ a˘glar Gu`I‡lc¸ehre, and Bing Xiang. 2016b. Abstractive text summarization using sequence-tosequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. J. Novikova, O. Duˇsek, A. C. Curry, and V. Rieser. 2017. Why we need new evaluation metrics for NLG. In Empirical Methods in Natural Language Processing (EMNLP). Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784– 789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. 5066 T. Scialom, S. Lamprier, B. Piwowarski, and J. Staiano. 2019. Answers unite! unsupervised metrics for reinforced summarization models. In Empirical Methods in Natural Language Processing (EMNLP). Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In ACL. A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J. Nie, J. Gao, and B. Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In North American Association for Computational Linguistics (NAACL). Z. Tu, Y. Liu, L. Shang, X. Liu, and H. Li. 2017. Neural machine translation with reconstruction. In Association for the Advancement of Artificial Intelligence (AAAI). Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76– 85, Berlin, Germany. Association for Computational Linguistics. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. S. Wiseman, S. M. Shieber, and A. M. Rush. 2017. Challenges in data-to-document generation. In Empirical Methods in Natural Language Processing (EMNLP). F. Zhang, J. Yao, and R. Yan1. 2018. On the abstractiveness of neural document summarization. In Empirical Methods in Natural Language Processing (EMNLP). T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi. 2019a. BERTSCORE: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019b. Bertscore: Evaluating text generation with bert. ArXiv, abs/1904.09675. 5067 A Summarization Datasets All of our experiments are run on the CNN/DM and XSum datasets. We show basic statistics of the two datasets in Table 10. CNN/DM XSum # Training Documents 287,227 204,045 # Validation Documents 13,368 11,332 # Test Documents 11,490 11,334 Document: avg # of tokens 781.00 431.07 Document: avg # of sents. 40.00 33.00 Summary: avg # tokens 56.00 23.26 Summary: avg # of sents. 3.75 1.00 Table 10: Statistics of CNN/DM and XSum datasets. B Summarization Models The characteristics of each model used in our experiments are detailed below. Pointer Generator Model with Coverage (PGC) (See et al., 2017) uses the copy mechanism (Vinyals et al., 2015) to allow copying words from the source. The adapted coverage mechanism (Tu et al., 2016) is incorporated to alleviate repetition by keeping track of source words that have been summarized. This copy mechanism is widely adopted by subsequent models. Fast Abstractive Summarization with Reinforce (FASTRL) (Chen and Bansal, 2018) first uses an extractor agent to select salient sentences from the document, then condenses the extracted sentences using the Pointer-Generator summarizer. Bottom-up Summarization Model (BOTTOMUP) (Gehrmann et al., 2018) first selects words from the source document that are likely to appear in the summary, then generates using the Pointer-Generator model, where the copying mechanism is constrained to the previously selected words. It improves upon PGC by explicitly learning the selector to avoid copying long text spans. Topic-aware Convolutional Sequence-to-Sequence model (TCONVS2S) (Narayan et al., 2018) is a convolutional neural network-based model conditioned on the topics of the article. It is shown to be effective in capturing long-range dependencies in the documents. BERT-based model (BERTSUM) (Liu and Lapata, 2019) is a two-stage fine-tuning approach where the BERT-based encoder is first fine-tuned on the extractive summarization task and then on the abstractive sumarization task with the decoder (denoted as BERTSUMEXTABS in the original paper). C Details of Human Annotations C.1 Grammaticality Annotation Guidelines For grammaticality annotation, we present only the output sentence to the workers. We collect annotations from 5 workers for both of the tasks. For this task, given the output sentence, we provide workers the following guidelines: 1. First select whether the given sentence is “Nonsensical” or “Makes sense”. 2. If the given text is not a complete sentence, mark it as “Nonsensical”. 3. If you can understand the meaning of the sentence, despite grammaticality errors, and you are able to makes sense of it, select “Makes sense”. 4. If you did not select “Nonsensical”, evaluate whether the sentence is “Grammatical” or “Has Minor Grammaticality Issues”. C.2 Faithfulness Annotation Guidelines We present workers both the source and the output sentence and provide the following guidelines: 1. Read the sentence and the source fully. 2. If the information conveyed by the sentence is not expressed in the source, select “unfaithful”. 3. Avoid using general knowledge, and check if the sentence is consistent with the source. 4. If you select “unfaithful”, for the second part, select whether the information expressed by the sentence is not contained in the source or conflicting with the source. D Additional Analysis D.1 Examples for nonsensical sentences • Sandals, £34, office.co.uk, luluguinness.com. (generated by PGC for CNN/DM) 5068 Source Output Sentence Category ...Although her due date has not officially been confirmed, the duchess of Cambridge told wellwishers at a charity event last month: I am due mid-April, to the end of April... The duchess of Cambridge told wellwishers at a charity event last month: “The duke’s intention is to be at the commemorations”. IC ...Carragher spoke to a local TV starton during his time in Girona. Carragher posted a picture on his Instagram account of the opening ceremony... Carragher posted a picture on his son play in the famous youth tournament. IC A body was found by a member of the public on private land near Leighton, about 10 miles (16.09km) away from the centre of Shrewsbury, on Monday. Mr Bebbington’s family has been informed, West Mercia Police confirmed. The death of a man whose body was found in a river in Cumbria has been identified as murder. H The incident happened near Dr. Gray’s hospital shortly after 10:00. The man was taken to the hospital with what police said were serious but not life-threatening injuries. The a96 was closed in the area for several hours, but it has since reopened. A man has been taken to hospital after he was hit by a lorry in Dumfries. H Table 11: Examples of meaningful but unfaithful sentences. Category corresponds to the category of unfaithfulness error for the output sentence. IC: Incorrect Concatenation, H: Hallucination. Reference Output Sentence ... University of Nebraska researcher has revealed why stress is bad for you. Limited periods of stress are good, as they release cortisol... University of Nebraska researcher has revealed why stress is bad for you, stimulating your body to produce an important hormone called cortisol. ...Indian air force and Nepalese army medical team launch rescue mission to bring injured people to hospitals in Kathmandu. Forshani Tamang’s family carried her for four hours to reach help after she was wounded when their home was destroyed... Indian air crew and Nepalese army medical team were killed in Nepal’s Sindhupalchok quake. Table 12: Examples of unfaithful sentence with high content overlap (computed by ROUGE-L) with the reference. 5069 • He says easter triduum is a progression , although the word itself – triduum. (generated by FASTRL for CNN/DM) • Chelsea beat Chelsea 5 −3 in the Premier League on Saturday. (generated by FASTRL for CNN/DM) • 12 years a slave actress Lupita Woodley and oily vegetables. (generated by BOTTOMUP for CNN/DM) • A judge in Japan has ordered a judge to order a woman who has absconded from Japan to Japan. (generated by PGC for XSum) • Stoke City moved up to third in the Premier League with victory over Stoke City at Stoke. (generated by TCONV for XSum) • Johnny Depp’s management group is suing his management group over his “lavish lifestyle”. (generated by BERTSUM for XSum) D.2 Examples for meaningful but unfaithful sentences Table 11 includes examples that are annotated as meaningful but unfaithful. First three examples are picked from the models trained on CNN/DM, and last three are from the models trained on XSum. We observe that majority of sentences with faithfulness errors for CNN/DM dataset are generated by incorrect concatenation (IC). The models fuse two sentences from the source and generate a new sentence that is not consistent with the context of the source. Within this category, however, the models make a wide-range of mistakes such as copying the wrong entity, date, and quote. For XSum, the faithfulness mistakes are mostly hallucinations. Models tend to hallucinate information (e.g. entities, events, date) that is not present in the source. D.3 Examples for sentences with high content overlap with reference that are unfaithful Although current summarization models are evaluated with respect to the content overlap between the reference and the output, these metrics do not necessarily provide any guarantees for the faithfulness of the output. Table 12 includes examples with similar content overlap scores as the faithful examples but are unfaithful. We can see that although the output sentences include similar words and refer to similar topics, they include hallucinations and inaccurate information. D.4 Limitations of the datasets Since CNN/DM and XSum datasets are automatically crawled, we find that there is noise in the data. For example, source documents can include phrases such as “click here for the latest news”. We further observe that reference can carry information that is not in the source document since some of these one sentence highlights are written using additional world knowledge. Table 13 shows an example where the reference is unfaithful since it includes information that is not in the source (i.e. the fact that Ms. Wood’s first name is Leanne and she is Plaid Cymru leader.). 5070 Source Reference Ms Wood blamed the Conservatives in particular for claiming the SNP posed a threat to the future of the UK. She claimed ”progressive” parties like hers were offering a “collaborative” alternative to “combative” politics. “This election presents an opportunity for harmonious co-existence between our nations,” she said. Ms Wood’s comments followed Conservative claims that Labour dependence on support from the SNP to form a government after the election on 7 May would threaten the break-up of the UK. Campaigning in south Wales on Monday, she said: “The parties advocating progressive, inclusive nonpartisan cooperation in this election are not those who claim to cherish the political union above all others, but the national parties of Wales and Scotland. Along with the Greens in England, our parties have provided people across these islands with a collaborative alternative to the traditional combative Westminster politics.”. Ms Wood added that she had received “hundreds” of supportive messages from people in England following the televised debates. Plaid Cymru leader Leanne Wood has accused rival parties of ”dangerous and divisive rhetoric” in a ”desperate” attempt to win votes. Table 13: Example where reference includes information that is not in the source.
2020
454
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5071–5081 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5071 Fact-based Content Weighting for Evaluating Abstractive Summarisation Xinnuo Xu†, Ondˇrej Duˇsek‡, Jingyi Li†, Verena Rieser† and Ioannis Konstas† †The Interaction Lab, MACS, Heriot-Watt University, Edinburgh, UK ‡Charles University, Faculty of Mathematics and Physics, Prague, Czechia xx6, jl125, v.t.rieser, [email protected] [email protected] Abstract Abstractive summarisation is notoriously hard to evaluate since standard word-overlap-based metrics are biased towards specific words in the human reference. We introduce a new evaluation metric which abstracts away from the word-level and instead is based on factlevel content weighting, i.e. relating the facts of the document to the facts of the summary. We follow the assumption that a good summary will reflect all relevant facts, i.e. the ones present in the ground truth (human-generated reference summary). We confirm this hypothesis by showing that our weightings are highly correlated to human perception and compare favourably to the recent manual highlightbased metric of Hardy et al. (2019). 1 Introduction Text summarisation compresses long textual documents into short summaries while retaining the most important information from the source. In contrast to extractive summarisation, which directly copies the most relevant fragments, abstractive summarization retains the most important facts and expresses them via paraphrasing, aggregating and even inferring new facts. Recent advances in neural decoders led to a number of single-document summarisation systems that exhibit some level of abstraction in their outputs, usually in the simplest form of paraphrasing (See et al. (2017); Narayan et al. (2018); Liu and Lapata (2019), inter alia). Evaluating abstractive summarisation remains an open challenge (Schluter, 2017; Kry´sci´nski et al., 2019): First, decoders are amenable to pathogeniessuch as hallucination and/or omission of important information, which are hard to capture using existing evaluation metrics (Cao et al., 2018; Rohrbach et al., 2018; Duˇsek et al., 2020). Second, most datasets used for abstractive summarisation only contain a single reference summary, e.g. (Narayan et al., 2018; V¨olske et al., 2017), which most existing automatic metrics evaluate against, e.g. ROUGE using exact n-gram overlap (Lin, 2004), and thus tend to downvote paraphrases. We propose a new evaluation metric based on content weighting, where we abstract away from the particular surface form of the target summary, but represent it as facts using Semantic Role Labelling (SRL). In this way, we aim to better capture the semantic correctness of a summary, i.e. be more sensitive to hallucinations and omissions.1 In particular, we weight the facts present in the source document according to the facts selected by a human-written summary. This alignment is conducted using contextual, rather than token-level, embeddings, e.g., BERT (Devlin et al., 2019). For evaluation, we measure whether an automatically generated summary is able to capture the same facts as the target. We also show that the computed weights correlate well with human perception. Our code is available at https://github. com/XinnuoXu/CorrFA_for_Summarizaion. 2 Related Work The problem of reference bias has been addressed in several ways. First, metrics based on tokenlevel or wider context embedding similarities which aim to better capture paraphrases but remain largely word-oriented, e.g. (Sun and Nenkova, 2019; Zhang et al., 2019; Zhao et al., 2019; Clark et al., 2019). Goodrich et al. (2019) come close to our approach by using entity and relation extraction, but their approach is limited to texts that lend themselves to be represented by RDF triples. An alternative is manual evaluation against the source document. This entails selecting content either using domain experts, e.g., the PYRAMID method (Nenkova and Passonneau, 2004), factoids 1Note that we do not make any claims about fluency, which we assume is less of a problem for neural text generation. 5072 FACT1-tweet: [ARG0: the queen] has [V: tweeted] [ARG1: her thanks] [ARG2: to people who sent her 90th birthday messages on social media] FACT2-send: the queen has tweeted her thanks to [ARG0: people] [RARG0: who] [V: sent] [ARG1: her 90th birthday messages] [ARGM-LOC on social media] FACT1-tweet ARG0 V ARG1 ARG2 the queen had tweeted her thanks SRL Propositions Tree MR ARG0 V ARG1 ARGM-LOC people R-ARG0 who sent her 90th birthday messages on social media FACT2-send Figure 1: List of SRL propositions and corresponding tree MR with two facts for the sentence “The queen has tweeted her thanks to people who sent her 90th birthday messages on social media”. (Teufel and van Halteren, 2004), or via crowdsourcing (Shapira et al., 2019; Hardy et al., 2019). However, evaluation based on a small human-labelled test set is noisy, time consuming, and costly. Xenouleas et al. (2019) propose a referenceless metric, which only checks properties of the summary, not its relation to the original document. Sun and Nenkova (2019) compare average token and sentence ELMo embeddings against the document and claim good (system-level) correlations. Another option to avoid reference bias is question-based evaluation, either elicited manually (Clarke and Lapata, 2010; Narayan et al., 2018) or automatically (Scialom et al., 2019). However, it requires reference summaries as base for generating questions, thus only checking the summary contents indirectly. 3 Content Weighting 3.1 Fact Representation We represent facts in a sentence by adapting SRL (Palmer et al., 2005), which roughly captures “who did what to whom” in terms of predicates and their arguments. Given a list of parsed propositions for a sentence,2 each predicate-argument structure is considered as one separate fact, where the predicate stands for the event and its arguments are mapped to actors, recipients, time, place, etc (see Fig. 1). Following a simple observation that arguments can function as separate predicates themselves, we construct a hierarchical tree structure for the whole sentence. We create the tree meaning representa2We use the SRL implementation of He et al. (2018) found in https://allennlp.org with 86.49 test F1 on the Ontonotes 5.0 dataset. tion (MR) from the list of facts by choosing the fact with the largest coverage as the root and recursively build sub-trees by replacing arguments with their corresponding sub-facts (ARG2 in FACT1 is replaced by FACT2 in Fig. 1).3 3.2 Automatic Content Weighting We compute argument and fact weights by measuring the similarity of facts/arguments in the original document and the target summary based on their BERT word embeddings (for content words only) and their distance in the tree MR. We denote tokens of a document D and its summary S as tD =  tD 1 , tD 2 , · · · tD n and tS =  tS 1 , tS 2 , · · · tS m . To get their corresponding contextual embeddings eD k and eS k , we concatenate the two texts,4 feed them into a pre-trained BERT model (Devlin et al., 2019) and take the contextualized embedding output from its last Transformer layer. Argument-based weighting: We first represent the summary and the document as two sequences of leaf arguments5  AD 1 , AD 2 , · · · AD N and  AS 1 , AS 2 , · · · AS M respectively, and weight the i-th leaf argument in the document as: wa i = avg j=1...M cosdist  ED i , ES j  (1) i.e. the average embedding cosine distance to all arguments in the summary. Argument embeddings ED i and ES j are average embeddings of contentword tokens belonging to the arguments:6 E∗ i = avg k∈A∗ i ,k̸∈stops e∗ k (2) ∗∈{D, S}, “stops” denotes a list of stopwords. Fact-based weighting: We can represent the summary and the document as two sequences of facts  F D 1 , F D 2 , · · · F D N′ and  F S 1 , F S 2 , · · · F S M′ , and weight the i-th fact in the document by its average distance to facts in the summary: wf i = avg j∈1...M′df ij (3) 3We avoid using sentence-level MRs such as AMR (Banarescu et al., 2013), since current state-of-the-art performance of parsers is far behind compared to the simpler SRL task. 4By concatenating, the information in each text can be embedded in each other through self-attention. This is useful since the summary sometimes contains additional and/or common-sense knowledge not captured in the document. 5For example, in Fig. 1, ARG0, V, ARG1 in FACT1, and all the arguments in FACT2 are leaf arguments in the sentence, whereas ARG2 in FACT1 is not. 6For example, in Fig. 1, “her” and “thanks” are two tokens directly attached to the argument ARG1 of FACT1. Thus, the embedding for ARG1 of FACT1 is the average embedding of these two tokens. 5073 The fact-level distance df ij is defined on top of argument weighting: df ij = avg AD l ∈F D i ,AS k ∈F S j βilβjk⌊cosdist  ED l , ES k  ⌋>γ (4) It is computed as the average cosine distance over embeddings of all leaf arguments in the subtrees of fact F D i in the document and fact F S j in the summary, which is (1) filtered by a threshold γ to discard argument pairs with weak semantic relation7 and (2) weighted by MR tree distances of arguments to facts: βil = 1 √ treedist(Fi,Al).8 4 Content-weighting-based Metrics We now use these weights to introduce two metrics: Corr-F (fact-level) and Corr-A (argumentlevel). Let wf gold and wf cand denote the fact-level content weights calculated using the procedure from Section 3 based on human-reference and system-generated summaries, respectively. Similarly, wa gold, and wa cand denote the argument-level weights. Corr-F is then the Pearson Correlation Coefficient (PCC) between wf gold and wf cand. Corr-A is PCC between wa gold and wa cand. In other words, Corr-F and Corr-A indicate whether the generated summary focuses on the informative main points in the document (i.e. the same points as the reference summary), on two different levels of granularity. 5 Metrics Evaluation We validate our Corr-F and Corr-A metrics by collecting human judgements. In the following, we (1) collect content highlights from human judges using the Amazon Mechanical Turk platform9 and calculate manual content weighting based on them, (2) calculate correlations of the manual content weights with our automatic content weights, (3) compare our metrics against existing referencebased ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2019), as well as the referenceless manual HROUGE score (Hardy et al., 2019).10 We use the extreme summarisation dataset (XSum; Narayan et al., 2018), which consists of 7In this work, we set the threshold to 0.6. 8E.g., in Fig. 1, treedist(FACT1, “ARG1: her thanks”) = 1, treedist(FACT1, “ARG0: people”) = 2, treedist(FACT2, “ARG0: people”) = 1. 9Using the interface from https://github.com/ sheffieldnlp/highres. 10Note that Corr-F/A are calculated with content weighting with respect to the reference. Therefore, strictly speaking, Corr-F/A are different to all existing metrics but still share some properties with them. We show the correlation between Corr-F/A and existing metrics in terms of relative system ranking, rather than a head-to-head metrics comparison. BBC articles and accompanying single-sentence summaries, i.e. sub-headlines of the original articles, professionally written by the authors of the articles. Due to the abstractive nature of the summaries, factoid content selection on phrase level is required beyond sentence-level extraction or tokenlevel matching, making this dataset a popular test bed for abstractive summarisation. We use the outputs of three recent abstractive summarization systems as evaluation targets for our metrics: (i) the Pointer-Generator model (PTGEN; See et al., 2017); (ii) the Topic-aware Convolutional Sequence-to-Sequence model (TCONVS2S; Narayan et al., 2018) and (iii) the abstractive summarization model using pretrained BERT encoders (BERTSUMABS; Liu and Lapata, 2019).11 5.1 Manual Annotation Collection Manual Content Highlighting: By extending the framework of Hardy et al. (2019), we collect manual content highlights on fact and argument levels, where we present human judges with the source document and the gold summary, with one fact/argument typeset in bold. The judges are required to select phrases or sentences in the document that support the bolded fact/argument (see Figure 4-9 in Appendix B). In both cases, judges are allowed to select parts of the text with any granularity. We limit the number of allowed continuous chunks and the maximum number of words to encourage highlights of fact/argument level.12 We employ 3 judges per document in both cases. We use the same 50 articles and gold summaries sampled from the XSum test set as Hardy et al. (2019). Manual Content Weighting Calculation: Argument Level: Given a document D and a summary S, we define the weight of each token tD k with respect to a summary argument AS j as: wt kj = NumH tD k , AS j  NumA AS j  (5) NumH(tD k , AS j ) denotes the number of times token tk was selected and NumA(AS j ) is the total number of annotators who were shown AS j bolded. We use token weights to compute manual argument-level weights wa man (parallel to Eq. 1): wa man,i = avg j=1...M avg tD k ∈AD i wt kj (6) 11For the first two, we use candidate summaries provided by the authors. For the third, we generated summaries by training a model with code and data offered by the authors. 12We allow 4 chunks of max. 50 words total for fact-level and 5 chunks of max. 20 words for argument-level annotation. 5074 Granularity PCC-W PCC-S Argument-level 0.3326 0.4762 Fact-level 0.3129 0.7291 Table 1: Correlation of automatic content weighting and selection with human highlights. Fact Level: By adapting Eq. 5, we calculate a weight wt ki for each token in document D w.r.t. bolded fact F S i in the summary S. The weight wf ij between fact F D i in the document and F S j in its summary is calculated using Eq. 6. We use Eq. 3 to get the manual fact content weighting wf man. 5.2 Agreement with Manual Weighting Correlation: We evaluate how automatic content weighting wa gold and wf gold correlates with manual content weighting wa man and wf man. Using the Pearson Correlation Coefficient directly over the content weights (PCC-W), we evaluate the correlation between content weights assigned by human judges and automatically calculated weights – PCC(w∗ gold, w∗ man). As a more extreme form of weighting, we compute the correlation between content “selected” (i.e. ignoring computed weights) by human judges and the automatic mechanism (PCC-S); we set the value to 1 if the weight is over 0, meaning the fact/argument is selected. While content-weighting correlations are just moderate, content-selection correlations are strong, especially the fact-based (Table 1). In other words, the automatic method attends to facts human judges consider important, but weighs them differently. System-level Agreement: We check systemlevel agreement on Corr-F and Corr-A metrics when using automatic vs. manual content weighting (Table 2): We compute fact/argument-level content weights w∗ cand for each system (cf. Section 4). We then calculate Corr-F and Corr-A of w∗ cand against both w∗ man (manual weighting) and w∗ gold (automatic weighting) on the 50 articles with human annotation introduced in Section 5.1. The Corr-F metric shows the same system-level ordering for both manual and automatic content weighting. Furthermore, both manual and automatic content weighting agree that TCONVS2S and PTGEN achieve similar performance but are strongly outperformed by BERTSUMABS. 5.3 Comparison to existing metrics Corr-F/A vs. referenceless metrics: HROUGE score (Hardy et al., 2019) is a content-weightingbased referenceless evaluation metric. Unlike our Model Corr-F Corr-A Manual content weighting – w∗ cand vs. w∗ man TCONVS2S 0.2274 0.2464 PTGEN 0.2180 0.2433 BERTSUMABS 0.2508 0.2662 Automatic content weighting – w∗ cand vs. w∗ gold TCONVS2S 0.6203 0.6280 PTGEN 0.5822 0.5727 BERTSUMABS 0.6714 0.6533 Table 2: System-level scores for manual and automatic content weighting on 50 human-annotated documents. Model Unigram Bigram Pre Rec Pre Rec TCONVS2S 7.64 5.37 3.16 2.08 PTGEN 7.62 6.42 3.25 2.61 BERTSUMABS 8.24 6.25 3.29 2.41 Table 3: HROUGE on 50 human-annotated documents. approach, it operates on token level and is entirely based on manual annotation. The evaluation results in Table 3 show that Corr-F/A’s ranking is identical to HROUGE’s unigram and bigram precision, with Corr-F also assigning similar proportions.13 Corr-F/A vs. reference-based metrics: ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2019) are both reference-based metrics, which compute a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches as used in ROUGE, BERTScore computes token similarity using contextual embeddings. Comparing to ROUGE and BERTScore on the full XSum test set (see Table 4) shows full agreement on system ordering for both metrics. 6 Discussion 6.1 Error Analysis We now provide examples demonstrating the strength and weaknesses of Corr-F/A by analysing system outputs where BERTScore and Corr-F/A demonstrate different ordering. Strengths: (1) Corr-F/A are more sensitive to content-level hallucination than BERTScore. Summaries with facts/arguments never mentioned in the original document get much lower Corr-F/A scores than summaries with content that appears in the document verbatim or as a paraphrase. Example 1 in Table 5 shows Corr-F/A penalizing the incorrect fact “to become the next president” generated by BERTSUMABS, while giving higher scores to TCONVS2S which paraphrased “abdicate” with 13We computed HROUGE for BERTSUMABS using https://github.com/sheffieldnlp/highres. 5075 Model CorrF/A CorrF/A(L) ROUGE BERTScore Corr-F Corr-A Corr-F Corr-A R1 R2 RL P R F1 TCONVS2S 0.616 0.636 0.700 0.650 31.89 11.54 25.75 0.613 0.573 0.591 PTGEN 0.596 0.623 0.664 0.620 29.70 9.21 23.24 0.577 0.566 0.570 BERTSUMABS 0.655 0.683 0.715 0.670 38.53 16.09 30.80 0.628 0.616 0.621 Table 4: Summarisation models evaluated using Corr-F/A on full test set, with ROUGE and BERTScore scores. Note that Corr-F/A(L) is Corr-F/A calculated using a lower-performing SRL tool (He et al., 2017, see Section 6.2). # Source Summary Corr-F Corr-A BS-F1 1 Ground truth Japan’s emperor Akihito has expressed his desire to abdicate in the next few years, public broadcaster NHK reports. BERTSUMABS Japan’s emperor Akihito is considering whether to become the next president of the country, reports say. 0.68 0.68 0.67 TCONVS2S Japan’s emperor Akihito has announced that he will step down in the Japanese capital, Tokyo. 0.81 0.71 0.67 2 Ground truth Dick Advocaat has resigned as Sunderland boss, with the team yet to win in the Premier League this season. BERTSUMABS Sunderland manager Dick Advocaat has left the club by mutual consent after only eight games in charge. 0.60 0.66 0.65 PTGEN Sunderland have appointed former boss Dick Advocaat as manager at the end of the season to sign a new deal. 0.26 0.34 0.65 3 Ground truth A Chinese space capsule carrying three crew members has returned to Earth following a 13-day mission. BERTSUMABS China has successfully landed its first ever space flight, in a move hailed as a “historic moment”. 0.56 0.67 0.53 TCONVS2S China has successfully launched the first ever robotic mission to date for the first time in its history. 0.85 0.68 0.51 4 Ground truth A council plans to employ its own staff to help young people with mental health problems. BERTSUMABS A new academy to train people with mental health problems is to be set up in West Berkshire. 0.82 0.68 0.64 TCONVS2S A new academy for children with mental health problems is being launched in West Berkshire. 0.73 0.56 0.67 Table 5: Examples of system outputs where Corr-F/A and BERTScore-F1 disagree on system ordering. “step down”. (2) Corr-F/A better identify paraphrases, especially those containing extra content mentioned in the document but not in the groundtruth summary. Example 2 in Table 5 shows that Corr-F/A do not penalize BERTSUMABS for generating the argument “after only eight games in charge”, which is mentioned in the document. Weaknesses: (1) Corr-F is weaker in identifying token-level hallucination,14 as in Example 3 in Table 5. Corr-F gives a higher score to TCONVS2S output with one hallucinated token “robotic”. However, Corr-A’s more fine-grained approach works slightly better in this case. (2) Corr-F/A tend to under-score summaries containing content mentioned in the ground truth but only touched briefly in the document. In Example 4 in Table 5, Corr-F/A score the output of TCONVS2S lower, even though it correctly captures “an academy for children with mental health”, which is mentioned only once in the document. In sum, Corr-F/A is less dependent on the reference summary by also considering the source document, and thus has less of a reference bias than BERTScore. In addition, Corr-F/A helps to identify ungrounded facts, i.e. content-level hallucinations, which is important for identifying misinformation in automated news reporting. 6.2 Robustness of Corr-F/A As noted in Section 3.1, Corr-F/A is based on publicly available SRL tools. To demonstrate the robustness of our metrics, we evaluate the same sys14Token-level hallucination means an incorrect token within an otherwise correct fact structure. Content-level hallucination happens when whole facts or arguments are hallucinated. tem outputs with Corr-F/A calculated using a lowerperforming SRL tool (He et al., 2017).15 The results are shown as Corr-F/A(L) in Table 4 and show full agreement with Corr-F/A in terms of system ordering. However, the better performing original SRL system widens the margin between systems. 7 Conclusions and Future Work We present an automatic evaluation framework for abstractive summarisation, which is low-cost and robust, as it does not rely on expert annotators nor is susceptible to crowdsourcing noise. Using fact representations, we are able to capture semantically similar, but at the same time distant in surface form, content in the summary that aligns with arbitrarily far-apart parts of the input document, casting our metric to be directly interpretable. Our metric is more sensitive to perturbations of the facts in the target summary, which resemble common hallucination phenomena of neural decoders (see Figure 2-3 in Appendix A for examples). In the future, we intend to investigate different meaning representation formalisms, such as AMR (Banarescu et al., 2013) and Dynamic Syntax (Kempson et al., 2001) and extend to other datasets (e.g. multiplereference summarization) and tasks (e.g. response generation in dialogue). Acknowledgements This research received funding from the EPSRC project MaDrIgAL (EP/N017536/1) and Charles University project PRIMUS/19/SCI/10. We would like to acknowledge the AWS Cloud Credits for Research programme. 15F1 on the Ontonotes 5.0 dataset is 81.6% (δ = −4.89). 5076 References Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the Original: Fact Aware Neural Abstractive Summarization. In AAAI, New Orleans, LA, USA. ArXiv: 1711.04434. Elizabeth Clark, Asli Celikyilmaz, and Noah A Smith. 2019. Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts. In ACL, Florence, Italy. James Clarke and Mirella Lapata. 2010. Discourse constraints for document compression. Computational Linguistics, 36(3):411–441. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Ondˇrej Duˇsek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG Challenge. Computer Speech & Language, 59:123 – 156. Ben Goodrich, Vinay Rao, Mohammad Saleh, and Peter J. Liu. 2019. Assessing The Factual Accuracy of Generated Text. In KDD, Anchorage, AK, USA. ArXiv: 1905.13322. Hardy, Shashi Narayan, and Andreas Vlachos. 2019. HighRES: Highlight-based reference-less evaluation of summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3381–3392, Florence, Italy. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364–369, Melbourne, Australia. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473–483, Vancouver, Canada. Association for Computational Linguistics. Ruth M Kempson, Wilfried Meyer-Viol, and Dov M Gabbay. 2001. Dynamic syntax: The flow of language understanding. Blackwell Oxford. Wojciech Kry´sci´nski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540– 551, Hong Kong, China. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3728–3738, Hong Kong, China. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 145–152, Boston, Massachusetts, USA. Association for Computational Linguistics. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object Hallucination in Image Captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4035–4045, Brussels, Belgium. Natalie Schluter. 2017. The limits of automatic summarisation according to ROUGE. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 41–45, Valencia, Spain. Association for Computational Linguistics. Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. Answers 5077 Unite! Unsupervised Metrics for Reinforced Summarization Models. In 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP) and 9th International Joint Conference on Natural Language Processing (IJCNLP), Hong Kong. ArXiv: 1909.01610. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073–1083, Vancouver, Canada. Ori Shapira, David Gabay, Yang Gao, Hadar Ronen, Ramakanth Pasunuru, Mohit Bansal, Yael Amsterdamer, and Ido Dagan. 2019. Crowdsourcing Lightweight Pyramids for Manual Summary Evaluation. In NAACL, Minneapolis, MN, USA. ArXiv: 1904.05929. Simeng Sun and Ani Nenkova. 2019. The Feasibility of Embedding Based Automatic Evaluation for Single Document Summarization. In 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP) and 9th International Joint Conference on Natural Language Processing (IJCNLP), Hong Kong. Simone Teufel and Hans van Halteren. 2004. Evaluating information content by factoid analysis: Human annotation and stability. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 419–426, Barcelona, Spain. Association for Computational Linguistics. Michael V¨olske, Martin Potthast, Shahbaz Syed, and Benno Stein. 2017. TL;DR: Mining Reddit to learn automatic summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 59–63, Copenhagen, Denmark. Association for Computational Linguistics. Stratos Xenouleas, Prodromos Malakasiotis, Marianna Apidianaki, and Ion Androutsopoulos. 2019. SUMQE: a BERT-based Summary Quality Estimation Model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6007–6013, Hong Kong, China. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance. In 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP) and 9th International Joint Conference on Natural Language Processing (IJCNLP), Hong Kong. ArXiv: 1909.02622. A Fact-level Content Weighting Examples Fig. 2 and 3 show examples for documents weighted using Corr-F/Corr-A with respect to different summaries. In Fig. 2, the left column shows one document weighted by the reference summary and two system-generated summaries from BERTSUMABS and TCONVS2S respectively (summaries are shown in the right column). As we can see, there are 4 relatively important facts in the document weighted by the reference summary. BERTSUMABS and TCONVS2S capture 3 and 2 out of 4, respectively. Other than the important facts highlighted by the reference summary, TCONVS2S also assigns high weights to other facts; that leads to the hallucinated generation and lower Corr-F Corr-A scores. On the other hand, BERTSUMABS’s summary weighs facts in the document in a similar way to the reference summary, which lead to a strongly related summary and high Corr-F and Corr-A scores. In Fig. 3, there are 5 relatively important facts in the document weighted by the reference summary. BERTSUMABS and TCONVS2S capture 4 and 3 out of 5, respectively. Both systems miss the fact “Pope Francis, who has taken a more liberal stance on homosexuality”. However, the weight of this fact given by BERTSUMABS’s output is higher than with TCONVS2S’s. The Corr-F and Corr-A are lower for TConvS2S due to misweighting of informative facts in the document. B Annotation Interface We provide the following illustrations of the human annotation interface: • Annotation interface for manual content weighting examples, including the instructions, for fact-level (Fig. 4 and 5) and argument-level (Fig. 6 and 7) annotation, • Examples of human annotation results for fact (Fig. 9) and argument (Fig. 8) level. Please refer to the individual figure captions for detailed descriptions. 5078 An australian runner who suffered like threatening burns when she was trapped by a bushfire during a race has completed the hawaii ironman, seen as the world's toughest triathlon Reference: BertSumAbs: TConvS2S: An australian runner who suffered severe burns in a bushfire in hawaii has completed an ironman triathlon  Corr-F: 0.96 Corr-A: 0.88 An australian runner become the first person to win a race for the first time in almost 30 years Corr-F: 0.67 Corr-A: 0.73 Figure 2: A document (left) weighted with respect to a reference summary and two system outputs (right), with Corr-F/Corr-A scores. The colour represents the sum of argument- and fact-level weights for each token (Eqs. 3 and 4). The darker the colour, the more important the fact is. France has said it will not back down over its nomination of an openly gay ambassador to the Vatican. Reference: BertSumAbs: TConvS2S: France has said it is considering whether to appoint a French ambassador to the Vatican as a replacement for the right-wing politician. Corr-F: 0.73 Corr-A: 0.59 The Vatican has announced the appointment of a new ambassador to the Vatican. Corr-F: 0.69 Corr-A: 0.39 Figure 3: Another document (left) weighted with respect to a reference summary and two system outputs (right), with Corr-F/Corr-A scores (see Fig. 2 for details). 5079 Figure 4: The instruction for fact-level human highlight annotation. Figure 5: The human annotation interface for fact level. Human judges are required to highlight content in the document that is supporting the fact printed in bold “The Queen has tweeted her thanks” (FACT1 of the summary in Figure 1 in the paper). 5080 Figure 6: The instruction for argument-level human highlight annotation. Figure 7: The human annotation interface for argument level. Human judges are required to highlight content in the document that is supporting the phrase printed in bold “on social media” (argument ARGM-LOC of FACT2 of the summary in Figure 1 in the paper). 5081 Figure 8: Human highlight annotation for the argument ARG1 of FACT1 “her thanks” of the summary in Figure 1 in the paper. Figure 9: Human highlight annotation for the FACT1 “The Queen has tweeted her thanks” of the summary in Figure 1 in the paper.
2020
455
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5082–5093 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5082 Hooks in the Headline: Learning to Generate Headlines with Controlled Styles Di Jin,1 Zhijing Jin,2 Joey Tianyi Zhou,3∗Lisa Orii,4 Peter Szolovits1 1CSAIL, MIT, 2Amazon Web Services, 3A*STAR, Singapore, 4Wellesley College {jindi15,psz}@mit.edu, [email protected] [email protected], [email protected] Abstract Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers. With no style-specific article-headline pair (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduced a novel parameter sharing scheme to further disentangle the style from the text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of our model generated headlines surpasses that of the state-ofthe-art summarization model by 9.68%, and even outperforms human-written references.1 1 Introduction Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorableness. However, currently even the best Headline Generation (HG) system can only fulfill the above requirement yet performs poorly on the latter. For example, in Figure 1, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What’s That Chuckle You Hear? It May Be the New Frog From NYC.” ∗Corresponding author. 1Our code is available at https://github.com/ jind11/TitleStylist. New frog species discovered in New York City area. It has a distinctive croak, scientists find. Leopard frog species doesn't yet have a name. Ribbit! Frog Species Found in New York City Has a Croak of Its Own Original Headline: Article Summ: Leopard Frog Found in New York City HG Model Output: What's that Chuckle You Hear? It May be the New Frog from NYC Humorous: A New Frog with a Croak of Its Own Awaits its Name in the Roads of NYC Romantic: 3 Facts about the New Frog with a Croak of Its Own Click-Baity: Figure 1: Given a news article, current HG models can only generate plain, factual headlines, failing to learn from the original human reference. It is also much less attractive than the headlines with humorous, romantic and click-baity styles. To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others. SHG is a highly skilled creative process, and usually only possessed by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title (Cao et al., 2018b,a; Lin et al., 2018; Song et al., 2019; Dong et al., 2019). These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style 5083 explicitly. More fundamentally, the training data comprise of a mixture of styles (e.g., the Gigaword dataset (Rush et al., 2017)), obstructing the models from learning a distinct style. In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. In particular, based on the transformer architecture (Vaswani et al., 2017), we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure 1. The main contributions of our paper are listed below: • To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data. • Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones. • Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box. 2 Related Work Our work is related to summarization and text style transfer. Headline Generation as Summarization Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules (Luhn, 1958; Edmundson, 1964; Mathis et al., 1973; Salton et al., 1997; Jing and McKeown, 1999; Radev and McKeown, 1998; Dorr et al., 2003). To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, Rush et al. (2015) proposed attentionbased summarization (ABS) to make Banko et al. (2000)’s framework of summarization more powerful. Many recent works extended ABS by utilizing additional features (Chopra et al., 2016; Takase et al., 2016; Nallapati et al., 2016; Shen et al., 2016, 2017a; Tan et al., 2017; Guo et al., 2017). Other variants of the standard headline generation setting include headlines for community question answering (Higurashi et al., 2018), multiple headline generation (Iwama and Kano, 2019), user-specific generation using user embeddings in recommendation systems (Liu et al., 2018), bilingual headline generation (Shen et al., 2018) and question-style headline generation (Zhang et al., 2018a). Only a few works have recently started to focus on increasing the attractiveness of generated headlines (Fan et al., 2018; Xu et al., 2019). Fan et al. (2018) focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMailstyle control shows a negligible improvement. Xu et al. (2019) utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers’ comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. Shu et al. (2018) proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transferring to many more styles. Text Style Transfer Our work is also related to text style transfer, which aims to change the style attribute of the text while 5084 preserving its content. First proposed by Shen et al. (2017b), it has achieved great progress in recent years (Xu et al., 2018; Lample et al., 2019; Zhang et al., 2018b; Fu et al., 2018; Jin et al., 2019; Yang et al., 2018; Jin et al., 2020). However, all these methods demand a text corpus for the target style; however, in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem. 3 Methods 3.1 Problem Formulation The model is trained on a source dataset S and target dataset T. The source dataset S = {(a(i), h(i))}N i=1 consists of pairs of a news article a and its plain headline h. We assume that the source corpus has a distribution P(A, H), where A = {a(i)}N i=1, and H = {h(i)}N i=1. The target corpus T = {t(i)}M i=1 comprises of sentences t written in a specific style (e.g., humor). We assume that it conforms to the distribution P(T). Note that the target corpus T only contains stylecarrying sentences, not necessarily headlines — it can be just book text. Also no sentence t is paired with a news article. Overall, our task is to learn the conditional distribution P(T|A) using only S and T. This task is fully unsupervised because there is no sample from the joint distribution P(A, T). 3.2 Seq2Seq Model Architecture For summarization, we adopt a sequence-tosequence (Seq2Seq) model based on the Transformer architecture (Vaswani et al., 2017). As in Figure 2, it consists of a 6-layer encoder E(·; θE) and a 6-layer decoder G(·; θG) with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model (Song et al., 2019). MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG. 3.3 Multitask Training Scheme To disentangle the latent style from the text, we adopt a multitask learning framework (Luong et al., 2015), training on summarization and DAE simultaneously (as shown in Figure 3). Multi-Head Self-Attention Layer Norm MLP Layer Norm Emb Emb Emb Encoder Decoder Multi-Head Encoder-Attention MLP Multi-Head Self-Attention Style-Dependent Layer Norm Style-Dependent Query Transformation Style-Dependent Layer Norm Emb Emb Emb Figure 2: The Transformer-based architecture of our model. Figure 3: Training scheme. Multitask training is adopted to combine the summarization and DAE tasks. Supervised Seq2Seq Training for ES and GS With the source domain dataset S, based on the encoder-decoder architecture, we can learn the conditional distribution P(H|A) by training zS = ES(A) and HS = GS(zS) to solve the supervised Seq2Seq learning task, where zS is the learned latent representation in the source domain. The loss function of this task is LS(θES, θGS) = E(a,h)∼S[−log p(h|a; θES, θGS)], (1) where θES and θGS are the set of model parameters of the encoder and decoder in the source domain and p(h|a) denotes the overall probability of generating an output sequence h given the input article a, which can be further expanded as follows: p(h|a; θES, θGS) = L Y t=1 p(ht|{h1, ..., ht−1}, zS; θGS), (2) where L is the sequence length. DAE Training for θET and θGT For the target style corpus T, since we only have the sentence t without paired news articles, we train zT = ET (˜t) and t = GT (zT ) by solving an unsupervised re5085 construction learning task, where zT is the learned latent representation in the target domain, and ˜t is the corrupted version of t by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error LT : LT (θET , θGT ) = Et∼T [−log p(t|˜t)], (3) where θET and θGT are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss LS and the unsupervised denoised auto-encoding loss LT via multitask learning, so the total loss becomes L(θES, θGS, θET , θGT ) = λLS(θES, θGS) + (1 −λ)LT (θET , θGT ), (4) where λ is a hyper-parameter. 3.4 Parameter-Sharing Scheme More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as P(T|A) = GT (ES(A)). However, without samples from P(A, T), this is a challenging or even impossible task if ES and ET , or GS and GT are completely independent of each other. Hence, we need to add some constraints to the network by relating ES and ET , and GS and GT . The simplest design is to share all parameters between ES and ET , and apply the same strategy to GS and GT . The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus T, we share all parameters of the encoder between two domains, i.e., between ES and ET , whereas we divide the parameters of the decoder into two types: styleindependent parameters θind and style-dependent parameters θdep. This means that only the styleindependent parameters are shared between GS and GT while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below. Type 1. Style Layer Normalization Inspired by previous work on image style transfer (Dumoulin et al., 2016), we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation x into a normalized activation z specific to the style s: z = γs(x −µ σ ) −βs, (5) where µ and σ are the mean and standard deviation of the batch of x, and γs and βs are style-specific parameters learned from data. Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers. Type 2. Style-Guided Encoder Attention Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We insert this thinking into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows: Q = query · W s q (6) K = key · Wk (7) V = value · Wv (8) Att(Q, K, V ) = Softmax  QKtr √dmodel  V , (9) where query, key, and value denote the triple of inputs into the multi-head attention module; W s q , Wk, and Wv denote the scaled dot-product matrix for affine transformation; dmodel is the dimension of the hidden states. We specialize the dot-product matrix W s q of the query for different styles, so that Q can be different to induce diverse attention patterns. 4 Experiments 4.1 Datasets We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence length in 5086 the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively. 4.1.1 Source Dataset The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set. We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus (Sandhaus, 2008) and treat the abstracts as the news articles. Following the standard preprocessing procedures (Kedzie et al., 2018),2 we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstracts-headlines pairs. We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models (Hermann et al., 2015).3 We use the short summaries in the original dataset as the news abstracts and automatically parsed the headlines for each news from the dumped news web pages,4 and in total collected 90,236 news abstract-headline pairs. 4.1.2 Three Target Style Corpora Humor and Romance For the target style datasets, we follow (Chen et al., 2019) to use humor and romance novel collections in BookCorpus (Zhu et al., 2015) as the Humor and Romance datasets.5 We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets. Clickbait We also tried to learn the writing style from the click-baity headlines since they have shown superior attraction to readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset.6 We collected 500K headlines for our use. Some examples from each style corpus are listed in Table 1. 2https://github.com/kedz/ summarization-datasets 3We use CNN instead of the DailyMail dataset since DailyMail headlines are very long and more like short summaries. 4https://cs.nyu.edu/˜kcho/DMQA/ 5https://www.smashwords.com/ 6https://www.kaggle.com/therohk/ examine-the-examiner Style Examples Humor - The crowded beach like houses in the burbs and the line ups at Walmart. - Berthold stormed out of the brewing argument with his violin and bow and went for a walk with it to practice for the much more receptive polluted air. Romance - “I can face it joyously and with all my heart, and soul!” she said. - With bright blue and green buttercream scales, sparkling eyes, and purple candy melt wings, it sat majestically on a rocky ledge made from chocolate. Clickbait - 11-Year-Old Girl and 15-Year-Old Boy Accused of Attempting to Kill Mother: Who Is the Adult? - Chilly, Dry Weather Welcomes 2010 to South Florida - End Segregation in Alabama-Bryce Hospital Sale Offers a Golden Opportunity Table 1: Examples of three target style corpora: humor, romance, and clickbait. 4.2 Baselines We compared the proposed TitleStylist against the following five strong baseline approaches. Neural Headline Generation (NHG) We train the state-of-the-art summarization model, MASS (Song et al., 2019), on our collected news abstracts-headlines paired data. Gigaword-MASS We test an off-the-shelf headline generation model, MASS from (Song et al., 2019), which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles.7 Neural Story Teller (NST) It breaks down the task into two steps, which first generates headlines from the aforementioned NHG model, then applies style shift techniques to generate style-specific headlines (Kiros et al., 2015). In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can refer to the official website.8 7https://github.com/harvardnlp/ sent-summary 8https://github.com/ryankiros/ neural-storyteller 5087 Fine-Tuned We first train the NHG model as mentioned above, then further fine-tuned it on the target style corpus via DAE training. Multitask We share all parameters between ES and ET , and between GS and GT , and trained the model on both the summarization and DAE tasks. The model architecture is the same as NHG. 4.3 Evaluation Metrics To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation. 4.3.1 Setup of Human Evaluation We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices. 4.3.2 Setup of Automatic Evaluation Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation performances are necessary proofs to compliment human evaluations on the model effectiveness. Summarization Quality We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), ROUGE (Lin, 2004) and CIDEr (Vedantam et al., 2015). For ROUGE, we used the Files2ROUGE9 toolkit, and for other metrics, we used the pycocoeval toolkit.10 Language Fluency We fine-tuned the GPT-2 medium model (Radford et al., 2019) on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs.11 4.4 Experimental Details We used the fairseq code base (Ott et al., 2019). During training, we use Adam optimizer with an initial learning rate of 5 × 10−4, and the batch size is set as 3072 tokens for each GPU with the parameters update frequency set as 4. For the random corruption for DAE training, we follow the standard practice to randomly delete or blank the word with a uniform probability of 0.2, and randomly shuffled the word order within 5 tokens. All datasets are lower-cased. λ is set as 0.5 in experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, and the sampling strategy follows the uniform distribution with the probability being equal to λ. 5 Results and Discussion 5.1 Human Evaluation Results The human evaluation is to have a comprehensive measurement of the performances. We conduct experiments on four criteria, relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table 2, and the last criteria in Table 4. Note that through automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform poorer than other methods (in Section 5.2), thereby we removed them in human evaluation to save unnecessary work for human raters. Relevance We first look at the relevance scores in Table 2. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG 9https://github.com/pltrdy/files2rouge 10https://github.com/Maluuba/nlg-eval 11PPL on the development set is 42.5 5088 Style Settings Relevance Attraction Fluency None NHG 6.21 8.47 9.31 Human 5.89 8.93 9.33 Humor Multitask 5.51 8.61 9.11 TitleStylist 5.87 8.93 9.29 Romance Multitask 5.67 8.54 8.91 TitleStylist 5.86 8.87 9.14 Clickbait Multitask 5.67 8.71 9.21 TitleStylist 5.83 9.29 9.44 Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. “None” represents the original headlines in the dataset. are usually like an organic reorganization of several keywords in the source context (as shown in Table 3), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original humanwritten headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity. Attraction In terms of attraction scores in Table 2, we have three findings: (1) The humanwritten headlines are more attractive than those from NHG, which agrees with our observation in Section 1. (2) Our TitleStylist can generate more attractive headlines over the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction and specialization of some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even out-weighting the original ones, which agrees with the fact that click-baity headlines are better at drawing readers’ attention. To be noted, although we learned the “Clickbait” style into our summarization system, we still made sure that we are generating relevant headlines instead of too exaggerated ones, which can be verified by our relevance scores. Fluency The human-annotated fluency scores in Table 2 verified that our TitleStylist generated headlines are comparable or superior to the humanwritten headlines in terms of readability. Style Strength We also validated that our TitleStylist can carry more styles compared with the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table 4. 5.2 Automatic Evaluation Results Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complimentary proof to ensure that the model has an acceptable level of summarization ability. Table 5 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table 5, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table 3 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body. From Table 5, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on more than 20 times larger dataset. Both NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps: summarization and style transfer, and the latter step is absent of the summarization task, which prevents the model from maintaining its summarization capability. In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, 5089 News Abstract Turkey’s bitter history with Kurds is figuring prominently in its calculations over how to deal with Bush administration’s request to use Turkey as the base for thousands of combat troops if there is a war with Iraq; Recep Tayyip Erdogan, leader of Turkey’s governing party, says publicly for the first time that future of Iraq’s Kurdish area, which abuts border region of Turkey also heavily populated by Kurds, is weighing heavily on negotiations; Hints at what Turkish officials have been saying privately for weeks: if war comes to Iraq, overriding Turkish objective would be less helping Americans topple Saddam Hussein, but rather preventing Kurds in Iraq from forming their own state. Reunified Berlin is commemorating 40th anniversary of the start of construction of Berlin wall, almost 12 years since Germans jubilantly celebrated reopening between east and west and attacked hated structure with sledgehammers; Some Germans are championing the preservation of wall at the time when little remains beyond few crumbling remnants to remind Berliners of unhappy division that many have since worked hard to heal and put behind them; What little remains of physical wall embodies era that Germans have yet to resolve for themselves; They routinely talk of ’wall in the mind’ to describe social and cultural differences that continue to divide easterners and westerners. Human Turkey assesses question of Kurds The wall Berlin can’t quite demolish NHG Turkey’s bitter history with Kurds Construction of Berlin wall is commemorated Humor What if there is a war with Kurds? The Berlin wall, 12 years later, is still there? Romance What if the Kurds say “No” to Iraq? The Berlin wall: from the past to the present Clickbait For Turkey, a long, hard road East vs West, Berlin wall lives on Table 3: Examples of style-carrying headlines generated by TitleStylist. Style NHG Multitask TitleStylist Humor 18.7 35.3 46.0 Romance 24.7 34.7 40.6 Clickbait 13.8 35.8 50.4 Table 4: Percentage of choices (%) for the most humorous or romantic headlines among TitleStylist and two baselines NHG and Multitask. unsupervised reconstruction training on both types of data can contribute to the summarization task, which throws light on the potential future work in summarization by incorporating unsupervised learning as augmentation. We find that in Table 5 TitleStylist-F achieves the best summarization performance. This implicates that, compared with the Multitask baseline where the two tasks share all parameters, specialization of layer normalization and encoder-attention parameters can make GS focus more on summarization. It is noteworthy that the summarization scores for TitleStylist are lower than TitleStylist-F but still comparable to NHG. This agrees with the fact that the GT branch more focuses on bringing in stylistic linguistic patterns into the generated summaries, thus the outputs would deviate from the pure summarization to some degree. However, the relevance degree of them remains close to the baseline NHG, which is the starting point we want to improve on. Later in the next section, we will further validate that these headlines are faithful to the new article through human evaluation. We also reported the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table 5. All outputs from baselines NHG and Multitask and our proposed TitleStylist show similar PPL compared with the test set (used in the fine-tuning stage) PPL 42.5, indicating that they are all fluent expressions for news headlines. 5.3 Extension to Multi-Style We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora. And we made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table 6. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to TitleStylist for all three styles. Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, and we allow human annotators to choose both options if they deem them as equivalent. The result is presented in the last column of Table 6, which shows that the attraction of TitleStylist-Versatile outputs is competitive to TitleStylist. TitleStylistVersatile thus generates multiple headlines in different styles altogether, which is a novel and efficient 5090 Style Corpus Model BLEU ROUGE-1 ROUGE-2 ROUGE-L CIDEr METEOR PPL (↓) Len. Ratio (%) None NHG 12.9 27.7 9.7 24.8 0.821 0.123 40.4 8.9 Gigaword-MASS 9.2 22.6 6.4 20.1 0.576 0.102 65.0 9.7 Humor NST 5.8 17.8 4.3 16.1 0.412 0.078 361.3 9.2 Fine-tuned 4.3 15.7 3.4 13.2 0.140 0.093 398.8 3.9 Multitask 14.7 28.9 11.6 26.1 0.995 0.134 40.0 9.5 TitleStylist 13.3 28.1 10.3 25.4 0.918 0.127 46.2 10.6 TitleStylist-F 15.2 29.2 11.6 26.3 1.022 0.135 39.3 9.7 Romance NST 2.9 9.8 0.9 9.0 0.110 0.047 434.1 6.2 Fine-tuned 5.1 18.7 4.5 16.1 0.023 0.128 132.2 2.8 Multitask 14.8 28.7 11.5 25.9 0.997 0.132 40.5 9.7 TitleStylist 12.0 27.2 10.1 24.4 0.832 0.134 40.1 7.4 TitleStylist-F 15.0 29.0 11.7 26.2 1.005 0.134 39.0 9.8 Clickbait NST 2.5 8.4 0.6 7.8 0.089 0.041 455.4 6.3 Fine-tuned 4.7 17.3 4.0 15.0 0.019 0.116 172.0 2.8 Multitask 14.5 28.3 11.2 25.5 0.980 0.132 38.5 9.7 TitleStylist 11.5 26.6 9.8 23.7 0.799 0.134 40.7 7.3 TitleStylist-F 14.7 28.6 11.4 25.9 0.981 0.133 38.9 9.6 Table 5: Automatic evaluation results of our TitleStylist and baselines. The test set of each style is the same, but the training set is different depending on the target style as shown in the “Style Corpus” column. “None” means no style-specific dataset, and “Humor”, “Romance” and “Clickbait” corresponds to the datasets we introduced in Section 4.1.2. During the inference phase, our TitleStylist can generate two outputs: one from GT and the other from GS. Outputs from GT are style-carrying, so we denote it as “TitleStylist”; outputs from GS are plain and factual, thus denoted as “TitleStylist-F.” The last column “Len. Ratio” denotes the average ratio of abstract length to the generated headline length by the number of words. Style Model BLEU RG-L Pref. (%) None TitleStylist-Versatile 14.5 25.8 — Humor TitleStylist-Versatile 12.3 24.5 42.6 TitleStylist 13.3 25.4 57.4 Romance TitleStylist-Versatile 12.0 24.2 46.3 TitleStylist 12.0 24.4 53.7 Clickbait TitleStylist-Versatile 13.1 24.9 52.9 TitleStylist 11.5 23.7 47.1 Table 6: Comparison between TitleStylist-Versatile and TitleStylist. “RG-L” denotes ROUGE-L, and “Pref.” denotes preference. feature. 6 Conclusion We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed the parameters sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models. Acknowledgement We appreciate all the volunteer native speakers (Shreya Karpoor, Lisa Orii, Abhishek Mohan, Paloma Quiroga, etc.) for the human evaluation of our study, and thank the reviewers for their inspiring comments. Joey Tianyi Zhou is partially supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045). References Michele Banko, Vibhu O Mittal, and Michael J Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 318–325. Association for Computational Linguistics. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018a. Retrieve, rerank and rewrite: Soft template based neural summarization. In ACL. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018b. Faithful to the original: Fact aware neural abstractive summarization. In Thirty-Second AAAI Conference on Artificial Intelligence. Cheng-Kuan Chen, Zhu Feng Pan, Ming-Yu Liu, and Min Sun. 2019. Unsupervised stylish image description generation via domain layer norm. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 8151–8158. AAAI Press. 5091 Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376–380. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197. Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: A parse-and-trim approach to headline generation. In Proceedings of the HLTNAACL 03 on Text summarization workshop-Volume 5, pages 1–8. Association for Computational Linguistics. Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. 2016. A learned representation for artistic style. arXiv preprint arXiv:1610.07629. HP Edmundson. 1964. Problems in automatic abstracting. Communications of the ACM, 7(4):259–263. Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, NMT@ACL 2018, Melbourne, Australia, July 20, 2018, pages 45–54. Association for Computational Linguistics. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Thirty-Second AAAI Conference on Artificial Intelligence. Yidi Guo, Heyan Huang, Yang Gao, and Chi Lu. 2017. Conceptual multi-layer neural network model for headline generation. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pages 355–367. Springer. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693–1701. Tatsuru Higurashi, Hayato Kobayashi, Takeshi Masuyama, and Kazuma Murao. 2018. Extractive headline generation based on learning to rank for community question answering. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1742–1753. Kango Iwama and Yoshinobu Kano. 2019. Multiple news headlines generation using page metadata. In Proceedings of the 12th International Conference on Natural Language Generation, 2019. Association for Computational Linguistics. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Unsupervised domain adaptation for neural machine translation with iterative back translation. arXiv preprint arXiv:2001.08140. Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, and Enrico Santus. 2019. Unsupervised text attribute transfer via iterative matching and translation. In IJCNLP 2019. Hongyan Jing and Kathleen McKeown. 1999. The decomposition of human-written summary sentences. Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018. Content selection in deep learning models of summarization. arXiv preprint arXiv:1810.12343. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages 3294–3302. Guillaume Lample, Sandeep Subramanian, Eric Michael Smith, Ludovic Denoyer, Marc’Aurelio Ranzato, and Y-Lan Boureau. 2019. Multiple-attribute text rewriting. In ICLR. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Junyang Lin, Xu Sun, Shuming Ma, and Qi Su. 2018. Global encoding for abstractive summarization. In ACL. Tianshang Liu, Haoran Li, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2018. Review headline generation with user embedding. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data - 17th China National Conference, CCL 2018, and 6th International Symposium, NLP-NABD 2018, Changsha, China, October 19-21, 2018, Proceedings, pages 324–334. Hans Peter Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of research and development, 2(2):159–165. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multitask sequence to sequence learning. CoRR, abs/1511.06114. Betty A Mathis, James E Rush, and Carol E Young. 1973. Improvement of automatic abstracts by the use of structural analysis. Journal of the American Society for Information Science, 24(2):101–109. 5092 Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Dragomir R Radev and Kathleen R McKeown. 1998. Generating natural language summaries from multiple on-line sources. Computational Linguistics, 24(3):470–500. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Alexander M Rush, SEAS Harvard, Sumit Chopra, and Jason Weston. 2017. A neural attention model for sentence summarization. In ACLWeb. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Gerard Salton, Amit Singhal, Mandar Mitra, and Chris Buckley. 1997. Automatic text structuring and summarization. Information processing & management, 33(2):193–207. Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Shi-Qi Shen, Yan-Kai Lin, Cun-Chao Tu, Yu Zhao, ZhiYuan Liu, Mao-Song Sun, et al. 2017a. Recent advances on neural headline generation. Journal of computer science and technology, 32(4):768–784. Shiqi Shen, Yun Chen, Cheng Yang, Zhiyuan Liu, and Maosong Sun. 2018. Zero-shot cross-lingual neural headline generation. IEEE/ACM Trans. Audio, Speech & Language Processing, 26(12):2319–2327. Shiqi Shen, Yu Zhao, Zhiyuan Liu, Maosong Sun, et al. 2016. Neural headline generation with sentence-wise optimization. arXiv preprint arXiv:1604.01904. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2017b. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6830–6841. Kai Shu, Suhang Wang, Thai Le, Dongwon Lee, and Huan Liu. 2018. Deep headline generation for clickbait detection. 2018 IEEE International Conference on Data Mining (ICDM), pages 467–476. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450. Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1054–1059. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. From neural sentence summarization to headline generation: A coarse-to-fine approach. In IJCAI, pages 4109–4115. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575. Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xuancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In ACL. Peng Xu, Chien-Sheng Wu, Andrea Madotto, and Pascale Fung. 2019. Clickbait? sensational headline generation with auto-tuned reinforcement learning. ArXiv, abs/1909.03582. Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 38 December 2018, Montr´eal, Canada, pages 7298– 7309. Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, Huanhuan Cao, and Xueqi Cheng. 2018a. Question headline generation for news articles. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018, pages 617–626. 5093 Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, and Enhong Chen. 2018b. Style transfer as unsupervised machine translation. ArXiv, abs/1808.07894. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19– 27.
2020
456
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5094–5107 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5094 Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward Luyang Huang1 Lingfei Wu2 and Lu Wang1 1Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115 2IBM Research AI, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598 [email protected], [email protected] [email protected] Abstract Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive. We argue that, to address these issues, the summarizer should acquire semantic interpretation over input, e.g., via structured representation, to allow the generation of more informative summaries. In this paper, we present ASGARD, a novel framework for Abstractive Summarization with GraphAugmentation and semantic-driven RewarD. We propose the use of dual encoders—a sequential document encoder and a graphstructured encoder—to maintain the global context and local characteristics of entities, complementing each other. We further design a reward based on a multiple choice cloze test to drive the model to better capture entity interactions. Results show that our models produce significantly higher ROUGE scores than a variant without knowledge graph as input on both New York Times and CNN/Daily Mail datasets. We also obtain better or comparable performance compared to systems that are finetuned from large pretrained language models. Human judges further rate our model outputs as more informative and containing fewer unfaithful errors. 1 Introduction Abstractive summarization aims to produce concise and informative summaries with the goal of promoting efficient information consumption and knowledge acquisition (Luhn, 1958). Significant progress has been made in this area by designing sequence-to-sequence-based neural models for single-document abstractive summarization (Gehrmann et al., 2018; Liu et al., 2018; Liu and Lapata, 2019). However, due to the limitations of model structure and word prediction-based Input Article of New York Times: John M. Fabrizi, the mayor of Bridgeport, admitted on Tuesday that he had used cocaine and abused alcohol while in office. Mr. Fabrizi, who was appointed mayor in 2003 after the former mayor, Joseph P. Ganim, went to prison on corruption charges, said he had sought help for his drug problem about 18 months ago and that he had not used drugs since. About four months ago, he added, he stopped drinking alcohol. Constructed Graph: Summary by Human: The Week column. Mayor John Fabrizi of Brigeport, Conn, publicly admits he used cocaine and abused alcohol while in office; says he stopped drinking alcohol and sought help for his drug problem about 18 months ago. cocaine drinking alcohol alcohol John M. Fabrizi, he, ... had used abused stopped Figure 1: Sample knowledge graph constructed from an article snippet. The graph localizes relevant information for entities (color coded, e.g. “John M. Fabrizi”) or events (underlined) and provides global context. learning objectives, these models frequently produce unfaithful content (Cao et al., 2018) and nearextractive summaries (See et al., 2017; Kry´sci´nski et al., 2018). These observations suggest that existing models lack semantic interpretation over the input, which is critical for summarization. We argue that the generation of informative and succinct abstracts requires structured representation to facilitate the connection of relevant subjects, and the preservation of global context, e.g. entity interactions and topic flows. Take Fig. 1 as an ex5095 ample. Complex events related with the same entity may span multiple sentences, making it challenging for existing sequential models to capture. A graph representation, on the contrary, produces a structured summary and highlights the proximity of relevant concepts. To this end, we present ASGARD, a framework for Abstractive Summarization with GraphAugmentation and semantic-driven RewarD.1 Under the encoder-decoder framework, we enhance the regular document encoder with a separate graph-structured encoder to maintain the global context and local characteristics of entities by using the outputs from an open information extraction (OpenIE) system. Specifically, we experiment with two graph variants, one mainly capturing entities’ document-level interactions and the other reflecting such interactions within each paragraph plus topic shifts across paragraphs. Both graphs can capture interactions among entities that are positioned far from one another in the document and significantly reduce redundancy, as shown in Fig. 1. The document encoder and the graph encoder then cooperate during abstract generation, wherein the model is trained to identify salient content by aligning graphs with human summaries. Though structured representation has been studied before for summarization (Fernandes et al., 2019), to the best of our knowledge, we are the first to utilize graph neural networks to explicitly encode entity-centered information for abstractive summary generation. Moreover, we propose a novel multi-choice cloze reward to drive the model to acquire semantic understanding over the input. Concretely, we design cloze questions by removing pairwise entities that are connected with a predicate or co-occur in a human summary sentence, whereas prior work only considers single entities to construct questions (Eyal et al., 2019). In tandem with our graph encoding of knowledge, the cloze reward further facilitates the acquisition of global entity interactions with reinforcement learning. We carry out automatic and human evaluations on popular summarization datasets. Models based on ASGARD yield significantly better ROUGE scores (Lin and Hovy, 2003) than a variant without access to the knowledge graph on two popular news summarization datasets, New York Times 1Our code is available at https://github.com/luyanghuang96/GraphAugmentedSum. corpus and CNN/Daily Mail dataset. Moreover, ASGARD models attain performance better than or comparable to others that are fine-tuned from large pretrained language models, including BERTSum (Liu and Lapata, 2019), UniLM (Dong et al., 2019), and BART (Lewis et al., 2019). Human judges further confirm that our models generate more informative summaries with less unfaithful errors than their counterparts without the graph encoder. Importantly, we find that automatic evaluation metrics only weakly correlate with these errors, implying that new evaluation methods are needed to better gauge summary quality. The rest of the paper is organized as follows. We describe related work in the next section (§ 2). We then discuss the knowledge graph construction in § 3 and formulate our graph-augmented summarization framework in § 4. In § 5, we introduce reinforcement learning with cloze reward. Experiments and results are presented in § 6 and § 7. Finally, we conclude in § 8. 2 Related Work Graph-Augmented Summarization and Generation. Graph structures have long been used for extractive summarization, such as in Textrank (Mihalcea and Tarau, 2004) and Lexrank (Erkan and Radev, 2004). For neural models, Tan et al. (2017) design graph-based attention to identify important sentences. For generating abstractive summaries, Fernandes et al. (2019) enhance a sequence-based encoder with graph neural networks (GNNs) to consider token-level entity types, however, entity interactions are largely ignored. On multi-document summarization, Fan et al. (2019) demonstrate the usefulness of encoding a linearized knowledge graph from OpenIE outputs. In this work, we design a graph encoder, which improves upon Graph Attention Networks (GATs) (Veliˇckovi´c et al., 2018), to capture the global context in a more effective manner. Also related is the graph-to-sequence framework that has been adopted for text generation (Song et al., 2018). Both Gated Graph Neural Networks (GGNNs) (Beck et al., 2018) and Graph Convolutional Networks (GCNs) (Damonte and Cohen, 2019) are shown to be effective in generating sentences from AMR graphs. Since Graph Attention Networks can better handle sparse graphs, they are used by Koncel-Kedziorski et al. (2019) with a transformer model to create scientific paper ab5096 stracts from knowledge graphs. Here we use graphs in addition to document encoder, both carrying complementary information for summarization. Reinforcement Learning and QA Reward for Abstractive Summarization. As pointed out by Ranzato et al. (2016), word-level maximum likelihood training brings the problem of exposure bias. Recent work utilizes reinforcement learning to directly optimize the model to maximize the informativeness of summaries by using different forms of ROUGE scores (Paulus et al., 2018; Chen and Bansal, 2018; Sharma et al., 2019). However, ROUGE does not always distinguish good summaries from bad ones (Novikova et al., 2017), and ignores entity interactions. Since question answering (QA) has been used for summary evaluation (Narayan et al., 2018), and is shown to correlate with human judgment of summaries qualities (Eyal et al., 2019), QA-based rewards have been studied for summarization model training. Arumae and Liu (2019) demonstrate that using fill-in-the-blank questions by removing entities or root words leads to improved content selection. Scialom et al. (2019) consider a similar setup, but use both F1 score and QA system confidence as rewards in abstractive summarization. Previous work, however, mainly focuses on single entities or words in human-written summaries, thereby losing contexts and relations. Moreover, fill-in-the-blank questions by prior work give credits only when the answers exactly match the ground-truths, thus causing inaccuracies for rephrased answers and discouraging abstract content generation. In contrast, we design a semantic-driven cloze reward by measuring how well a QA system can address multiple choice cloze questions which better encode entity interactions and handle paraphrased answers. 3 Knowledge Graph Construction To construct a knowledge graph from an input document, we utilize Stanford CoreNLP (Manning et al., 2014) to first obtain outputs from coreference resolution and open information extraction (OpenIE) models (Angeli et al., 2015). Note that we do not conduct global entity linking across documents. Next, we take the ⟨subject, predicate, object⟩triples extracted by OpenIE and remove any triple whose argument (subject or object) has more than 10 words. If two triples differ only by one argument, and the arguments overlap, we keep the longer triple. Generated Summary Input Article Mayor 's Admission of Cocaine Use …. RoBERTa Layers Bi-LSTM Layer GAT Layers Attention Layer The column Week ... Attention Layer OpenIE <SOS> Ct Ct v Node Initialization Figure 2: Our ASGARD framework with documentlevel graph encoding. Summary is generated by attending to both the graph and the input document. We begin constructing the graph by treating subjects and objects as nodes connected by directed edges, with predicates as attributes. We further collapse coreferential mentions of the same entity into one node. With this, we can localize salient content related to each entity as well as make connections of spread-out entities through graph paths. 4 Summarization Model In this section, we describe our graph-augmented abstractive summarization framework, as displayed in Fig. 2. Our model takes as input a document, represented as a sequence of tokens x = {xk}, and a knowledge graph G consisting of nodes {vi}. x and G are separately consumed by a document encoder and a graph encoder, as presented in § 4.1. Importantly, we present two types of graphs: DOCGRAPH, focusing on the global context, and SEGGRAPH, which additionally captures topic shift. The summary decoder then generates an abstractive summary by attending to both the document and the graph (§ 4.2). In § 4.3, we formulate a maximum likelihood training objective which leverages the detection of salient nodes in the graph. 4.1 Encoders Document Encoder. We first feed input x to RoBERTa (Liu et al., 2019) and take the last layer 5097 output as token embeddings. We then employ a single-layer bidirectional LSTM (BiLSTM) over token embeddings, producing encoder hidden states hk at time step k. Graph Encoder. Built on the graph constructed in § 3, we create nodes for predicates as done in previous graph-to-sequence work (Beck et al., 2018) to reduce model parameters. Directed, unlabeled edges are added from subject to predicate, and from predicate to object. We further add reverse edges and self-loops to enhance the information flow, and this forms the graph G. Node Initialization. Each node often contains multiple mentions of an entity; we thus initialize node representation vi by using the average embedding of its tokens. We leverage document encoder hidden states hk as the contextual representation of tokens. Number of mentions in the node is added as an extra encoding to vi, to signify entity salience. Contextualized Node Encoding. Our graph encoder improves upon Graph Attention Networks (GATs) (Veliˇckovi´c et al., 2018) by adding residual connections between layers as discussed in KoncelKedziorski et al. (2019). Each node vi is represented by a weighted average of its neighbors: ˆvi = vi + ∥N n=1 X vj∈N (vi) αn i,jW0,nvj (1) αn i,j = softmax((W1,nvi)T (W2,nvj)) (2) where ∥N n=1 denotes the concatenation of N heads, each producing a vector of the same dimension as vi. We use N = 4 in our experiments with two layers of GATs. N(vi) denotes the neighbors of vi in graph G. W∗are trainable parameters. The graph encoder described above encodes document-level global context by merging entity mentions throughout the document and capturing their interactions with graph paths. It is henceforth denoted as DOCGRAGH. Encoder Extension to Capture Topic Shift (SEGGRAGH). Modeling topic transitions and recurrences enables the identification of notable content, thus benefiting summarization (Barzilay and Lee, 2004). Since paragraphs naturally divide a document into different topic segments, we extend DocGragh by first encoding each paragraph as a subgraph Gp (for the p-th paragraph) using the same graph encoder, and then connecting all subgraphs with a BiLSTM. If two nodes in separate subgraphs refer to the same entity, they are initialized with the same embedding (as in the first occurrence). Concretely, we first apply max-pooling over all nodes in subgraph Gp from the outputs of the final GAT layer; the max-pooling results are then used as inputs for a BiLSTM to produce the final subgraph representation hg p for Gp. 4.2 Summary Decoder Our summary decoder uses a single-layer unidirectional LSTM with a hidden state st at step t; it generates summary tokens recurrently by jointly attending to the input document and the graph. Attending the Graph. At each decoding step t, we compute a graph context vector cv t with the attention mechanism (Bahdanau et al., 2014): cv t = X i av i,tˆvi (3) av i,t = softmax(uT 0 tanh(W3st + W4ˆvi)) (4) where u∗are also trainable parameters. We omit bias terms for simplicity. Attending the Document. Similarly, the document context ct is computed over input tokens by additionally considering the graph context cv t : ct = X k ak,thk (5) ak,t = softmax( uT 1 tanh(W5st + W6hk + W7cv t )) (6) Token Prediction. Graph and document context vectors, treated as salient content summarized from both sources, are concatenated with the decoder hidden state st to produce the vocabulary distribution Pvocab: Pvocab = softmax(Wout[st|ct|cv t ]) (7) We use weight-sharing between the input embedding matrix and the matrix Wout to allow reusing linguistic knowledge as proposed by Paulus et al. (2018). We further add a copy mechanism similar to See et al. (2017), with copy probability as: Pcopy = σ(Wcopy[st|ct|cv t |yt−1]) (8) where yt−1 denotes the embedding for the token predicted at step t −1. Modified Hierarchical Attention for SegGraph. As mentioned in § 4.1, SegGraph captures content salience by modeling topic shift across paragraphs. We thus seek to leverage paragraph-level importance to redistribute the node attentions, e.g., giving 5098 more attentions to nodes in important paragraphs. In particular, we utilize hierarchical attention (Hsu et al., 2018), where we first calculate attention ag t over subgraphs as done in Eq. 3 by replacing ˆvi with subgraph representation hg p. We then combine subgraph attentions ag t with the previously calculated attentions av t for nodes in the subgraph using scalar multiplication and renormalization over all nodes in input. This results in the new attention weights ˆav t , which are used to obtain graph context vector cv t as done in Eq. 3 for SegGraph. 4.3 Training Objectives We first consider a maximum likelihood (ML) training objective that minimizes the following loss: Lseq = −1 |D| X (y,x)∈D log p(y | x; θ) (9) where x are documents and y are references from the training set D, and θ are model parameters. Node Salience Labeling. In addition to modeling local characteristics of nodes, we further enhance the model by adding an objective to label node salience, e.g., whether the entities in a node are mentioned in the reference summaries. We introduce a soft mask layer over each node before it is passed into the graph encoder, to signify its salience. This layer, serving as an information gate, predicts a real number mi in [0, 1] for each node vi and multiplies to itself, i.e. mivi. For node vi, the mask is calculated as ˆmi = sigmoid(u2vi). During training, the gold-standard mask mi for a node is set to 1 if it contains at least one content word in the reference summary; otherwise, 0. We add the following objective for all nodes in the dataset D: Lmask = −1 Nv X vi∈D mi log( ˆmi)+ (1 −mi) log(1 −ˆmi) (10) where Nv represents the number of nodes in the dataset. Finally, the ML training objective takes the following form: Lml = Lmask + Lseq. 5 Reinforcement Learning with Cloze After maximum likelihood training with Lml, we further design a multiple choice cloze reward in a second-stage reinforcement learning (RL), leading the model to generate more faithful and informative summaries. For RL, we use a self-critical policy gradient algorithm (Rennie et al., 2017). During training, two summaries are generated: first, a summary ys, sampling tokens based on the probability distribution p(ys| x; θ) at each decoding step; and second, a baseline summary ˆy which greedily selects the tokens of the highest probability at each step. The objective of RL is defined based on the rewards of the two summaries, R(ys) and R(ˆy), as follows: Lrl = − 1 |D| X (ys,x)∈D (R(ys) −R(ˆy)) log p(ys| x; θ) (11) Our reward function uses the combination of ROUGE and the multiple choice cloze score introduced below, i.e., R(y) = Rrouge(y) + γclozeRcloze. For ROUGE, it considers F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L calculated against the reference summary, and takes the form of Rrouge(y) = γ1Rrouge−1(y) + γ2Rrouge−2(y) + (1 −γ1 −γ2)Rrouge−L(y). Multiple Choice Cloze Reward. Here, we present a novel multiple choice cloze reward to work with our knowledge graph and guide the summarization model towards improved awareness of entity interactions. We treat the systemgenerated summary as context. We provide a set of questions automatically constructed from the corresponding reference summary written by a human. We separately train a question answering (QA) model to address the questions by reading the context. Intuitively, if the system summary shares salient information with the reference, the QA model will assign the correct answers with high probability. We decide to use the average probability of the correct answers as our cloze reward. Below, we give details on how to construct the questions and candidate answers with examples shown in Fig. 3. Question Construction. We run the OpenIE tool on human-written summaries, retaining triples with arguments not longer than 5 words. For each triple of ⟨subject, predicate, object⟩, we create two types of questions: (1) argument pair questions, by removing the subject and object, and (2) predicate questions, by removing the predicate. Candidate Answer Construction. Because fill-inthe-blank style cloze may incorrectly penalize QA systems with answers paraphrased from the groundtruth, we opt for a multiple choice cloze. We construct three candidate answers in addition to the 5099 Reference Summary: Federal Reserve increases interest rates. IE Output: ⟨Federal Reserve, increases, interest rates ⟩ Salient Context: Federal Reserve signals positivity about the market. Fed increases benchmark interest rate again this May. American economy keeps the high growth rate. Jerome H. Powell discussed potential risks. IE Outputs: 1. ⟨Federal Reserve, signals, positivity ⟩ 2. ⟨American economy, keeps, the high growth rate ⟩ 3. ⟨Jerome H. Powell, discussed, potential risks ⟩ ⇓ Multiple Choice Cloze Questions: Argument Pair Question: increases . A. Federal Reserve, interest rates (D) B. interest rates, Federal Reserve (swapping args in A) C. American economy, interest rates (replacing arg using triple 2) D. Federal Reserve, potential risks (replacing arg using triple 3) Predicate Question: Federal Reserve interest rates. A. increases (D) B. signals C. keeps D. discussed Figure 3: Sample construction of multiple choice cloze questions and candidate answers from reference summary and salient context. Arguments and predicates in candidate answers are color-coded and italicized. gold-standard from the salient context, which are summary-worthy sentences selected from the input. Specifically, we use greedy search to select the best combination of sentences that maximizes ROUGE2 F1 with reference to human summary. We further include a sentence in the salient context if it has a ROUGE-L recall greater than 0.6 when compared with any sentence in the reference. We first select OpenIE triples from the salient context and filter out those that have any overlapping content word with the correct answer. For argument pair questions, we create one candidate answer by swapping the subject and the object (e.g. candidate B as in Fig. 3) and two candidates by replacing the subject or the object with another argument of the same role extracted from the salient context (e.g. candidates C and D). If not enough answers are created, we further consider randomly selecting sentences from the input. For predicate questions, we use predicates in other triples from the context as candidate answers. Among all candidates, we select the three that are able to construct the most fluent questions using perplexity predicted by BERT (Devlin et al., 2019). In case reference summaries do not yield OpenIE triples, we create additional entity pair questions. We remove two co-occurring entities from the summary and create three candidate answers in the same way as described above. QA Model. We fine-tune RoBERTa (Liu et al., 2019) to build our QA model. We use the salient context described above as the context for training. We then concatenate the context, the question, and each of the four candidate answers, and pass the final [CLS] representation through a fully-connected layer, from which the answer is predicted. 6 Experimental Setups Datasets. We experiment with two popular summarization datasets with summaries containing multiple sentences: the New York Times annotated corpus (NYT) (Sandhaus, 2008) and the CNN/Daily Mail dataset (CNN/DM) (Hermann et al., 2015). We follow the preprocessing steps and experimental setups from prior work (Paulus et al., 2018; See et al., 2017) for both datasets. For NYT, the training, validation, and test sets contain 588, 909, 32, 716, and 32, 703 samples. For CNN/DM, the numbers are 287, 188, 13, 367, and 11, 490. To train our cloze QA model for NYT, we construct 1, 414, 336 question-answer pairs from human-written summaries in the training set based on the method described in § 5. On CNN/DM, we collect 1, 361, 175 question-answer samples from the training set. For both datasets, we set aside 20, 000 samples as a validation set and 20, 000 samples as a test set. Our QA model achieves an accuracy of 97% on NYT and 95% on CNN. Training Details and Parameters. We use the base version of RoBERTa model to extract token features for all experiments. We truncate input articles to 1024 (NYT) and 512 (CNN/DM) BPEs. We employ LSTM models with 256-dimensional hidden states for the document encoder (128 each direction) and the decoder. For the residual connection of the graph encoder, we use 4 heads, each with a dimension of 72. For DocGraph training and inference, we prune isolated graphs with fewer than three nodes to increase robustness and reduce redundancy. We set γ1 = 0, γ2 = 0.75 on NYT and γ1 = 0.33, γ2 = 0.33 on CNN/DM after tuning on the validation set. For both datasets, we set γcloze = 0.05. More details about parameters and graph statistics are in the Appendices. Baselines and Comparisons. For both datasets, 5100 System ROUGE-1 ROUGE-2 ROUGE-L LEAD-3 32.59 16.49 29.17 POINTGEN+COV 41.06 25.71 37.28 DEEPREINFORCE 47.03 30.72 43.10 BOTTOMUP 47.38 31.23 41.81 DCA 48.08 31.19 42.33 SENECA 47.94 31.77 44.34 BART 53.25 36.61 48.78 Our Models NOGRAPH 47.15 32.02 43.65 +Rrouge 49.17 33.19 46.44 ASGARD-DOC 49.51 33.82 45.72 +Rrouge 50.18 33.91 46.84 +Rrouge + Rcloze 50.59 33.98 48.24 ASGARD-SEG 49.54 33.84 45.75 +Rrouge 50.47 33.95 47.43 +Rrouge + Rcloze 51.29 34.97 48.26 Table 1: Automatic evaluation with ROUGE on New York Times. Best results are in boldface. Best of our models are in italics. ASGARD-SEG+Rrouge+Rcloze yields significantly higher scores than our other models with approximate randomization test (p < 0.0005). we include an extractive baseline LEAD-3. We further add the following abstractive models for comparison: (1) a pointer-generator model with coverage (See et al., 2017) (POINTGEN+COV); (2) a deep reinforcement learning-based model (Paulus et al., 2018) (DEEPREINFORCE); (3) a bottom-up model (Gehrmann et al., 2018) (BOTTOMUP); (4) a deep communicating agents-based summarization model (Celikyilmaz et al., 2018) (DCA). We also report results by fine-tuning BART model (Lewis et al., 2019). In Lewis et al. (2019), fine-tuning is only performed on CNN/Daily Mail. We apply the same method for NYT. For NYT, we add results by SENECA model (Sharma et al., 2019) from our prior work, which previously achieved the best ROUGE-2. On CNN/Daily Mail, we include comparisons of a two-stage fine-tuned model (first on an extractor, then on an abstractor) with BERT (Liu and Lapata, 2019) (BERTSUMEXTABS), and a unified pretrained language model for generation (Dong et al., 2019) (UNILM). In addition to ASGARD-DOC and ASGARDSEG, which are trained with an ML objective, we report results trained with ROUGE as the reward (Rrouge), and with an additional cloze reward (Rcloze). Lastly, we consider a variant NOGRAPH by ablating the graph encoder. System ROUGE-1 ROUGE-2 ROUGE-L LEAD-3 40.23 17.52 36.34 POINTGEN+COV 39.53 17.28 36.38 DEEPREINFORCE 41.16 15.75 39.08 BOTTOMUP 41.22 18.68 38.34 DCA 41.69 19.47 37.92 BERTSUMEXTABS 42.13 19.60 39.18 UNILM 43.33 20.21 40.51 BART 44.16 21.28 40.90 Our Models NOGRAPH 39.55 17.89 36.75 +Rrouge 41.37 17.63 37.99 ASGARD-DOC 40.38 18.40 37.51 +Rrouge 43.10 17.58 39.41 +Rrouge + Rcloze 43.93 20.37 40.48 ASGARD-SEG 40.09 18.30 37.30 +Rrouge 42.94 17.93 39.36 +Rrouge + Rcloze 43.81 20.22 40.37 Table 2: Automatic evaluation with ROUGE on CNN/Daily Mail. Best results of our model variants are in italics. Both ASGARD-SEG+Rrouge+Rcloze and ASGARD-DOC+Rrouge+Rcloze obtain significantly better scores than other model variants (p < 0.0005). 7 Results 7.1 Automatic Evaluation Results on NYT. As displayed in Table 1, our ASGARD-SEG model trained with ROUGE and cloze rewards achieves better ROUGE scores (Lin and Hovy, 2003) than all other comparisons except the fine-tuned BART. However, our ASGARDSEG’s ROUGE-L score is comparable to BART. This indicates the effectiveness of our graphaugmented summarization framework. Moreover, both our ASGARD-DOC and ASGARD-SEG models yield significantly higher ROUGE scores than the variant without the graph encoder (NOGRAPH). This demonstrates the benefit of using structured representation to encode entity interactions. Furthermore, both ASGARD-DOC and ASGARD-SEG with cloze reward (Rcloze) obtain significantly higher scores compared to the models trained with ROUGE reward only. This signifies that our multi-choice cloze reward can guide better semantic interpretation of content, leading to the generation of more informative summaries. We also find that ASGARDSEG outperforms ASGARD-DOC, indicating that ASGARD-SEG better captures topic drift through multiple paragraphs. Results on CNN/DM. We observe similar trends on the CNN/DM articles as shown in Table 2. No5101 NYT CNN/DM 55 60 65 70 75 80 85 90 Cloze Score 91.1 90.8 68.3 66.7 72.7 75.9 71.0 75.7 Probability NYT CNN/DM 70 75 80 85 90 95 100 97.8 96.6 78.7 76.1 82.1 84.2 80.9 83.9 Accuracy Human NoGraph+Rrouge ASGARD-doc+Rrouge+Rcloze ASGARD-seg+Rrouge+Rcloze Figure 4: Evaluation with QA model prediction probability and accuracy on our multiple choice cloze test, with higher numbers indicating better summaries. ticeably, ASGARD-DOC trained with the combined ROUGE and cloze reward produces better ROUGE scores than BERTSUMEXTABS and UNILM, which are carefully fine-tuned from large pretrained language models, and the numbers are also comparable to the fine-tuned BART. Evaluation with Cloze Test. We further evaluate model-generated summaries with our proposed cloze test. Here, we report two scores in Fig. 4: the average probability of the correct answers output by our QA model, and its prediction accuracy. We first calculate one score per summary, then take the average over all summaries. We can see that our models with graph encoders perform better than the variant without it. 7.2 Human Evaluation We further conduct human evaluation to analyze the informativeness and fluency of the generated summaries, as well as to investigate the unfaithful errors made by different models. We sample 100 articles from the NYT test set and hire three native or fluent speakers of English to rate summaries generated by our two systems, NOGRAPH+Rrouge and ASGARD-SEG+Rrouge + Rcloze, along with outputs by BART and human-written summaries (presented in random order). After reading the articles, each judge scores summaries on a Likert scale from 1 (worst) to 5 (best) on informativeness—whether the summary covers important information from the input, and fluency—whether the summary is grammatically correct. We consider three types of unfaithful errors: (i) hallucination error—creating content not present in the input, (ii) out-of-context error—generating facts without including required context or within System Inf.↑Flu.↑Hal.↓Out.↓Del./Sub.↓ HUMAN 4.47 4.65 21% 10% 10% NOGRAPH +Rrouge 3.94 3.65 9%∗ 26% 22% ASGARD-SEG +Rrouge + Rcloze 4.12† 3.77† 23% 14%† 9%∗ BART 4.44∗4.66∗16% 15% 12% Table 3: Human evaluation on informativeness (Inf.) and fluency (Flu.) (1-to-5), and percentages of unfaithful errors of hallucination (Hal.), out-of-context (Out.) and deletion or substitution (Del./Sub.). ∗: significantly different from all other models. †: ASGARD-SEG is significantly better than NOGRAPH (p < 0.05). Interrater agreement with Krippendorf’s α for all columns: 0.61, 0.70, 0.57, 0.50 and 0.43. Summary by Human: Family Court in Burlington County, NJ, rules that lesbian couple can list both their names as parents on birth certificate of newborn; state attorney general’s office drops opposition to move; court ruling negates couple’s having to go through adoption proceedings to establish full parental rights for both. NoGraph+Rrouge: Lesbian couple in South Jersey wins court approval to have both of their names listed as parents on birth certificate of their newborn. it will no longer oppose such applications ASGARD-doc+Rrouge + Rcloze: Lesbian couple in South Jersey, won court approval to have both of their names listed as parents on birth certificate of their newborn. attorney general’s office says it will no longer oppose such applications ASGARD-seg+Rrouge + Rcloze: Lesbian couple in South Jersey wins court approval to have both of their names listed as parents on birth certificate of newborn and attorney general ’s office will no longer oppose such applications. decision stems from Oct 0 ruling by New Jersey Supreme Court holding that samesex couples are entitled to same legal rights and protections as heterosexual couples Figure 5: Sample summaries for an NYT article. Summaries by our models with the graph encoder are more informative than the variant without it. incorrect context, and (iii) deletion or substitution error—mistakenly deleting or substituting subjects, objects, or clauses. We ask the annotators to label each type as 1 for existence of errors, and 0 otherwise. Detailed guidelines are in the Appendices. From Table 3, we can see that our ASGARDSEG model obtains better scores in informativeness and fluency, compared to the variant without the graph encoder. This indicates the effectiveness of leveraging knowledge graph representation. Sample output summaries by our models can be found in Fig. 5. Meanwhile, fine-tuned BART model produces outputs with similar informativeness and fluency of human-constructed summaries, suggest5102 ing a future direction of building our model on top of a large-pretrained encoder-decoder model. For unfaithful errors, we report the percentage of errors calculated by majority voting (i.e., more than one annotator vote as incorrect). First, we find that our ASGARD-SEG model has a comparable error pattern as human summaries. Specifically, for out-of-context and deletion or substitution errors, our graph-enhanced model produces significantly fewer mistakes in these categories, compared to the model without graph information. This implies that knowledge graph-enhanced models can improve summary faithfulness. Interestingly, human-written summaries are also discerned to contain a nontrivial amount of hallucination errors. After inspection, we find that human tends to leverage world knowledge to include content that is not covered by the articles. For instance, for an article discussing events in “Boston”, the human writer may describe them as happening in “Massachusetts” in the summary. 7.3 Analyzing Automatic Metrics and Summary Errors We further plot the distributions of automatic evaluation scores regarding the three types of unfaithful errors based on majority voting in Fig. 6. First, summaries with out-of-context and deletion or substitution errors receive lower cloze and ROUGE scores overall. Nevertheless, with regard to hallucination errors, we do not see such pattern; there is even a slightly reversed relation with both cloze scores and ROUGE scores, wherein summaries with more hallucination errors tend to score higher. This echos our previous observation that human summaries can be hallucinatory too, where world knowledge is used for writing the summaries.2 Furthermore, we find a weak correlation between the three variants of ROUGE scores and three types of errors, e.g., the minimum and the maximum values of Pearson’s r are −0.19 and 0.14. This suggests that new metrics should be designed to better gauge summary quality. We plan to study this direction in future work. 2During human evaluation, we do not ask human judges to distinguish the source of hallucination errors, i.e. from world knowledge or out of fabrication, since this requires significant domain knowledge. True False 0.0 0.2 0.4 0.6 0.8 1.0 Cloze Score True False ROUGE-1 True False ROUGE-2 True False ROUGE-L True False 0.0 0.2 0.4 0.6 0.8 1.0 True False True False True False True False 0.0 0.2 0.4 0.6 0.8 1.0 True False True False True False Automatic Evaluation Hallucination Out-of-Context Deletion/Substitution Figure 6: Distribution of automatic summarization metrics with three types of unfaithful errors. “True” indicates summaries with such type of error. 8 Conclusion We presented a novel knowledge graph-augmented abstractive summarization framework, along with a novel multiple choice cloze reward for reinforcement learning. Our models capture both local characteristics and global interactions of entities from the input, thus generating summaries of higher quality. In tandem with the graph representation, our cloze reward further improves summary content. Human evaluation further confirms that our graphaugmented models trained with the cloze reward produce more informative summaries and significantly reduces unfaithful errors. Acknowledgements This research is supported in part by National Science Foundation through Grant IIS-1813341, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA865017-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. We thank the anonymous reviewers for their suggestions. 5103 References Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 344–354, Beijing, China. Association for Computational Linguistics. Kristjan Arumae and Fei Liu. 2019. Guiding extractive summarization with question-answering rewards. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2566–2577, Minneapolis, Minnesota. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 113–120. Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 273–283, Melbourne, Australia. Association for Computational Linguistics. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI). Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662–1675, New Orleans, Louisiana. Association for Computational Linguistics. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Marco Damonte and Shay B Cohen. 2019. Structural neural encoders for amr-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3649–3658. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 13063–13075. Curran Associates, Inc. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22:457–479. Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation metric for news article summarization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3938–3948, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Claire Gardent, Chlo´e Braud, and Antoine Bordes. 2019. Using local knowledge graph construction to scale Seq2Seq models to multidocument inputs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 4177–4187, Hong Kong, China. Association for Computational Linguistics. Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Structured neural summarization. In International Conference on Learning Representations. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, 5104 and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693–1701. Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 132–141, Melbourne, Australia. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR). Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, and Hannaneh Hajishirzi. 2019. Text Generation from Knowledge Graphs with Graph Transformers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2284–2293, Minneapolis, Minnesota. Association for Computational Linguistics. Wojciech Kry´sci´nski, Romain Paulus, Caiming Xiong, and Richard Socher. 2018. Improving abstraction in text summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1808–1817. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Chin-Yew Lin and Eduard Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Cooccurrence Statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 71–78. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In International Conference on Learning Representations. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721–3731, Hong Kong, China. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Hans Peter Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of research and development, 2(2):159–165. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404–411. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. In 2017 Conference on Empirical Methods in Natural Language Processing, pages 2231–2242. Association for Computational Linguistics. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008–7024. Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summarization models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3244–3254, Hong Kong, China. Association for Computational Linguistics. 5105 Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Eva Sharma, Luyang Huang, Zhe Hu, and Lu Wang. 2019. An entity-driven framework for abstractive summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3278–3289, Hong Kong, China. Association for Computational Linguistics. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for AMRto-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616– 1626, Melbourne, Australia. Association for Computational Linguistics. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1181, Vancouver, Canada. Association for Computational Linguistics. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations. Accepted as poster. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. 5106 A Appendices A.1 Experiment Details Statistics of Knowledge Graphs. We show the statistics of knowledge graphs on two datasets in Table 4. On each dataset, we construct a large graph with abundant relations for each article. Note that on CNN/DM we have more arguments but fewer predicates in a document than those on NYT. This indicates CNN/DM has fewer coreferred entities. Dataset Doc DOCGRAPH SEGGRAPH # word # Arg. # Pre. # Arg. # Pre. # Para. NYT 795.9 131.6 87.3 6.40 3.74 23.5 CNN/DM 789.9 138.1 85.2 6.30 3.57 24.2 Table 4: Statistics of NYT and CNN/DM datasets. # Arg.: number of arguments in each document or paragraph. # Pre.: number of predicates in each document or paragraph. # Para.: number of paragraphs in each document. Two datasets have comparable graph size. Training Details. We utilize Adam (Kingma and Ba, 2015) with a gradient clipping of 2.0 and a batch size of 32 for all models. During ML training, a learning rate of 0.001 is used; during RL stage, it is reduced to 0.0001 (Paulus et al., 2018). We use the base version of BERT model (Devlin et al., 2019) to select candidate answers and we finetune the base version of RoBERTa model (Liu et al., 2019) to build our QA model. We take pretrained models from Wolf et al. (2019). A.2 Human Evaluation Guideline In our human evaluation, each human annotator is presented with 100 news articles. The annotators are asked to evaluate four summaries (in random order) for each article on two aspects (informativeness and fluency) on a scale of 1 to 5 (1 being very poor and 5 being very good). Furthermore, for unfaithfulness, we define three types of unfaithful errors and ask annotators to label whether summaries contain any type of error. Instructions in Table 5 are given to human judges. Here are descriptions of the aspects: • Informativeness: Whether the summary provides enough and necessary content coverage from the input article. • Fluency: Whether the summary is free of obvious grammatically incorrect sentences (e.g., fragments, missing components) that make the text difficult to read. • Faithfulness: Whether the summary accords with the facts expressed in the source. 5107 Article: With a Little Extra Cash. What to do with a bonus? The right thing, of course, is to pay off debts or save it for a time when there are not any bonuses. But in Albany, any financial windfall invites hordes of legislators hungrily seeking ways to spend it. This has already started to happen, with lawmakers eyeballing a projected budgetary surplus of just under $1 billion – not all that grand when you consider that the total state budget is in the neighborhood of $120 billion, but a healthy number nonetheless. But one essential part of the equation is different this year: a new governor guarding the state finances. Nobody knows quite yet how Gov. Eliot Spitzer will manage a Legislature that wants to add a lot of its favorite things to his budget before they return it for his approval. One suggestion: Mr. Spitzer should keep his fist as tightly closed as possible, especially on his new school aid formula and his Medicaid adjustments. (....) Informativeness: 1 Not relevant to the article e.g., “editorial on gov eliot spitzer ’s plan to spend it . of new governor guarding state finances . and to spitzer should keep his fist as tightly closed as possible , especially on new school aid formula and his medicaid adjustments .” 3 Relevant, but misses the main point of the article e.g., “editorial on new gov eliot spitzer ’s new governor guarding state finances . says spitzer should keep his new school aid formula and his medicaid adjustments” 5 Successfully captures the main point of the article e.g., “Editorial says New York Gov Eliot Spitzer , faced with projected $ 0 billion budget surplus , should be tight-fisted and cautious about overspending” Fluency: 1 Summary is full of garbage fragments and is hard to understand e.g., “of new governor guarding state finances . and to spitzer should keep his fist as tightly closed as possible , to” 2 Summary contains fragments, missing components but has some fluent segments e.g., “editorial on gov eliot spitzer ’s plan to spend it . of new governor guarding state finances . and to spitzer should keep his fist as tightly closed as possible , especially on new school aid formula and his medicaid adjustments.” 3 Summary contains some grammar errors but is in general fluent e.g., “editorial on any financial windfall invites hordes of legislators hungrily seeking ways to spend it . how gov eliot spitzer will manage legislature that wants to add lot of its favorite to his budget before they return it for his approval .” 4 Summary has relatively minor grammatical errors e.g., “article on in any financial windfall invites hordes of legislators hungrily seeking ways to spend it” 5 Fluent summary e.g., “editorial says new new jersey gov eliot spitzer guarding state finances . says spitzer should keep his new school aid formula and his medicaid adjustments” Faithfulness: We define three types of unfaithful errors. Each type is labeled as “0” or “1” independently. “0” means summary does not make this type of error and “1” suggests this type of error occurs. Three types of errors are : i Hallucination error: Fabricated content that does not occur in the original article e.g., “correction of dec 0 about new york column on state budget” ii Out-of-Context error: Fact occurs in the article, but fails without correct context e.g., “Editorial says one essential part of the equation is different this year: a new governor guarding the tate finances.” iii Deletion or Substitution error: Summary contains incorrectly edited, missing elements; or summary incorrectly concatenates elements from different sentences. e.g., “editorial says new new jersey gov eliot spitzer guarding state finances, keeping his new school aid formula adjustments.” Table 5: Sample summaries with explanations on human evaluation aspect scales, and the definition of three types of unfaithful errors.
2020
457
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5108–5120 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5108 Optimizing the Factual Correctness of a Summary: A Study of Summarizing Radiology Reports Yuhao Zhang1, Derek Merck2, Emily Bao Tsai1, Christopher D. Manning1, Curtis P. Langlotz1 1Stanford University 2University of Florida {yuhaozhang, ebtsai, manning, langlotz}@stanford.edu [email protected] Abstract Neural abstractive summarization models are able to generate summaries which have high overlap with human references. However, existing models are not optimized for factual correctness, a critical metric in real-world applications. In this work, we develop a general framework where we evaluate the factual correctness of a generated summary by factchecking it automatically against its reference using an information extraction module. We further propose a training strategy which optimizes a neural summarization model with a factual correctness reward via reinforcement learning. We apply the proposed method to the summarization of radiology reports, where factual correctness is a key requirement. On two separate datasets collected from hospitals, we show via both automatic and human evaluation that the proposed approach substantially improves the factual correctness and overall quality of outputs over a competitive neural summarization system, producing radiology summaries that approach the quality of humanauthored ones. 1 Introduction Neural abstractive summarization systems aim at generating sentences which compress a document while preserving the key facts in it (Nallapati et al., 2016b; See et al., 2017; Chen and Bansal, 2018). These systems are potentially useful in many realworld applications. For example, Zhang et al. (2018) have shown that customized neural abstractive summarization models are able to generate radiology summary statements with high quality by summarizing textual findings written by radiologists. This task has significant clinical value because of its potential to accelerate the radiology workflow, reduce repetitive human labor, and improve clinical communications (Kahn Jr et al., 2009). Background: radiographic examination of the chest. clinical history: 80 years of age, male ... Findings: frontal radiograph of the chest demonstrates repositioning of the right atrial lead possibly into the ivc. ... a right apical pneumothorax can be seen from the image. moderate right and small left pleural effusions continue. no pulmonary edema is observed. heart size is upper limits of normal. Human Summary: pneumothorax is seen. bilateral pleural effusions continue. Summary A (ROUGE-L = 0.77): no pneumothorax is observed. bilateral pleural effusions continue. Summary B (ROUGE-L = 0.44): pneumothorax is observed on radiograph. bilateral pleural effusions continue to be seen. Figure 1: A (truncated) radiology report and summaries with their ROUGE-L scores. Compared to the human summary, Summary A has high textual overlap (i.e., ROUGE-L) but makes a factual error; Summary B has a lower ROUGE-L score but is factually correct. However, while existing abstractive summarization models are optimized to generate summaries that highly overlap with human references (Paulus et al., 2018), this does not guarantee factually correct summaries, as shown in Figure 1. Therefore, maintaining factual correctness of the generated summaries remains a critical yet unsolved problem. For example, Zhang et al. (2018) found that about 30% of the outputs from a radiology summarization model contain factual errors or inconsistencies. This has made such a system unusable in practice, as factual correctness is critically important in this domain to prevent medical errors. Existing attempts at improving the factual correctness of abstractive summarization models have seen very limited success. For example, Cao et al. (2017) augmented the attention mechanism of neural models with factual triples extracted with open information extraction systems; Falke et al. (2019) 5109 studied using natural language inference systems to rerank generated summaries based on their factual consistencies; Kry´sci´nski et al. (2019b) proposed to verify factual consistency of generated summaries with a weakly-supervised model. Despite these efforts, none of the existing work has focused explicitly on optimizing an abstractive summarization system with a correctness objective. As a result, even state-of-the-art systems trained with ample data still produce summaries with a substantial number of factual errors (Goodrich et al., 2019; Kry´sci´nski et al., 2019a). In this work we aim to optimize the factual correctness of existing neural summarization systems, with a focus on summarizing radiology reports. This task has several key properties that make it ideal for studying factual correctness in summarization models. First, the clinical facts or observations present in radiology reports have less ambiguity compared to open-domain text, which allows objective comparison of facts. Second, radiology reports involve a relatively limited space of facts, which makes automatic measurement of factual correctness in the generated text approachable. Lastly, as factual correctness is a crucial metric in this domain, improving factual correctness will directly lead to an ability to use the system. To this end, we design a framework where an external information extraction system is used to extract information in the generated summary and produce a factual accuracy score by comparing it against the human reference summary. We further develop a training strategy where we combine a factual correctness objective, a textual overlap objective and a language model objective, and jointly optimize them via reinforcement learning (RL). On two datasets of radiology reports collected from different hospitals, we show that our training strategy substantially improves the factual correctness of the summaries generated by a competitive neural summarization system. Moreover, we observe for the first time that, even in the absence of a factual correctness objective, optimizing a textual overlap-based metric substantially improves the factual correctness of the resulting system compared to maximum likelihood training. We further show via human evaluation and analysis that our training strategy leads to summaries with higher overall quality and correctness and which are closer to the human-written ones. Our main contributions are: (i) we propose a general framework and a training strategy for improving the factual correctness of summarization models by optimizing a multi-part objective via RL; (ii) we apply the proposed strategy to radiology reports, and empirically show that it improves the factual correctness of the generated summaries; and (iii) we demonstrate via radiologist evaluation that our system is able to generate summaries with clinical validity close to human-written ones. To our knowledge, our work represents the first attempt at directly optimizing a neural summarization system with a factual correctness objective via RL. 2 Related Work Neural Summarization Systems. Neural models for text summarization can be broadly divided into extractive approaches (Cheng and Lapata, 2016; Nallapati et al., 2016a) and abstractive approaches (Nallapati et al., 2016b; See et al., 2017). While existing models are often trained in an endto-end manner by maximizing the likelihood of the reference summaries, RL has been shown useful in recent work (Chen and Bansal, 2018; Dong et al., 2018). Specifically, Paulus et al. (2018) found that directly optimizing an abstractive summarization model on the ROUGE metric via RL can improve the summary ROUGE scores. Our work extends the rewards used in existing work with a factual correctness reward to further improve the correctness of the generated summaries. Factual Correctness in Summarization. Our work is closely related to recent work that studies factual correctness in summarization. Cao et al. (2017) proposed to improve summarization models by attending to fact triples extracted using open information extraction systems. Goodrich et al. (2019) compared different information extraction systems to evaluate the factual accuracy of generated text. Falke et al. (2019) explored using natural language inference systems to evaluate the correctness of generated summaries, and found models trained on existing datasets to be inadequate. Kry´sci´nski et al. (2019b) proposed to evaluate factual consistencies in the generated summaries using a weakly-supervised fact verification model. Despite these efforts, none of this work has shown success in directly optimizing a summarization system for factual correctness, and to our knowledge our work represents the first attempt in this direction. While our work is focused on improving neural summarization models, we note that the idea of 5110 using information extraction systems to evaluate the fidelity of generated text has also been explored for data-to-text generation (Wiseman et al., 2017; Dhingra et al., 2019). Summarization of Radiology Reports. Zhang et al. (2018) first studied the problem of automatic generation of radiology impressions by summarizing textual radiology findings, and showed that an augmented pointer-generator model achieves high overlap with human references. MacAvaney et al. (2019) extended this model with an ontologyaware pointer-generator and showed improved summarization quality. Li et al. (2019) and Liu et al. (2019) studied generating textual descriptions of radiology findings from medical images, and proposed RL-based approaches to tackle this problem. While Zhang et al. (2018) found that about 30% of the radiology summaries generated from neural models contain factual errors, improving factual correctness in radiology summarization remains unstudied. 3 Task & Baseline Pointer-Generator We start by briefly introducing the task of summarizing radiology findings. Given a passage of radiology findings represented as a sequence of tokens x = {x1, x2, . . . , xN}, with N being the length of the findings, the task involves finding a sequence of tokens y = {y1, y2, . . . , yL} that best summarizes the salient and clinically significant findings in x. In routine radiology workflow, an output sequence y is produced by the radiologist, which we treat as a reference summary sequence.1 To model the summarization process, we use the background-augmented pointer-generator network (Zhang et al., 2018) as the backbone of our method. This abstractive summarization model extends a pointer-generator (See et al., 2017) with a separate background section encoder and is shown to be effective in summarizing radiology notes with multiple sections. We briefly describe this model and refer readers to the original papers for details. At a high level, this model first encodes the input sequence x into hidden states with a Bi-directional Long Short-Term Memory (Bi-LSTM) network, and then generates an output sequence y with a separate LSTM decoder. To make the input information available at decoding time, an attention 1While the name “impression” is often used in clinical settings, we use “summary” and “impression” interchangeably. mechanism (Bahdanau et al., 2015) over the input hidden states is also added to the decoder. The baseline pointer-generator model by Zhang et al. (2018) adds two augmentations to this attentional encoder-decoder model to make it suitable for summarizing radiology findings: Copy Mechanism. To enable the model to copy words from the input, a copy mechanism (Vinyals et al., 2015; See et al., 2017) is added to calculate a generation probability at each step of decoding. This generation probability is then used to blend the original output vocabulary distribution and a copy distribution to generate the next word. Background-guided Decoding. As shown in Figure 1, radiology reports often consist of a background section which documents the crucial study background information (e.g., purpose of the study, patient conditions), and a findings section which documents clinical observations. While words can be copied from the findings section to form the summary, Zhang et al. (2018) found it worked better to separately encode the background section, and inject the representation into the decoding process by concatenating it with the input. 4 Fact Checking in Summarization Summarization models such as the one described in Section 3 are commonly trained with the teacherforcing algorithm (Williams and Zipser, 1989) by maximizing the likelihood of the reference, humanwritten summaries. However, this training strategy results in a significant discrepancy between what the model sees during training and test time, often referred to as the exposure bias issue (Ranzato et al., 2016), leading to degenerate output at test time. An alternative training strategy is to directly optimize standard metrics such as ROUGE scores (Lin, 2004) with RL and this was shown to improve summarization quality (Paulus et al., 2018). Nevertheless, this method still provides no guarantee that the generated summary is factually accurate and complete, since the ROUGE scores merely measure the superficial text overlap between two sequences and do not account for the factual alignment between them. To illustrate this, a reference sentence pneumonia is seen and a generated sentence pneumonia is not seen have substantial text overlap and thus the generated sentence would achieve a high ROUGE score, however the generated sentence conveys an entirely opposite fact. In this section 5111 Summarization Model Fact Extractor ROUGE cardiomegaly effusion edema pneumonia LNLL + λ1LR + λ2LC <latexit sha1_base64="lLBkQ0rZjAzW8tUJ2lyCmDXWTkA=">ACQXicbVBLSwMxGMz6rPV9eglWARBKLut+LgVe/FQpIrVQrcs2TRtQ7MPkm/Fsuxf8 +I/8ObdiwdFvHox2xaptgOBYWa+5Mu4oeAKTPFmJtfWFxazqxkV9fWNzZzW9u3KogkZXUaiEA2XKY4D6rAwfBGqFkxHMFu3P7ldS/u2dS8cC/gUHIWh7p+rzDKQEtObmG7RHoUSLiauLYwB4gvqxWE3yIbaFvaRPHwtOR68lAcUagkji5vFkwh8DTxBqTPBqj5uSe7XZAI4/5QAVRqmZIbRiIoFTwZKsHSkWEtonXdbU1 CceU6142EC97XSxp1A6uMDHqTEzHxlBp4rk6mu6r/XirO8poRdE5bMfDCJhPRw91IoEhwGmduM0loyAGmhAqud4V0x6RhIuPTs4SzF8e+Xp8ltsWCVCqWro3z5fFxHBu2iPXSALHSCyugC1VAdUfSIXtE7+jCejDfj0/gaReM8cwO+gPj+wcP6bFU</latexit> Severe cardiomegaly is seen. nsubj:pass ✓ … Background: patient with chest pain … Findings: persistent low lung volumes with enlarged heart. … x <latexit sha1_base64="WBQ6 po/p5LK1PKuvRDf0H0/wKrM=">AB6HicbVDLSsNAFL2pr1pfVZ duBovgqiRWfOyKbly2YB/QhjKZTtqxk0mYmYgl9AvcuFDErZ/kzr 9xkgZR64ELh3Pu5d57vIgzpW370yosLa+srhXSxubW9s75d29t gpjSWiLhDyUXQ8rypmgLc0p91IUhx4nHa8yXqd+6pVCwUt3oaU TfAI8F8RrA2UvNhUK7YVTsDWiROTiqQozEof/SHIYkDKjThWKme Y0faTbDUjHA6K/VjRSNMJnhEe4YKHFDlJtmhM3RklCHyQ2lKaJSp PycSHCg1DTzTGWA9Vn+9VPzP68Xav3ATJqJYU0Hmi/yYIx2i9Gs0 ZJISzaeGYCKZuRWRMZaYaJNKQvhMsXZ98uLpH1SdWrVWvO0Ur/ K4yjCARzCMThwDnW4gQa0gACFR3iGF+vOerJerbd5a8HKZ/bhF6z 3L/8BjTM=</latexit> rC = 0.75 <latexit sha1_base64="4iCR1DN+7qa/KI/msFy7npGgz0=">AB+nicbVDLSsNAFJ3UV62vVJduBovgKqS+qguh2I3LCvYBbQiT6aQdOnkw c6OW2E9x40IRt36JO/GJA2i1gMXDufcy73OKHgCkzUysLC4trxRXS2vrG5tbenm7rYJIUtaigQhk1yGKCe6zFnAQrBtKRjxHsI4zbqR+5ZJxQP/BiYhszwy9LnLKYFEsvWytPvA7iFuTPEFNo3aia1XTMPMgOdJNScVlKNp6x/9QUAj/lABVGqVzVDsGIigVPBpqV+pFhI6JgMWS+hPvGYsuLs 9CneT5QBdgOZlA84U39OxMRTauI5SadHYKT+eqn4n9eLwD2zYu6HETCfzha5kcAQ4DQHPOCSURCThBAqeXIrpiMiCYUkrVIWwnmK0+X50n70KgeGUfXx5X6ZR5HEe2iPXSAqiG6ugKNVELUXSHtEzetEetCftVXubtRa0fGYH/YL2/gWILJMA</latexit> Radiographs show severe cardiomegaly with plural effusions. y <latexit sha1_base64="NZ6boFnOmWjw9jRHpv0m4i6agHw=" >AB6HicbVDLSsNAFL2pr1pfVZduBovgqiQqPnZFNy5bsA9oQ5lMJ+3YySTMTIQ+gVuXCji1k9y5984SYOo9cCFwzn3cu89XsSZ0 rb9aZWldW18rlY3Nre2d6u5eR4WxJLRNQh7KnocV5UzQtma014kKQ48Trve9Cbzuw9UKhaKO51E1A3wWDCfEayN1EqG1Zpdt3O gReIUpAYFmsPqx2AUkjigQhOleo7dqTdFEvNCKezyiBWNMJkise0b6jAVumh86Q0dGSE/lKaERrn6cyLFgVJ4JnOAOuJ+utl 4n9eP9b+pZsyEcWaCjJf5Mc6RBlX6MRk5RonhiCiWTmVkQmWGKiTaVPISrDOfLy+SzkndOa2fts5qjesijIcwCEcgwMX0IBbaE IbCFB4hGd4se6tJ+vVepu3lqxiZh9+wXr/AgCUjTQ=</latexit> Severe cardiomegaly is seen. ˆy <latexit sha1_base64="pFOjU320xM+kmgOywAvmuUbI+g=" >AB7nicbVBNS8NAEJ3Ur1q/qh69BIvgqSQqftyKXjxWsB/QhrLZbtulm03YnQgh9Ed48aCIV3+PN/+NmzSIWh8MPN6bYWaeHwmu0 XE+rdLS8srqWnm9srG5tb1T3d1r6zBWlLVoKELV9YlmgkvWQo6CdSPFSOAL1vGnN5nfeWBK81DeYxIxLyBjyUecEjRSpz8hmCazQbX m1J0c9iJxC1KDAs1B9aM/DGkcMIlUEK17rhOhlxKFnAo2q/RjzSJCp2TMeoZKEjDtpfm5M/vIKEN7FCpTEu1c/TmRkDrJPBNZ0Bw ov96mfif14txdOmlXEYxMkni0axsDG0s9/tIVeMokgMIVRxc6tNJ0QRiahSh7CVYbz75cXSfuk7p7WT+/Oao3rIo4yHMAhHIMLF9 CAW2hCyhM4RGe4cWKrCfr1Xqbt5asYmYfsF6/wLKiZAB</latexit> rR = 0.36 <latexit sha1_base64="w/0w04TQoptmrWLXxrOAViAP7Mk=">AB+nicbVDLSsNAFJ3UV62vVJduBovgKqRWqi6EohuXVewD2hAm0k7dPJg 5kYtsZ/ixoUibv0Sd/6NSRpErQcuHM65l3vcULBFZjmp1ZYWFxaXimultbWNza39PJ2WwWRpKxFAxHIrkMUE9xnLeAgWDeUjHiOYB1nfJH6nVsmFQ/8G5iEzPLI0OcupwQSydbL0u4Du4f4eorPsGnU6rZeMQ0zA54n1ZxUI6mrX/0BwGNPOYDFUSpXtUMwYqJBE4Fm5b6kWIhoWMyZL2E+sRjyoqz 06d4P1EG2A1kUj7gTP05ERNPqYnJ0egZH6Xif14vAvfEirkfRsB8OlvkRgJDgNMc8IBLRkFMEkKo5MmtmI6IJBStEpZCKcp6t8vz5P2oVGtGbWro0rjPI+jiHbRHjpAVXSMGugSNVELUXSHtEzetEetCftVXubtRa0fGYH/YL2/gWa0JM</latexit> v = (0, 1, 1, 0) <latexit sha1_base64="aqxhCprdbQ8AxIyC/N2xQDHnFcY=">AB/3icbVDLSgMxFL1TX7W+RgU3boJFqCBlxoqPhVB047KCfUBbSibNtKGZB0mUMY u/BU3LhRx62+482/MTAfxdULgcM695OQ4IWdSWdaHkZubX1hcyi8XVlbX1jfMza2GDCJBaJ0EPBAtB0vKmU/rilOW6Gg2HM4bTqjq8RvjqmQLPBv1SkXQ8PfOYygpWeuZOx8Nq6LjxeIouUMk6tPWxDnpm0SpbKdBfYmekCBlqPfO90w9I5FfEY6lbNtWqLoxFoRTqeFTiRpiMkID2hbUx97VHbjNP8U7Wulj9xA6 OsrlKrfN2LsSTnxHD2ZpJW/vUT8z2tHyj3rxswPI0V9MnvIjThSAUrKQH0mKF8ogkmgumsiAyxwETpygpCecJTr6+/Jc0jsp2pVy5OS5WL7M68rALe1ACG06hCtdQgzoQuIMHeIJn4954NF6M19lozsh2tuEHjLdPrEWUDg=</latexit> ˆv = (0, 1, 0, 0) <latexit sha1_base64="JuA+nSCFu7RnRdzeoADhI+inKU0=">ACBXicbVDLSsNAFJ3UV62vqEtdDBahQikTKz4WQtGNywr2AU0ok+mkHTp5MDMplJ CNG3/FjQtF3PoP7vwbkzSIWg9cOJxzL/feYwecSYXQp1ZYWFxaXimultbWNza39O2dtvRDQWiL+NwXRtLyplHW4opTruBoNi1Oe3Y4+vU70yokMz37tQ0oJaLhx5zGMEqkfr6vjnCKjJdrEa2E03iGF7CqoaVRFR329jGoA5wnRk7KIEezr3+YA5+ELvU4VjKnoECZUVYKEY4jUtmKGmAyRgPaS+hHnaptKLsix geJsoAOr5IylMwU39ORNiVcuraSWd6rvzrpeJ/Xi9UzrkVMS8IFfXIbJETcqh8mEYCB0xQovg0IZgIltwKyQgLTFQSXCkL4SLF6fL86R9XDPqtfrtSblxlcdRBHvgAFSAc5A9yAJmgBAu7BI3gGL9qD9qS9am+z1oKWz+yCX9DevwCru5ba</latexit> Figure 2: Our proposed training strategy. Compared to existing work which relies only on a ROUGE reward rR, we add a factual correctness reward rC which is enabled by a fact extractor. The summarization model is updated via RL, using a combination of the NLL loss, a ROUGE-based loss and a factual correctness-based loss. For simplicity we only show a subset of the clinical variables in the fact vectors v and ˆv. we first introduce a method to verify the factual correctness of the generated summary against the reference summary, and then describe a training strategy to directly optimize a factual correctness objective to improve summary quality. 4.1 Evaluating Factual Correctness via Fact Extraction A convenient way to explicitly measure the factual correctness of a generated summary against the reference is to first extract and represent the facts in a structured format. To this end, we define a fact extractor to be an information extraction (IE) module, denoted as f, which takes in a summary sequence y and returns a structured fact vector v: v = f(y) = (v1, ..., vm) (1) where vi is a categorical variable that we want to measure via fact checking and m the total number of such variables. For example, in the case of summarizing radiology reports, vi can be a binary variable that describes whether an event or a disease such as pneumonia is present or not in a radiology study. Given a fact vector v output by f from a reference summary and ˆv from a generated summary, we further define a factual accuracy score s to be the ratio of variables in ˆv which equal the corresponding variables in v, namely: s(ˆv, v) = Pm i=1 1[vi = ˆvi] m (2) where s ∈[0, 1]. Note that this method requires a summary to be both precise and complete in order to achieve a high s score: missing out a positive variable or falsely claiming a negative variable will be equally penalized. Our general definition of the fact extractor module f allows it to have different realizations for different domains. For our task of summarizing radiology findings, we make use of the open-source CheXpert radiology report labeler (Irvin et al., 2019).2 At its core, the CheXpert labeler parses the input sentences into dependency structures and runs a series of surface and syntactic rules to extract the presence status of 14 clinical observations seen in chest radiology reports.3 It was evaluated to have over 95% overall F1 when compared against oracle annotations from multiple radiologists on a large-scale radiology report dataset. 4.2 Improving Factual Correctness via Policy Learning The fact extractor module introduced above not only enables us to measure the factual accuracy of a generated summary, but also provides us with an opportunity to directly optimize the factual accuracy as an objective. This can be achieved by viewing our summarization model as an agent, the actions of which are to generate a sequence of words to form the summary ˆy, conditioned on the input x.4 The agent then receives rewards r(ˆy) for its actions, where the rewards can be designed to measure the quality of the generated summary. Our goal is to learn an optimal policy Pθ(y|x) for the summarization model, parameterized by the network parameters θ, which achieves the highest expected reward under the training data. Formally, we minimize loss L, the negative ex2https://github.com/stanfordmlgroup/ chexpert-labeler 3For this study we used a subset of these variables and discuss the reasons in Appendix A. 4For clarity, we drop the bold symbol and use x and y to represent the input and output sequences, respectively. 5112 pectation of the reward r(ˆy) over the training data: L(θ) = −Eˆy∼Pθ(y|x)[r(ˆy)]. (3) The gradient can be calculated as (REINFORCE Williams, 1992): ∇θL(θ) = −Eˆy∼Pθ(y|x)[∇θ log Pθ(ˆy|x)r(ˆy)]. (4) In practice, we approximate this gradient over a training example with a single Monte Carlo sample and deduct a baseline reward to reduce the variance of the gradient estimation: ∇θL(θ) ≈−∇θ log Pθ(ˆys|x)(r(ˆys) −¯r), (5) where ˆys is a sampled sequence from the model and ¯r a baseline reward. Here we adopt the self-critical training strategy (Rennie et al., 2017), where we obtain the baseline reward ¯r by applying the same reward function r to a greedily decoded sequence ˆyg, i.e., ¯r = r(ˆyg). We empirically find that using this self-critical baseline reward helps stabilize the training of our summarization model. 4.3 Reward Function The learning strategy in Equation (5) provides us with the flexibility to optimize arbitrary reward functions. Here we decompose our reward function into two parts: r = λ1rR + λ2rC, (6) where rR ∈[0, 1] is a ROUGE reward, namely the ROUGE-L score (Lin, 2004) of the predicted sequence ˆy against the reference y; rC ∈[0, 1] is a correctness reward, namely the factual accuracy s of the predicted sequence against the reference sequence, as in Equation (2); λ1, λ2 ∈[0, 1] are scalar weights that control the balance between the two. To measure the similarity between the reference and the generation, we also experimented with more recent metrics that rely on neural representations of text, such as the BERTScore (Zhang et al., 2020). However, we found that these metrics, mostly trained on web and newswire data, generalize poorly to our domain of text. Paulus et al. (2018) found that directly optimizing a reward function without the original negative log-likelihood (NLL) objective as used in teacherforcing can hurt the readability of the generated summaries, and proposed to alleviate this problem by combining the NLL objective with the RL loss. Number of Examples Split Stanford RIH Train 89,992 (68.8%) 84,194 (60.3%) Dev 22,031 (16.8%) 25,966 (18.6%) Test 18,827 (14.4%) 29,494 (21.1%) Total 130,850 139,654 Table 1: Statistics of the Stanford and RIH datasets. Here we adopt the same strategy, and our final loss during training is: L = λ1LR + λ2LC + λ3LNLL, (7) where λ3 ∈[0, 1] is an additional scalar that controls the weight of the NLL loss. Our overall training strategy is illustrated in Figure 2. Our final loss jointly optimizes three aspects of the summaries: LNLL serves as a conditional language model that optimizes the fluency and relevance of the generated summary, LR controls the brevity of the summary and encourages summaries which have high overlap with human references, and LC encourages summaries that are factually accurate when compared against human references. 5 Experiments We collected two real-world radiology report datasets and describe our experiments using them as our main training and evaluation corpora. 5.1 Data Collection We collected anonymized chest radiographic reports within a certain period of time from two collaborating hospitals: the Stanford University Hospital and the Rhode Island Hospital (RIH).5 For both datasets, we ran simple preprocessing following Zhang et al. (2018). To test the generalizability of the models, instead of using random stratification, we stratified each dataset over time into training, dev and test splits. We include statistics of both datasets in Table 1 and preprocessing and stratification details in Appendix B. 5.2 Models As we use the augmented pointer-generator network described in Section 3 as the backbone of our method, we mainly compare against it as the 5Our retrospective study has been approved by the corresponding institutional review boards with waiver of consent. 5113 Stanford RIH System R-1 R-2 R-L Factual F1 R-1 R-2 R-L Factual F1 LexRank (Erkan and Radev, 2004) 26.8 16.3 23.6 — 20.6 10.7 18.3 — BanditSum (Dong et al., 2018) 32.7 20.9 29.0 — 26.1 14.0 23.3 — PG Baseline (Zhang et al., 2018) 48.3 38.8 46.6 55.9 54.1 44.7 52.2 69.3 PG + RLR 52.0 41.1 49.5 63.2 58.0 47.2 55.7 73.3 PG + RLC 50.7 39.7 48.0 65.9 55.2 45.4 52.9 75.4 PG + RLR+C 52.0 41.0 49.3 64.5 57.0 46.6 54.7 74.8 Table 2: Main results on the two datasets. R-1, R-2, R-L represent the ROUGE scores. PG Baseline represents our baseline augmented pointer-generator; RLR, RLC and RLR+C represent RL training with the ROUGE reward alone, with the factual correctness reward alone and with both. All the ROUGE scores have a 95% confidence interval of at most ±0.6. F1 scores for extractive models were not evaluated for the reason discussed in Section 5.3. baseline model (PG Baseline), and use the open implementation by Zhang et al. (2018). For the proposed RL-based training, we compare three variants: training with only the ROUGE reward (RLR), with only the factual correctness reward (RLC), or with both (RLR+C). All three variants have the NLL component in the training loss as in Equation (7). For all variants, we initialize the model with the best baseline model trained with standard teacher-forcing, and then finetune it on the training data with the corresponding RL loss, until it reaches the best validation score. To understand the difficulty of the task and evaluate the necessity of using abstractive summarization models, we additionally evaluate two extractive summarization methods: (1) LexRank (Erkan and Radev, 2004), a widely-used non-neural extractive summarization algorithm; and (2) BanditSum (Dong et al., 2018), a state-of-the-art RLbased neural extractive summarization model. For both methods we use their open implementations. We include other model implementation and training details in Appendix C. 5.3 Evaluation We use two sets of metrics to evaluate model performance at the corpus level. First, we use the standard ROUGE scores (Lin, 2004), and report the F1 scores for ROUGE-1, ROUGE-2 and ROUGEL, which compare the word-level unigram, bigram and longest common sequence overlap with the reference summary, respectively. For factual correctness evaluation, we use a Factual F1 score. While the factual accuracy score s that we use in the reward function evaluates how factually accurate a specific summary is, comparing it at the corpus level can be misleading, for the same reason that accuracy is a misleading measure in information retrieval (Manning et al., 2008). To understand this, imagine the case where a clinical variable v has rare presence in the corpus. A model which always generates a negative summary for it (i.e., v = 0; the disease is not present) can have high accuracy, but is useless in practice. Instead, for each variable, we obtain a model’s predictions over all test examples and calculate its F1 score. We then macro-average the F1 of all variables to obtain the overall factual F1 score of the model. Note that the CheXpert labeler that we use is specifically designed to run on radiology summaries, which usually have a different style of language compared to the radiology findings section of the reports (see further analysis in Section 7). As a result, we found the labeler to be less accurate when applied to the findings section. For this reason, we were not able to estimate the factual F1 scores on the summaries generated by the two extractive summarization models. 6 Results We first present our automatic evaluation results on the two collected datasets. We then present a human evaluation with board-certified radiologists where we compare the summaries generated by humans, the baseline and our proposed model. 6.1 Automatic Evaluation Our main results on both datasets are shown in Table 2. We first notice that while the neural extractive model, BanditSum, outperforms the non-neural extractive method on ROUGE scores, our PG baseline model substantially outperforms both of them, 5114 Variable PG Baseline RLR+C ∆ No Finding 77.3 81.5 +4.2∗ Cardiomegaly 29.5 40.4 +10.9∗ Airspace Opacity 64.6 74.9 +10.3∗ Edema 58.4 70.9 +12.5∗ Consolidation 46.3 53.2 +6.9∗ Pneumonia 46.7 46.8 +0.2 Atelectasis 48.8 56.3 +7.5∗ Pneumothorax 69.5 82.9 +13.4∗ Pleural Effusion 62.0 73.4 +11.4∗ Macro Avg. 55.9 64.5 +8.6∗ Table 3: Test set factual F1 scores for each variable on the Stanford dataset. ∗marks statistically significant improvements with p < .01 under a bootstrap test. suggesting that on both datasets abstractive summarization is necessary to generate summaries comparable to human-written ones. We further show that this difference is likely due to the different styles of language (see Section 7): while radiologists tend to use more compressed language when writing the summaries, extractive methods produce more verbose summaries that fail to capture this difference. On the Stanford dataset, training the pointergenerator model with ROUGE reward alone (RLR) leads to improvements on all ROUGE scores, with a gain of 2.9 ROUGE-L scores. Training with the factual correctness reward alone (RLC) leads to the best overall factual F1 with a substantial gain of 10% absolute, however with consistent decline in the ROUGE scores compared to RLR training. Combining the ROUGE and the factual correctness rewards (RLR+C) achieves a balance between the two, leading to an overall improvement of 2.7 on ROUGE-L and 8.6% on factual F1 compared to the baseline. This indicates that RLR+C training leads to both higher overlap with references and improved factual correctness. Most surprisingly, while ROUGE has been criticized for its poor correlation with human judgment of quality and insufficiency for evaluating correctness of the generated text (Chaganty et al., 2018), we find that optimizing ROUGE reward jointly with NLL leads to substantially more factually correct summaries than the baseline, shown by the notable gain of 7.3% factual F1 from the RLR training. All of our findings are consistent on the RIH dataset, with RLR+C achieving an overall improveStanford Dataset Background: radiographic examination of the chest ... Findings: continuous rhythm monitoring device again seen projecting over the left heart. persistent low lung volumes with unchanged cardiomegaly. again seen is a diffuse reticular pattern with interstitial prominence demonstrated represent underlying emphysematous changes with superimposed increasing moderate pulmonary edema. small bilateral pleural effusions. persistent bibasilar opacities left greater than right which may represent infection versus atelectasis. Human: increased moderate pulmonary edema with small bilateral pleural effusions. left greater than right basilar opacities which may represent infection versus atelectasis. PG Baseline (s = 0.33): no significant interval change. RLR+C (s = 1.00): increasing moderate pulmonary edema. small bilateral pleural effusions. persistent bibasilar opacities left greater than right which may represent infection versus atelectasis. RIH Dataset Background: history: lobar pneumonia, unspecified organism ... Findings: lines/tubes: none. lungs: ::: right:::: middle::: lobe::::: airspace::::: disease seen on prior radiographs from <date> and <date> is:: no ::: longer::::: evident. bilateral lungs appear clear. pleura: there is no pleural effusion or pneumothorax. heart and mediastinum: no cardiomegaly. thoracic aorta appears calcified and mildly tortuous. bones: ... Human: no acute cardiopulmonary abnormality. PG Baseline (s = 0.75): ::: right :::: middle::: lobe ::::: airspace :::: disease could represent atelectasis, aspiration or pneumonia. RLR+C (s = 1.00): no acute cardiopulmonary abnormality. Figure 3: Truncated examples from the test sets along with human, PG baseline and RLR+C outputs. Factual accuracy scores (s) are also shown for the model outputs. For the Stanford example, clinical observations in the summaries are marked for clarity; for RIH, :a ::::::: wrongly ::::: copied:::::::::: observation is marked. ment of 2.5 ROUGE-L and 5.5% factual F1 scores. Fine-grained Correctness. To understand how improvements in individual variables contribute to the overall improvement, we show the fine-grained factual F1 scores for all variables on the Stanford dataset in Table 3 and include results on the RIH dataset in Appendix D. We find that on both datasets, improvements in RLR+C can be observed on all variables tested. We further find that, as we change the initialization across different training runs, while the overall improvement on factual F1 stays approximately unchanged, the distribution of the improvement on different variables can vary substantially. Developing a training strategy for fine-grained control over different variables is an interesting direction for future work. Qualitative Results. In Figure 3 we present two example reports along with the human references, the PG baseline outputs and RLR+C outputs. In the first example, while baseline output seems generic and does not include any meaningful observation, the summary from the RLR+C model aligns well with the reference, and therefore achieves a higher 5115 Metric Win Tie Lose Our Model vs. PG Baseline Fluency 7% 60% 33% Factual Correctness 31% 55% 14% Overall Quality 48% 24% 28% Our Model vs. Human Reference Fluency 17% 54% 29% Factual Correctness 23% 49% 28% Overall Quality 44% 17% 39% Table 4: Results of the radiologist evaluation. The top three rows present results when comparing our RLR+C model output versus the baseline model output; the bottom three rows present results when comparing our model output versus the human-written summaries. factual accuracy score. In the second example, the baseline model wrongly copied an observation from the findings although the actual context is no longer evident, while the RLR+C model correctly recognizes this and produces a better summary. 6.2 Human Evaluation To study whether the improvements in the factual correctness scores lead to improvement in summarization quality under expert judgment, we run a comparative human evaluation following previous work (Chen and Bansal, 2018; Dong et al., 2018; Zhang et al., 2018). We sampled 50 test examples from the Stanford dataset, and for each example we presented to two board-certified radiologists the full radiology findings along with blinded summaries from (1) the human reference, (2) the PG baseline and (3) our RLR+C model. We shuffled the three summaries such that the correspondence cannot be guessed, and asked the radiologists to compare them based on the following three metrics: (1) fluency, (2) factual correctness and completeness, and (3) overall quality. For each metric we asked the radiologists to rank the three summaries, with ties allowed. After the evaluation, we converted each ranking into two binary comparisons: (1) our model versus the baseline model, and (2) our model versus human reference. The results are shown in Table 4. Comparing our model against the baseline model, we find that: (1) in terms of fluency our model is less preferred, although a majority of the results (60%) are ties; (2) our model wins more on factual correctness and overall quality. Comparing our model against System Stanford pplx. RIH pplx. Human 6.7 5.5 LexRank 10.8 36.9 BanditSum 9.9 40.9 PG Baseline 4.8 3.8 PG + RLR+C 6.5 4.8 Table 5: Perplexity scores as evaluated by the trained radiology impression LM on the test set human references and model predictions. human references, we find that: (1) human wins more on fluency; (2) factual correctness results are close, with 72% of our model outputs being at least as good as human; (3) surprisingly, in terms of overall quality our model was slightly preferred by the radiologists compared to human references. Lastly, when comparing the baseline model against human references, we find that outputs from the baseline model are much less correct and lowerquality than human summaries. 7 Analysis & Discussion Fluency and Style of Summaries. Our human evaluation results in Section 6.2 suggest that in terms of fluency our model output is less preferred than human reference and baseline output. To further understand the fluency and style of summaries from different models at a larger scale, we trained a neural language model (LM) for radiology summaries following previous work (Liu et al., 2018). Intuitively, radiology summaries which are more fluent and consistent with humans in style should be able to achieve a lower perplexity under this in-domain LM, and vice versa. To this end, we collected all human-written summaries from the training and dev split of both datasets, which in total gives us about 222,000 summaries. We then trained a strong Mixture of Softmaxes LM (Yang et al., 2018) on this corpus, and evaluated the perplexity of test set outputs for all models. The results are shown in Table 5. We find that while extractive models can achieve non-trivial overlap with references, their perplexity scores tend to be much higher than humans. We conjecture that this is because radiologists are trained to write the summaries with more compressed language than when they are writing the findings, therefore sentences directly extracted from the findings tend to be more verbose than needed. 5116 0 1 2 3 4 Top 10 trigrams (most frequent on the left) Ratio in outputs (%) Human RLR+C PG Baseline Figure 4: Distributions of the top 10 most frequent trigrams from model outputs on the Stanford test set. We further observe that the baseline model achieves even lower perplexity than humans, and our proposed method leads to a perplexity score much closer to human references. We hypothesize that this is because models trained with teacherforcing are prone to generic generations which are fluent and relevant but may not be factually correct. Training with the proposed rewards alleviates this issue, leading to summaries more consistent with humans in style. For example, we find that no significant interval change is a very frequent generation from the baseline, regardless of the actual input. This sentence occurs in 34% of the baseline outputs on the Stanford dev set, while the number for RLR+C and human are only 24% and 17%. This hypothesis is further confirmed when we plot the distribution of the top 10 most frequent trigrams from different models in Figure 4: while the baseline heavily reuses the few most frequent trigrams, our model RLR+C tends to have more diverse summaries which are closer to human references. The same trends are observed for 4-grams and 5-grams. Limitations. While we showed the success of our proposed method on improving the factual correctness of a radiology summarization model, we also recognize several limitations of our work. First, our proposed training strategy crucially depends on the availability of an external IE module. While this IE module is relatively easy to implement for a domain with a limited space of facts, how to generalize this method to open-domain summarization remains unsolved. Second, our study was based on a rule-based IE system, and the use of a more robust statistical IE model can potentially improve the results. Third, we mainly focus on key factual errors which result in a flip of the binary outcome of an event (e.g., presence of disease), whereas factual errors in generated summaries can occur in other forms such as wrong adjectives or coreference errors (Kry´sci´nski et al., 2019a). We leave the study of these problems to future work. 8 Conclusion In this work we presented a general framework and a training strategy to improve the factual correctness of neural abstractive summarization models. We applied this approach to the summarization of radiology reports, and showed its success via both automatic and human evaluation on two separate datasets collected from hospitals. Our general takeaways include: (1) in a domain with a limited space of facts such as radiology reports, a carefully implemented IE system can be used to improve the factual correctness of neural summarization models via RL; (2) even in the absence of a reliable IE system, optimizing the ROUGE metrics via RL can substantially improve the factual correctness of the generated summaries. We hope that our work draws the community’s attention to the factual correctness issue of abstractive summarization models and inspires future work in this direction. Acknowledgments The authors would like to thank the anonymous reviewers, Peng Qi and Urvashi Khandelwal for their helpful comments, and Dr. Jonathan Movson for his help with obtaining the RIH data used in this study. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. The 2015 International Conference on Learning Representations. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2017. Faithful to the original: Fact aware neural abstractive summarization. The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-2017). Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evalaution. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). 5117 Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, and Jackie Chi Kit Cheung. 2018. BanditSum: Extractive summarization as a contextual bandit. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018). G¨unes Erkan and Dragomir R Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457–479. Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 19). Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. 2019. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019). Charles E Kahn Jr, Curtis P Langlotz, Elizabeth S Burnside, John A Carrino, David S Channin, David M Hovsepian, and Daniel L Rubin. 2009. Toward best practices in radiology reporting. Radiology, 252(3):852–856. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In The 2015 International Conference for Learning Representations. Wojciech Kry´sci´nski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019a. Neural text summarization: A critical evaluation. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019). Wojciech Kry´sci´nski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019b. Evaluating the factual consistency of abstractive text summarization. arXiv preprint arXiv:1910.12840. Christy Y. Li, Xiaodan Liang, Zhiting Hu, and Eric P. Xing. 2019. Hybrid retrieval-generation reinforced agent for medical image report generation. Advances in neural information processing systems. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out: ACL Workshop. Guanxiong Liu, Tzu-Ming Harry Hsu, Matthew McDermott, Willie Boag, Wei-Hung Weng, Peter Szolovits, and Marzyeh Ghassemi. 2019. Clinically accurate chest X-ray report generation. arXiv preprint arXiv:1904.02633. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. The 2018 International Conference for Learning Representations. Sean MacAvaney, Sajad Sotudeh, Arman Cohan, Nazli Goharian, Ish Talati, and Ross W. Filice. 2019. Ontology-aware clinical abstractive summarization. Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 19). Christopher D Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to information retrieval. Cambridge University Press. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2016a. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-2017). Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016b. Abstractive text summarization using sequence-to-sequence RNNs and beyond. Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. 5118 Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. The 2016 International Conference on Learning Representations. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008–7024. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In The 2017 Annual Meeting of the Association of Computational Linguistics (ACL 2017). Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256. Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270– 280. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2018. Breaking the softmax bottleneck: A high-rank RNN language model. The 2018 International Conference for Learning Representations. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In The 2020 International Conference for Learning Representations. Yuhao Zhang, Daisy Yi Ding, Tianpei Qian, Christopher D. Manning, and Curtis P. Langlotz. 2018. Learning to summarize radiology findings. In EMNLP 2018 Workshop on Health Text Mining and Information Analysis. 5119 Time Coverage Split Stanford RIH Train 2009/01 – 2014/04 2017/11 – 2018/06 Dev 2014/05 – 2014/08 2018/07 – 2018/09 Test 2014/09 – 2014/12 2018/10 – 2018/12 Table 6: Time coverage of different splits in the Stanford and RIH datasets. A Clinical Variables Inclusion Criteria While the CheXpert labeler that we use is able to extract status for 14 clinical variables, we found that several variables are very rarely represented in our corpora and therefore using all of them makes the calculation of the factual F1 score very unstable. For example, we found that training the same model using different random initializations would result in highly varying F1 scores for these variables. For this reason, for both datasets we removed from the factual F1 calculation all variables which have less than 3% positive occurrences on the validation set. We further removed the variables “Pleural Other” and “Support Devices” due to their ambiguity. This process results in a total of 9 variables for the Stanford dataset and 8 for the RIH dataset. Additionally, apart from the positive and negative status, the CheXpert labeler is also able to generate an uncertain status for a variable, capturing observations with uncertainty, such as in the sentence “pneumonia is likely represented”. While we can modify the factual accuracy score to take uncertainty into account, for simplicity in this work we do not make the distinction between a positive status and an uncertain status. B Dataset Preprocessing and Stratification Details We preprocessed both the Stanford and the RIH datasets following Zhang et al. (2018). All reports were first tokenized with Stanford CoreNLP (Manning et al., 2014). We then filtered the datasets by excluding reports where (1) no findings or impression (i.e., summary) section can be found; (2) multiple findings or impression sections can be found but cannot be aligned; or (3) the findings have fewer than 10 words or the impression has fewer than 2 words. Lastly, we replaced all date and time mentions with special tokens (e.g., <DATE>). For both datasets, we stratified them over time Variable PG Baseline RLR+C ∆ No Finding 91.0 92.0 +1.0∗ Cardiomegaly 21.1 33.8 +12.7∗ Airspace Opacity 80.4 83.5 +3.1∗ Edema 73.4 80.2 +6.8∗ Pneumonia 63.5 69.2 +5.7∗ Atelectasis 60.5 66.5 +6.0∗ Pneumothorax 89.7 93.2 +3.5∗ Pleural Effusion 74.3 79.9 +5.6∗ Macro Avg. 69.3 74.8 +5.5∗ Table 7: Test set performance for each variable on the RIH dataset. All numbers are F1 scores. ∗marks statistically significant improvements with p < .01 under a bootstrap test. into training, dev and test splits. We employed this stratification strategy to test whether our model generalizes to future data when trained on historical data. We show the time coverage of each split in Table 6. C Model Implementation and Training Details For the baseline background-augmented pointergenerator model, we use its open implementation.6 We use a 2-layer LSTM as the findings encoder, 1-layer LSTM as the background encoder, and a 1layer LSTM as the decoder. For all LSTMs we use a hidden size of 200. For the embedding layer we use 100-dimensional GloVe vectors (Pennington et al., 2014) which we pretrained on about 4 million radiology reports. We apply dropout (Srivastava et al., 2014) with p = 0.5 to the embeddings. At decoding time, we use the standard beam search with a beam size of 5 and a maximum decoding length of 50. For the training and finetuning of the models, we use the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 1e−3. We use a batch size of 64 and clip the gradient with a norm of 5. During training we evaluate the model on the dev set every 500 steps and decay the learning rate by 0.5 whenever the validation score does not increase after 2500 steps. Since we want the model outputs to have both high overlap with the human references and high factual correctness, for training we always use the average of the dev ROUGE score 6https://github.com/yuhaozhang/ summarize-radiology-findings 5120 Stanford Dataset Background: radiographic examination of the chest: <date> <time>. clinical history: <age> years of age, with concern for pulmonary edema. procedure comments: 3 single views of the chest... Findings: in the first chest radiograph from <date> at <time> there is interval intubation. left arm-picc line remains in place. grossly unchanged persistent cardiomegaly, bilateral pleural effusion, and mild pulmonary edema. severe djd of the left gh joint is noted. in the second chest radiograph there is interval placement of a trialysis catheter in the left ij. no other significant changes are noted. in the third chest radiograph from <date> at <time> there is an increased left basilar opacity likely reflecting basilar consolidation, atelectasis or aspiration. Human: in the final chest radiograph there is increased left basilar opacity likely reflecting basilar consolidation, atelectasis or aspiration. PG Baseline: interval intubation with placement of a trialysis catheter in the left ij. grossly unchanged cardiomegaly, bilateral pleural effusion, and mild pulmonary edema. RLR+C: interval placement of a trialysis catheter in the left ij. an increased left basilar opacity likely reflecting basilar consolidation, atelectasis or aspiration or aspiration. RIH Dataset Background: post op cardiac surgery - check lines and tubes. technique: single view of the chest obtained at <time> <date>... Findings: lines/tubes: right ij sheath with central venous catheter tip overlying the svc. on initial radiograph, endotracheal tube between the clavicular heads, and enteric tube with side port at the ge junction and tip below the diaphragm off the field-of-view; these are removed on subsequent film. mediastinal drains and left thoracostomy tube are unchanged. lungs: low lung volumes. retrocardiac airspace disease, slightly increased on most recent film. pleura: small left pleural effusion. no pneumothorax. heart and mediastinum: postsurgical widening of the cardiomediastinal silhouette. aortic arch calcification. bones: intact median sternotomy wires. Human: left basilar airspace disease and small left pleural effusion. lines and tubes positioned as above. PG Baseline: lines and tubes as above. retrocardiac airspace disease, which may:::::: represent :::::: atelectasis,:::::: aspiration,:: or :::::: pneumonia. RLR+C: lines and tubes as described above. retrocardiac airspace disease, slightly increased on most recent film. small left pleural effusion. Figure 5: More examples from the test splits of both datasets along with human, PG baseline and RLR+C summaries. In the first example, the baseline output successfully copied content from the context, but missed important observations. In the second example, the baseline output included some :::::: spurious::::: facts that were not mentioned, and again neglected some important observations. In neither examples the RLR+C outputs make perfect summaries, but they represent better summaries than the baseline outputs. and the dev factual F1 score as the stopping criteria. We tune the scalar weights in the loss function on the dev sets and use weights of λ1 = 0.97, λ2 = 0.97 and λ3 = 0.03 for both datasets. For the extractive LexRank and BanditSum models, we use their open implementations.7 For the BanditSum extractive summarization model, we use default values for all hyperparameters as in Dong et al. (2018). For both models we select the top 3 scored sentences to form the summary, which yields the highest ROUGE-L scores on the dev sets. For ROUGE evaluation, we use the Python 7https://github.com/miso-belica/sumy; https://github.com/yuedongP/BanditSum ROUGE implementation released by Google Research.8 We empirically find it to provide very close results to the original Perl ROUGE implementation by Lin (2004). D Fine-grained Correctness Results on the RIH Dataset We show the fine-grained factual F1 scores for all variables on the RIH dataset in Table 7. E More Examples with Baseline and System Generations In Figure 5 we present more examples from both datasets along with the generations from the baseline system and our approach. 8https://github.com/google-research/ google-research/tree/master/rouge
2020
458
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5121–5134 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5121 Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset Revanth Rameshkumar Microsoft [email protected] Peter Bailey Microsoft [email protected] Abstract This paper describes the Critical Role Dungeons and Dragons Dataset (CRD3) and related analyses. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an openended role-playing game. The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail, and semantic ties to the previous dialogues. In addition, we provide a data augmentation method that produces 34,243 summarydialogue chunk pairs to support current neural ML approaches, and we provide an abstractive summarization benchmark and evaluation. 1 Introduction Artificial intelligence applied to human conversation remains an incredibly challenging task in computer science. Task-oriented dialogues, which are more narrowly scoped and information dense than conversational dialogue, have been the focus of recent progress in dialogue understanding (Budzianowski et al., 2018). A difficulty for hypothesis testing on non-task oriented dialogues is a lack of large datasets that are fully representative of the spontaneity and noise of real world conversation, especially in the areas of storytelling and narrative beyond long-form text or monologue. Many potential dialogue processing tasks involve multi-speaker dialogues where narrative elements are conveyed through interaction between two or more speakers. These narrative elements can include changes in the states of narrative objects, Sample Dialogue Chunk 0 TRAVIS: “i felt like i almost died and i had n’t taken care of any of the shit that got me here in the first place . i was so worried about trying to learn about these new abilities that – i felt like i got distracted . i have people i want to find and things i want to remedy .” 1 MARIHSA: “yeah . how did jester do ? no offense , but she seems like she ’s a little bit more willfully stronger than you are .” 2 TRAVIS: “i mean , fuck , it ’s really disturbing . like , she came out of there like a little kettle of popcorn , just no problem . i mean – can i see jester ? is she nearby ?” 3 MATT: “jester , are you nearby ?” 4 LAURA: “i ’m across the bar just fucking dancing alone . -lrb- laughter -rrb- .” 5 LIAM: “just sixteen candles-ing it .” 6 MARIHSA: “yep .” 7 TRAVIS: “i was worried . there were really dark times . i would hear jester singing to herself at night and then she ’d change lyrics , and then my name would be in the lyrics sometimes . every morning , she would try and cheer everybody up that was around her , but she had the muffle ? so i could n’t tell if my brain was playing tricks on me , or if she was just – i do n’t think there ’s much that gets her down . it ’s kind of inspiring .” Aligned Summary Chunk 0 “beau asks about jester .” 1 “fjord says he is amazed but disturbed at how well jester seems to be doing .” 2 “he says jester would try to cheer everyone up and sing , even though her mouth was gagged most of the time .” 3 “he looks over to see jester dancing alone by the end of the bar .” Figure 1: A tokenized dialogue chunk and the associated human written summary chunk after the text alignment process. Jester, Beau, and Fjord are the aliases for Laura, Marisha, and Travis respectively. descriptions of events, or changes in the states of speakers themselves. Some explored sub-tasks for narrative understanding are topic understanding, character state tracking, and abstractive summarization. Though progress has been made in these areas, it has been on datasets where conversation has been constrained to specific topics, constrained by 5122 medium of communication, or scripted (in the case of television or movies) (Forchini, 2009). With datasets that involve naturally occurring dialogue, the small amount of data per narrative or speaker makes modeling challenging. 1.1 Critical Role Episodes and Wiki The Critical Role show1 is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game. Critical Role is set in a fictional world created by the Dungeon Master (DM) Matthew Mercer. Separate from Matthew, there are eight other players who participate in his world as role-played characters; whose actions in the game influence the fictional world (as per the DM) along with their own character’s state. There are multiple objectives to the game, both hidden and explicitly stated by both parties. For example, the DM might explicitly state a quest for the players to complete or a player’s character might have an explicit personal goal that needs to be met. Examples of implicit objectives are non-player characters objectives created by the DM, and a player’s character’s backstory that influence their actions. This definition and expansion of the fictional world, the interaction with the world, and the development of the narrative is done entirely through unscripted spoken dialogue between the DM and the other players. Fans have maintained dialogue transcriptions for each episode as well as an online knowledge base (the Fandom wiki2) where details about the players, characters, world, and game sessions are continuously added to. By extracting dialogues from the Critical Role transcripts, CRD3 aims to provide the community with a narrative-centered dataset that is unscripted, noisy, and spontaneous; while being coherent, consistent in latent speaker attributes and personalities, and considerably longer in dialogue length than similar conversational dialogue datasets. From the wiki, we obtain humanauthored, structured summaries for each episode that support tasks of narrative understanding and extraction, topic understanding and segmentation, and summarization from conversational dialogue. 1.2 Contributions We make five contributions in this paper. First, we produce a cleaned and structured dialogue dataset 1critrole.com 2criticalrole.fandom.com extracted from the Critical Role transcripts (CRD3Dialogues)3. Second, we provide corresponding structured abstractive summaries for each episode, mined from the Fandom wiki (CRD3-Summaries). Third, we analyze the dataset and compare it to similar datasets. Fourth, we describe our method of data augmentation via text alignment to make this data scale-appropriate for neural ML approaches, and provide these summary-dialogue chunk pairs (CRD3-SD-pairs). Finally, we construct an abstractive summarization baseline from these pairs and discuss its evaluation (CRD3-Baseline). We believe that better abstractive summarization tools to distill information is essential given the ongoing growth of unscripted, multi-person dialogues in entertainment and business scenarios. We hope that CRD3 will support research and development for such tools. 2 Related Work The Critical Role Dungeons and Dragons Dataset is a combination of story-telling dialogues structured around the game-play of Dungeons and Dragons and corresponding abstractive summarizations for each dialogue. As such, it can be compared to existing dialogue datasets and summarization datasets. 2.1 Dialogue Datasets There are currently many existing dialogue datasets (disregarding machine-to-machine) that can be roughly grouped into task-oriented, conversational, scripted, constrained, and spontaneous dialogues (Serban et al., 2015). Task-oriented datasets address specific tasks and are constrained by an ontology (Budzianowski et al., 2018). If the task is sufficiently constrained, even a human-to-human task-oriented dialogue can lack spontaneity and noise of open domain conversation (Haber et al., 2019), (Vaidyanathan et al., 2018), (Lison and Tiedemann, 2016). Agents trained on such datasets cannot be expected to model spontaneous conversational dialogue. Scripted dialogue datasets are closer to conversational dialogue. Popular scripted dialogues come from TV shows, movies, and novels; sometimes featuring further annotations (Poria et al., 2019a), (Lison and Tiedemann, 2016), (Banchs, 2012). Though the lack of noise can be helpful in training a dialogue system, they do contain artificialities in their linguistic properties (Forchini, 2009). With datasets that do have 3github.com/RevanthRameshkumar/CRD3 5123 natural conversation, either with provided topics (Rashkin et al., 2019), (Godfrey et al., 1992), (Carletta et al., 2006) or truly naturally occurring (Ritter et al., 2010),(Schrading et al., 2015), (Li et al., 2017), (Leech, 1992), (Misra et al., 2015), the larger scope and noise along with the small amount of data for individual domains, latent speaker attributes, and linguistic attributes make tasks like response generation, abstractive summarization, and speaker personality modeling more difficult (Vinyals and Le, 2015), (Black et al., 2011), (Stent et al., 2005), (Poria et al., 2019b). Story-telling and game-playing dialogues can have properties from both task-oriented and conversational dialogues, as they have specific topics or tasks and are primarily human-to-human (Gratch et al., 2007), (Hung and Chittaranjan, 2009), (Afantenos et al., 2012), (Djalali et al., 2012), (Hu et al., 2016). In storytelling dialogues there is a clear topic constraint and purpose of conveying narratives. In game-play dialogues, there are clear tasks that the speakers try to complete, to either win or progress the game. This helps reduce topic noise and increase information density, but retains natural noise like disfluencies, false starts, fragments, and spontaneity. CRD3 has extensive storytelling and narrative building through dialogue, as well as game-playing since Dungeons and Dragons is the show’s focus. The episodes are unscripted and live-streamed, so the dialogue is naturally occurring and contains a large amount of context-switching and chit-chat. Since it is spoken then transcribed to text, there exists linguistic noise as usually present in naturally spoken dialogue. Finally, the large amount of turns combined with consistent cast and persistent environments make modelling based on latent speaker and linguistic attributes more feasible. 2.2 Abstractive Summarization Datasets Most of the recent abstractive summarization research is conducted on document datasets (news, scientific papers, and patents) (Hermann et al., 2015), (Cohan et al., 2018), (Sharma et al., 2019). However, the methods used to perform well in these domains are less effective in dialogue (movies, personal-interviews, multi-person dialogues, etc) (Kedzie et al., 2018). As (Narayan et al., 2018) noted, many of the current summarization datasets highly reward extractive approaches due to the large amount of phrasal overlap in document and summary. Dialogue summarization is underexplored in datasets. For abstractive summarization, the most popular spoken dialogue datasets are AMI and Switchboard. Others exist, but are more constrained or purely textual, (Zhou et al., 2018), (Gella et al., 2018), (Misra et al., 2015), (Louis and Sutton, 2018), (Pan et al., 2018). Notably, (Gorinski and Lapata, 2015), (Gorinski and Lapata, 2018) combine movie scripts with Wikipedia plot summaries and other metadata. Though this brings us closer to longer form abstractive dialogue summarization data, there is significant information about the plot conveyed through script notes and descriptions, and not spoken dialogue. 3 Data Collection and Preprocessing 3.1 Dungeons and Dragons Briefly, Dungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Players create characters to participate in a fictional world created by the Dungeon Master (DM). They interact with the world entirely through dialogue with the DM and use dice rolls as a way to introduce randomness to the consequences of their actions. Actions can include exploring the environment, talking to fictional characters (role played by the DM), battle, and puzzle solving.4 3.2 Critical Role Video Stream Transcripts The CRD3 dataset consists of 159 episodes (dialogues) from two campaigns. Campaign 1 has 113 episodes and Campaign 2 has 46 episodes, with new episodes being actively added. The episodes are unscripted and live-streamed, then archived and transcribed; they are usually several hours long. Detailed episode information can be found on the Fandom wiki5. The episodes usually start with some out-of-narrative logistics, then proceed to the actual D&D game where the players communicate character action by in-character role-playing or by describing the characters’ actions in third person. There is also substantial out of narrative chit-chat and context switching. For each episode, we extract the names and turns from the dialogue transcript and clean the data as much as possible. We try to resolve the inconsistencies in spelling of speaker names, use of quotes, onomatopoeia, speaker aliases (and character aliases), parse multiple speakers for turns if needed, and others that exist due to the transcripts 4dnd.wizards.com/dungeons-and-dragons 5criticalrole.fandom.com/wiki/List of episodes 5124 Metric CRD3 MELD M. WOZ AMI CNN DailyMail Dialogue Count 159 190 10438 142 92465 219506 Turn Count 398682 13708 143048 79672 3074340 6189038 Total token count in dialogues 5056647 120913 1886018 706803 60476397 154282948 Unique token count in dialogues 42509 6251 20197 9958 341451 596032 Avg. turns per dialogue 2507.4 72.2 13.7 561.1 33.4 28.2 Avg. tokens per turn 12.7 8.82 13.2 8.9 19.7 24.9 Total token count in summaries 327899 22965 3897045 11308821 Avg. tokens per summary 2062.3 161.7 42.1 51.5 Avg. summary:dialogue token ratio 0.065 0.038 0.085 0.087 Table 1: We compare CRD3 with other similar datasets. MELD, Multi-WOZ, and AMI are dialogue datasets. We use the subset of the AMI dialogues with available abstractive summaries. CNN and Daily Mail are abstractive summarization datasets for news articles (we treat an article as a dialogue and a sentence as a turn). being written over time by fans. We also replace all instances of character aliases in the speaker field with the real speakers’ names to reduce noise. Along with the cleaned data, we provide the raw transcription data to document the changes via diff. 3.3 Critical Role Episode Summaries The summaries for each episode were mined from the Critical Role Fandom wiki. The summaries are unique in that they are structured and offer different levels of summarization. Most episodes have a (1) wiki opening blurb, which offers briefest level of summarization. This is followed by a synopsis section which is (usually) comprised of several parts: (2) pre-show and announcements, where some logistical information is mentioned; (3) recap, where the previous episode is summarized (usually done by Matt in the episode and is narrative focused); and (4) the episode’s plot which is the largest part and summarizes the narrative developments of the episode. The plot sections are also usually divided into sub-sections aligned to narrative topics. Sometimes the wiki also has a break and post-episode sections (usually non-narrative), which we include in the dataset. 3.4 Analysis and Comparison Refer to Table 1 for turn and token count comparisons. CRD3’s total turn count, turns per dialogue, and unique token count are substantially larger than MELD (Poria et al., 2019a) (scripted Friends TV show dataset), Multi-WOZ (Budzianowski et al., 2018) (unscripted task-oriented dialogue dataset), and AMI (Carletta et al., 2006) (unscripted meetings dataset). For AMI, we only consider the dialogues with available abstractive summaries 6. Multi-WOZ is dyadic while AMI, MELD, and CRD3 have multiple speakers per dialogue. 6github.com/gcunhase/AMICorpusXML We extract 72 total speakers from the entire CRD3 dataset; 9 of which are the main cast (players and DM) and make up 99.48% of the total turns; the DM alone makes up 111,994 turns. In comparison, the 6 main cast of MELD make up 83.27% of the total turns. In addition to real (human) speakers, there are also purely in-game characters role-played by the DM. The indication of the DM role-playing through the use of quotes seem to be mostly consistent in the transcripts. As a loose measure of role-playing, we find the turns that contain quotes from the DM (≈21383) and compare to all other players (≈2497). A core aspect of the game is players querying the DM, so we also measure the instances of questions from a player (turn ending in ‘?’) followed by a DM response; a mean of 199 per dialogue with 58 standard deviation. Finally, we apply the spaCy English NER model on all dialogues as a loose measure of named entity presence. We get a mean of 1275 entities per dialogue with standard deviation of 344.5. For the summaries, we measure the token counts per summary and compare to AMI, CNN, and Daily Mail (Table 1). Again, CRD3 is substantially larger (though smaller in total tokens than the news datasets). The news datasets also feature more summary-article pairs, making them more amenable to current neural ML approaches; we address this for CRD3 in Section 4. We also measure the compression of the original text to summary via ratio of tokens per summary to tokens per original text and find they correspond to the ratios of total tokens to unique tokens. Finally, we measure the average token count and standard deviation of each section of the structured summaries for the CRD3 dataset (outlined in Section 3.3): (1) Wiki opening blurb: 50 ± 16.7; (2) pre-show and announcements: 183 ± 254; (3) recap: 335 ± 123.9; and (4) episode plot: 1544 ± 1553.7. 5125 4 Scaling up the Dialogue Summaries The CRD3 dataset can be applied to many tasks, but we find abstractive dialogue summarization the most compelling task to explore in this paper. Due to the extensive length of the dialogues and summaries, and the frequent context switching and noise, we are presented with challenges that are poorly addressed by the current modeling and evaluation methods: 1. The dataset has relatively few episodes (159); as is, this is not enough samples to train, test, and validate using current neural approaches. 2. The current, most successful summarization approaches do not explicitly attempt to capture coreference, semantics, and pragmatics in very long documents or conversations. 3. Current automatic summarization evaluation methods have specific failures in evaluating narrative summarization. We do not attempt to propose a solution for either the second or third challenges, as they are beyond the scope of this paper. Instead, we address the first challenge by proposing a novel data augmentation method to dramatically scale up the number of available summary-dialogue turn sequence pairs. That outcome enables the community to start modeling and evaluation for the dialogue summarization task and we discuss initial benchmark results over this augmented set in Section 5. 4.1 Data Augmentation via Text Alignment We found that the summaries written by fans on the wiki are detailed, mostly ordered with respect to the corresponding episode, and mostly non-repetitive. Due to the large number of sentences in the summaries, we can break up the summaries into chunks and align each chunk to some continuous segment of the dialogue. Formally, given dialogue D consisting of T turns {ti|i ∈1 . . . T} and summary S split into n contiguous chunks {si|i ∈1 . . . n}, we try to determine A = {ai|i ∈1 . . . n} where ai is a contiguous set of turns from D (ai = tj:k) and where tj and tk (j ≤k) are the earliest and latest turns in D to align to si; refer to Figure 2. To determine A, we try two approaches. Greedy Algorithm We make an independence assumption for all s and t and try to maximize an alignment score, α(A; S, β), where β(s, a) calculates an alignment score between a single s and a. Figure 2: Chunking and mapping of C contiguous summary sentences onto the T turns of the dialogue. The greedy approach (left) has no order or contiguity constraint. The Needleman-Wunsch approach (right) has strict order and contiguity constraints. α(A; S, β) = n X i=0 max 0≤c≤T 0≤w≤14 (β(s, tc−w:c+w)) (1) where bounds for w are determined empirically. For several dialogues, we tested 0 ≤w ≤T, but this had no change in the final assignments A and greatly increased computation time. To choose β, we tried several scoring functions including variations of ROUGE (Lin, 2004), variations of TF-IDF (Jones, 1988), and other n-gram overlap scorings. We selected a scaled version of ROUGE-F1 score: β(s, a) = |τ(s) ∩τ(a)| ∗ROUGEF1 = 2 ∗|τ(s) ∩τ(a)|2 |τ(s)| + |τ(a)| (2) where τ is a tokenization function for the given text. The scaling via |τ(s) ∩τ(a)| term gives extra importance to the absolute token overlap count. To calculate the tokens, we found just unigrams and bigrams gave us the least noisy alignments. We also found lemmatization and stop-word removal greatly reduces the alignment quality because of the large number of n-grams (≥2) from the turn windows that are directly used in the summaries. In Figure 3(a), we plot the turn indices as a function of the summary chunk indices. We notice the greedy alignment approach can largely preserve the order of the summary chunks relative to the dialogue turns, without any ordering constraints. However, there are some issues with this method. First, it allows out-of-order alignments of summary chunks, which we have assessed as almost always erroneous in this dataset. Second, the recall can 5126 Figure 3: (a) Midpoints of turn sequences as a function of the summary chunk indices for campaign 2 ep. 31, determined by the greedy approach. The plot is generally monotonic, with the out of order points verified as misalignments. After assessing many dialogue and summary pairs, we determined a strong monotonic assumption for this dataset. (b) For the same summary sentence chunk indices as in graph (a), we plot the new turn sequence midpoints as determined by the Needleman-Wunsch approach. The plot is now perfectly monotonic due to the ordering constraint and captures previously missed turn sequences. be low due to early cutoffs at boundaries, generally because of extensive chit-chat in between two salient utterances. Forcing boundaries between ai and ai+1 to be contiguous leads to lower precision due to salient utterances being incorrectly assigned near the borders of the turn windows. Needleman-Wunsch Algorithm The recursive approach to determining A involves imposing strict order constraints using the sequence alignment algorithm Needleman-Wunsch (Needleman and Wunsch, 1970), similar to (Nelken and Shieber, 2006). The algorithm imposes order by forcing ai and ai+1 to be assigned to contiguous turn windows. We can also forgo the maximization over some window w as the algorithm does this by virtue of its score maximization function. We tried several functions for β, including the TF-IDF function proposed by (Nelken and Shieber, 2006) and found (2) still performs best. To use the algorithm, we first apply β independently for each turn (of size 1) and summary chunk to generate a match-score matrix M of size T × n. We then build an alignment score matrix H of size (T + 1) × (n + 1) using: Hxy = max    Hy−1,x−1 + My−1,x−1 Hy−1,x + My−1,x−1 Hy,x−1 + My−1,x−1 (3) with My−1,x−1 = β(sx−1, ty−1); 1 ≤y ≤T; and 1 ≤x ≤n and the first column and row of H initialized to −y and −x respectively. We perform the traceback from HT+1,n+1 to H0,0 to generate Figure 4: Visualization of the traceback along the H matrix in the Needleman-Wunsch alignment approach. Each vertical line for si is the corresponding ai = tj:k. the alignment A where each a ∈A can be seen as a vertical line in the traced path (Figure 4). We exclude gap penalties when generating H, since we want to allow multiple turns to be assigned to a summary chunk and we want to allow a single turn to overlap several summary chunks. We also notice that column-wise normalization on M reduced the quality of the alignments substantially because large scores can act as an anchor for the algorithm to localize erroneous alignments. It forces the algorithm to ‘catch up’ or ‘pull back’ the turn alignments to include the high My,x in the final path. Normalization also reduces incentives to keep the path going down a column and heavily favors moving to the next column (summary chunk). We can visualize the improvements in Figure 3(b), where we also notice the algorithm captures turns past t1833 (upto t1878) that were previously ignored, leading to higher recall – we manually verified this. The strong ordering constraint is also the source of some noise. For example, if a summary alignment overshoots the correct turn window by a large margin, it is likely that the subsequent summaries will also be misaligned due to the contiguity constraint. However, the localization effect due to large M scores help mitigate this. Another source of noise is the forced alignment of the first and last turns in dialogues that continue past the summary. We also analyze the distribution of the scores along the paths (each path normalized to 1) traced on M with respect to the nine main players (Table 2). This gives us the distribution of the player contributions to the summaries. Matt’s turns contribute most to the summaries since he contributes the most salient narrative points. As the Dungeon Master, he is responsible for world building and the narrative’s interaction with the other players. We 5127 Player β MATT 0.0307±.0008 ORION 0.0086±.0014 LIAM 0.0083±.0005 TALIESIN 0.0074±.0005 SAM 0.0070±.0004 MARIHSA 0.0058±.0003 TRAVIS 0.0057±.0004 LAURA 0.0056±.0003 ASHLEY 0.0048±.0006 Table 2: Mean (± 0.95 conf. interval) summary contribution scores for each player calculated from the normalized paths traced on M as determined by the algorithm on H. Chunk Size w/o Filtering w/ Filtering 2 18569 11124 3 18438 11635 4 18378 11484 Table 3: number of si, ai pairs generated for each chunk size with and without filtering. can see the other players have much lower mean scores. One explanation for this is that they engage in more non-narrative chit chat than Matt, which leads to a lower mean β. Data Augmentation Running the NeedlemanWunsch algorithm for a dialogue D will give us N s, a pairs. We can extend this by calculating S as S0 . . . SC−1 where C is the chunk size and Sx is the shift in the starting point of the contiguous chunking windows. For each of these Sx, we can then determine an Ax pair. This method increases our s, a pairs by a factor of C. We can go further by running this for different chunk sizes. For our experiment, we chose to run this algorithm for C=2, 3, and 4 sentences. We remove dialogues with |S| ≤10 chunks (since there are some incomplete wikis) and get 55385 s, a pairs. To reduce noise, we also: (1) impose 2 < |tj:k| ≤100; and (2) strip out pairs where si contains “Q: ” (signifies a differently formatted question answer segment in an episode). We end up with 34243 pairs (Table 3), a substantial increase from the original 159 summary, dialogue pairs. Refer to Figure 1 and to the Appendix for examples of the summaries and examples. These are then split as 26232 training, 3470 validation, and 4541 testing s, a pairs; refer to Appendix for details. We calculate precision and recall with respect to the turns on a random sample of 100 pairs from the training split of these 34243 pairs and obtain a precision of 0.8692 and recall of 0.9042. Refer to Appendix for precision and recall calculation Summary “The Mighty Nein make their way up the ladder and through the hatch into the Keystone Pub proper, where they order breakfast. A hooded female wearing a long green cloak covering her left face and side approaches and asks if they’re heading into the swamp today– she’s desperate to go there herself. Calianna apologizes for bothering them, but she couldn’t help but overhear their conversation last night.” Factoid Question 1. Who was overhearing the Mighty Nein’s conversation the previous night? Multiple Choice Question 2. What do the Mighty Nein have at the Keystone Pub? (A) drinks (B) dinner (C) lunch (D) breakfast Figure 5: Example of questions constructed for a human-written summary chunk aligned to a set of turns. method. We find precision errors are mostly from extraneous trailing or leading turns attached to the properly aligned set of turns, and almost never from complete misalignment. We find recall errors are from turn sequences that start too late or end too early, and also almost never from complete misalignment. In most cases where a contains a recall error, we notice the precision for that a is 1.0, because a ends up being a subset of the correct tj:k. We posit this is due to the strong order constraints of the algorithm and our post-alignment filtering, which removes the pairs with the highest risk of complete misalignment. As a measure of quality of the human written summaries, we also perform a question-answering task on a random sample of 50 si, ai pairs from the filtered set. First the questioner records two questions and answers per pair, with the questions and answers coming only from the summaries si. For each pair, there is one factoid question with an open-ended answer and one multiple choice question with four possible answers. The factoid question can be answered by yes—no responses, entity names, or short text. The multiple choice question has at most one correct answer of the four contained in the summary chunks. (Figure 5). The questions are then answered by another person, using only the aligned turns ai from the pair. The scores are recorded in Table 4. Out of the 19 incorrect answers, we found that 17 of them were due to summary alignment errors. This is where the correct information was in the dialogue, but not in the aligned set of turns. The other 2 were due to misinterpretation of the question when answering. This indicates, with perfect alignment, all questions 5128 Question Type Correct Incorrect Precision Free Form 39 11 78% Multiple Choice 42 8 84% Total 81 19 81% Table 4: Correct and incorrect answers for the Q&A evaluation method, for measuring precision w.r.t. the human written summaries in the si, ai pairs. could have been answered correctly; meaning what is in the summaries is an accurate reflection of what is in the transcript. However, we recognize all the information in the transcripts is not necessarily in the summaries; for example, out-of-game information. We also notice that multiple choice questions have a higher accuracy due to easier questions and additional context provided by the set of answers themselves, and not due to random guessing. We also found that 12 incorrect answers were due to no answer, meaning the answerer did not feel they had enough information to attempt an answer. For the other 7, the answerer felt that at least some information pertaining to the question was available in the aligned turns. Unlike ROUGE precision, which relies on word overlap, this evaluation can incorporate latent semantic and contextual information. It is important to note that latent information used when answering varies greatly between people, making this method subjective with respect to the answerer. In future work, it would be interesting to measure variance of accuracy and information in the answers using a large number of people. 5 Summarization Benchmark Results 5.1 Benchmarking Approach We establish a baseline for abstractive summarization by using the neural summarization architecture introduced by (Chen and Bansal, 2018)7. The generated data has noise due to imperfections in the alignment method and due to potentially broken coreference, so we use the model in a semisupervised fashion. We choose this architecture as a baseline for several reasons: (1) The paradigm for narrative summarization from noisy dialogue is close to the paradigm assumed by Chen and Bansal. Namely, first extract salient sentences, then abstractively rewrite them with an included copy mechanism to deal with OOV words. (2) The ability to analyze the extractor behavior separately from the abstrac7github.com/ChenRocks/fast abs rl R1 R2 RL M Extractive (rnn-ext + RL) P 20.83±.34 7.34±0.28 18.38±.32 R 44.59±.66 17.42±.62 39.22±.61 16.58 F1 25.20±.34 9.23±.32 22.20±.32 Reported Metrics on CNN/DM F1 41.47 18.72 37.76 22.35 Abstractive (rnn-ext + abs + RL + rerank) P 27.38±.34 5.91±.20 25.18±.32 R 22.65±.27 4.75±.16 20.74±.26 8.33 F1 23.35±.23 4.91±.16 21.41±.23 Reported Metrics on CNN/DM F1 40.88 17.80 38.54 20.38 Table 5: ROUGE (Precision, Recall, F1 ± 0.95 conf. interval) and METEOR (M) metrics on the CRD3 test set using the purely extractive and extractive+abstractive architecture proposed by Chen and Bansal. We show the metrics on the CNN/Daily Mail dataset for the same models as reported by Chen and Bansal. tor due to the independence of training (before connection by the reinforcement learning mechanism). (3) The speed of training due to the shortened inputtarget pairs. We briefly describe the model: First, the model optimizes a sentence extraction module and an abstractive rewrite module independently using maximum-likelihood objectives. Then, end-to-end training is achieved by applying policy gradient methods (due to the “non-differentiable hard extraction” performed by the extractor). The extractor uses a temporal convolutional model to obtain hierarchical sentence representations, then selects sentences using a pointer network. The abstractor is an encoder-aligner-decoder network with a copy mechanism for OOV words. Due to the large amount of non-narrative chit-chat turns between salient turns, we train the extractor on a sequence of turns rather than individual sentences. 5.2 Evaluation and Analysis We use precision, recall, and F-1 scores of ROUGE1, 2, and L, along with METEOR (Denkowski and Lavie, 2014) to evaluate the generated summaries (Table 5). We run these metrics on the test set, using both the combined extractive-abstractive model and the purely extractive model for analysis on what turns are considered salient. The purely extractive model significantly outperforms the combined model in recall and in F-1, due to the much higher recall. In the validation set, we notice the recall measures are improved by the ngrams in summary chunks that have indirect speech (“fjord says”, “he says”, etc). In the validation 5129 Generated Abstractive Summary he says he feels worried about trying to learn about these abilities and abilities . he asks if she could try and cheer . the group then heads to the tavern . she asks if she can see jester , and she says she ’s really disturbing . Figure 6: Extractor+Abstractor output for the dialogue sample in Figure 1 set, the mean ratio of unique overlapping summary n-grams to total unique summary n-grams are: 1gram= 0.679, 2-gram= 0.336, and 3-gram= 0.205. This high rate of 3-gram overlap motivates changes to the modeling architecture that are more lenient towards phrasal copy instead of just enabling word copy and depending on the learned language model and the word level copy probability. The grammatical person shift and significant paraphrasing of turns lower the precision of the purely extractive model, leading to a higher precision in the combined model. For example in Figure 1, “beau asks about jester .” from the humanauthored summary is entirely from turn 1, but the only overlapping word is “jester”. From Figure 6, we can see the encoder-decoder model learns the grammatical shift behavior but doesn’t include the proper nouns, so the resulting summary misses important speaker information that is included in the human generated summaries. For example, Beau is the character alias for Marisha, which is latent information that was not available to the model at the time of decoding/generation. We also note the encoder-decoder module’s learned language model is biased by the narrative elements present in the training dialogue chunks. This causes decoding of similar, but fundamentally different, narrative focused turns to be noisy and nonfactual. Compared to news summarization metrics with the same model architectures, the dialogue summarization metrics are substantially lower. The disparity in model performance can be attributed to content selection differences between news – where effective summary information is available early in an article (position bias) – and dialogue – where the positional effects are not observed. Other factors include the grammatical and stylistic differences explored earlier. Our findings also confirm the findings of (Kedzie et al., 2018), which compares content selection methods for summarization across various domains (CNN/DM, NYT, DUC, Reddit, AMI, and PubMed). They find a similar disparity in R-2 (recall) and METEOR scores between the news domain and the AMI meeting dialogue domain. They also include an oracle measurement as a performance ceiling; it achieves a max METEOR score of 17.8 and R-2 recall of 8.7 on the AMI corpus. Though ROUGE and METEOR are more useful for relative measurements than absolute, we find the current evaluation methods in summarization lead to skewed and less informative scores in dialogue domains. The problem is compounded in narrative summarization due to narrative specific lexical information, including speaker aliases. For example, METEOR specifically considers synonyms, paraphrases, and function words; all of which can change a lot from narrative to narrative. 6 Conclusion and Future Work Dialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. In this paper, we contribute the Critical Role Dungeons and Dragons Dataset (CRD3), a linguistically rich dataset with dialogue extracted from the unscripted, livestreamed show Critical Role and long, abstractive summaries extracted from the Critical Role Fandom wiki. We provide a data augmentation method to help the community start modeling and evaluation for the dialogue summarization task and discuss the initial modeling benchmark results. We find current paradigms in summarization modeling to have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. We hope CRD3 offers useful, unique data for the community to further explore dialogue modeling and summarization. We also hope that the dataset can be added to in the future with multi-modal extractions, more granular annotations, and deeper mining of the wiki. Acknowledgments First and foremost, we thank the Critical Role team8 for creating a fun, entertaining, organized, and growing set of livestreams that we used in this dataset. Next, we thank the CRTranscript team9 for providing high quality transcripts of the show for the community and we thank all the contributors of the Critical Role Wiki. Finally, we thank Rahul Jha for providing feedback and Oli Bailey for contributing evaluation questions. 8critrole.com/team 9crtranscript.tumblr.com/about 5130 References Stergos Afantenos, Nicholas Asher, Farah Benamara, Ana¨ıs Cadilhac, C´edric D´egremont, Pascal Denis, Markus Guhe, Simon Keizer, Alex Lascarides, Oliver Lemon, et al. 2012. Developing a corpus of strategic conversation in the settlers of catan. Rafael E. Banchs. 2012. Movie-DiC: a movie dialogue corpus for research and development. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 203–207, Jeju Island, Korea. Association for Computational Linguistics. Alan W Black, Susanne Burger, Alistair Conkie, Helen Hastie, Simon Keizer, Oliver Lemon, Nicolas Merigaud, Gabriel Parent, Gabriel Schubiner, Blaise Thomson, et al. 2011. Spoken dialog challenge 2010: Comparison of live and control test results. In Proceedings of the SIGDIAL 2011 Conference, pages 2–7. Association for Computational Linguistics. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gaˇsi´c. 2018. MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, Guillaume Lathoud, Mike Lincoln, Agnes Lisowska, Iain McCowan, Wilfried Post, Dennis Reidsma, and Pierre Wellner. 2006. The ami meeting corpus: A pre-announcement. In Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction, MLMI’05, pages 28–39, Berlin, Heidelberg. Springer-Verlag. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686, Melbourne, Australia. Association for Computational Linguistics. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376–380. Alex Djalali, Sven Lauer, and Christopher Potts. 2012. Corpus evidence for preference-driven interpretation. In Proceedings of the 18th Amsterdam Colloquim Conference on Logic, Language and Meaning, AC’11, pages 150–159, Berlin, Heidelberg. Springer-Verlag. Pierfranca Forchini. 2009. Spontaneity reloaded: American face-to-face and movie conversation compared. In Proceedings of the Corpus Linguistics Conference 2009 (CL2009),, page 400. Spandana Gella, Mike Lewis, and Marcus Rohrbach. 2018. A dataset for telling the stories of social media videos. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 968–974, Brussels, Belgium. Association for Computational Linguistics. John J. Godfrey, Edward Holliman, and Jan McDaniel. 1992. Switchboard: telephone speech corpus for research and development. [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, 1:517–520 vol.1. Philip John Gorinski and Mirella Lapata. 2015. Movie script summarization as graph-based scene extraction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1066–1076, Denver, Colorado. Association for Computational Linguistics. Philip John Gorinski and Mirella Lapata. 2018. What’s this movie about? a joint neural network architecture for movie content analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1770–1781, New Orleans, Louisiana. Association for Computational Linguistics. Jonathan Gratch, Ning Wang, Jillian Gerten, Edward Fast, and Robin Duffy. 2007. Creating rapport with virtual agents. In IVA. Janosch Haber, Tim Baumg¨artner, Ece Takmaz, Lieke Gelderloos, Elia Bruni, and Raquel Fern´andez. 2019. The PhotoBook dataset: Building common ground through visually-grounded dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1895–1910, Florence, Italy. Association for Computational Linguistics. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 1693–1701, Cambridge, MA, USA. MIT Press. 5131 Zhichao Hu, Michelle Dick, Chung-Ning Chang, Kevin Bowden, Michael Neff, Jean Fox Tree, and Marilyn Walker. 2016. A corpus of gestureannotated dialogues for monologue-to-dialogue generation from personal narratives. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 3447– 3454, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Hayley Hung and Gokul Chittaranjan. 2009. The idiap wolf corpus: exploring group behaviour in a competitive role-playing game. In ACM Multimedia. Karen Sp¨arck Jones. 1988. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 60:493–502. Chris Kedzie, Kathleen McKeown, and Hal Daum´e III. 2018. Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818–1828, Brussels, Belgium. Association for Computational Linguistics. Geoffrey Leech. 1992. 100 million words of english: the british national corpus. Language Research, 28(1):1–13. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Pierre Lison and J¨org Tiedemann. 2016. Opensubtitles2016: Extracting large parallel corpora from movie and tv subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 923–929. Annie Louis and Charles Sutton. 2018. Deep dungeons and dragons: Learning character-action interactions from role-playing game transcripts. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 708–713, New Orleans, Louisiana. Association for Computational Linguistics. Amita Misra, Pranav Anand, Jean E. Fox Tree, and Marilyn Walker. 2015. Using summarization to discover argument facets in online idealogical dialog. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 430–440, Denver, Colorado. Association for Computational Linguistics. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Dont give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Saul B. Needleman and Christian D. Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48(3):443 – 453. Rani Nelken and Stuart M. Shieber. 2006. Towards robust context-sensitive sentence alignment for monolingual corpora. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. Haojie Pan, Junpei Zhou, Zhou Zhao, Yan Liu, Deng Cai, and Min Yang. 2018. Dial2desc: End-toend dialogue description generation. arXiv preprint arXiv:1811.00185. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019a. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 527– 536, Florence, Italy. Association for Computational Linguistics. Soujanya Poria, Navonil Majumder, Rada Mihalcea, and Eduard H. Hovy. 2019b. Emotion recognition in conversation: Research challenges, datasets, and recent advances. IEEE Access, 7:100943–100953. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy. Association for Computational Linguistics. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172–180, Los Angeles, California. Association for Computational Linguistics. Nicolas Schrading, Cecilia Ovesdotter Alm, Ray Ptucha, and Christopher Homan. 2015. An analysis of domestic abuse discourse on Reddit. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2577– 2583, Lisbon, Portugal. Association for Computational Linguistics. Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, and Joelle Pineau. 2015. A survey of available corpora for building data-driven dialogue systems. 5132 Eva Sharma, Chen Li, and Lu Wang. 2019. Bigpatent: A large-scale dataset for abstractive and coherent summarization. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 341–351. Springer. Preethi Vaidyanathan, Emily T. Prud’hommeaux, Jeff B. Pelz, and Cecilia O. Alm. 2018. SNAG: Spoken narratives and gaze dataset. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 132–137, Melbourne, Australia. Association for Computational Linguistics. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. Proceedings of the International Conference on Machine Learning, Deep Learning Workshop. Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018. A dataset for document grounded conversations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 708–713, Brussels, Belgium. Association for Computational Linguistics. A Appendices A.1 Summary Dialogue Alignment Precision and Recall Calculation Method We calculate precision and recall for summary dialogue alignment with respect to the dialogue’s turns in Section 4.1. Here, we describe our method for calculating precision and recall. Precision is expressed as a function of true positives and false positives and recall is expressed as a function of true positives and false negatives. For each alignment ai ∈A, we classify each of its turns t as a True Positive (TP), False Positive (FP), or False Negative (FN). We take the counts of all TP, FP, and FN over the entire A and perform the precision and recall calculations, precision= total(TP) total(TP)+total(FP), recall= total(TP) total(TP)+total(FN). A.1.1 TP, FP, FN Classifications We have the following guidelines to classify a turn in ai as a TP, FP, or FN. 1. First, find the earliest and latest turns in the original dialogue that correspond to the summary chunk si. All alignments a ∈A are a contiguous sequence of turns extracted from the dialogue. For example, in the summary chunk in Figure 1, the earliest turn in the entire dialogue that corresponds to the summary is (1) in the alignment. The latest turn in the entire dialogue that corresponds to the summary is (7) in the alignment (we verify this by looking at the turns in the original dialogue before and after the sequence presented in the alignment). 2. Any turn in the alignment in between the earliest and latest turns identified in Step 1 (inclusive) is considered a true positive. Any turn in the alignment outside of the earliest and latest turns identified in Step 1 is considered a false positive. In Figure 1, turn (0) would be considered a false positive because it does not correspond to any of the summary sentences (0,1,2,3). Turns (1,2,3,4,5,6,7) are considered true positives since they are between the earliest and latest turns that correspond to the summary sentences in original dialogue. 3. Any turn between the earliest and latest turns identified in Step 1 that is NOT present in the alignment is considered a false negative. In Figure 1, if the turn (7) was not in the alignment, it would be considered a false negative because the turn (7) corresponds to the summary sentence (2) and is between the earliest and latest turns identified in Step 1 (turns 1 and 7 respectively). A.2 More Examples of Summary-Dialogue Alignments We give more examples of summary-dialogue alignments (si, ai) pairs. For the sake of brevity, we chose to show examples that were only 10 turns or smaller. Please refer to the dataset itself for much longer samples. In Figure 7, we have an alignment with a large recall error. In Figure 8, we have an example of a summary referring to out-of-game turns. We find these types of summaries are typically written for break-times in the show, before the start of a game session, or after the end of a game session. Generally, they seem to make up a smaller portion of the overal summary content. This example in particular is for a Q/A session the team held after their session10. In Figure 9, we have a perfect alignment, with the summary explicitly capturing implied information in the turns. There are also examples 10Attack on the Duergar Warcamp episode 5133 Recall Error Dialogue Chunk 0 MATT: “End of your turn, it’s going to use two actions to do a wing attack, beating its wings, hitting every creature within 15 feet. You’re out of range, actually, Marisha. Grog, I need you to make a dexterity saving throw.” 1 TRAVIS: “I think I have advantage on this because of rage. I do. 21.” 2 MATT: “21? That unfortunately fails. You take 15 points of bludgeoning damage, and you’re knocked prone. Also, Pike and Vax, you both fail a death saving throw from the bludgeoning winds of the ice dragon’s wings beating downward.” Aligned Summary Chunk 0 “Scanlan takes a Greater Healing Potion and moves towards Vorugal. He hits him with a Fireball.” 1 “Vorugal uses a wing attack against Grog, hitting both Vax and Pike as well, losing a death save each.” Figure 7: A (not tokenized) turn sequence and the associated human written summary chunk after the text alignment process. It is clear from the second sentence of the summary chunk, that the turn aligned turns are a subset of the the true turn sequence the summary chunk is referring to. In order to capture the turns referred to by the first sentence in the summary, we need to include the additional 29 preceding turns in the dialogue (which are treated as 29 False Negatives). Out of Game Dialogue Chunk 0 ORION: “Ooh, like Thai food.” 1 LIAM: “I like Indian.” 2 MATT: “Ooh, Indian is good.” 3 ASHLEY: “I really noticed–” 4 ZAC: “Let them know not to order food.” 5 LIAM: “Don’t, that’s a terrible idea.” 6 ORION: “We just had a bunch of chicken.” 7 MARIHSA: “Oh you mean like right now? Yeah, don’t do it right now.” 8 ZAC: “If you tell them what you want, all of a sudden I’ll get a call, like, ”your food is on the way!”” Aligned Summary Chunk 0 “Liam, Matt, Marisha, and Taliesin like Indian food.” 1 “Zac chimes in telling the chat not to order any more food right now.” Figure 8: An out-of-game turn sequence and summary chunk. We find a single precision error in this alignment with Orion mentioning Thai food, which is not in this summary chunk. of role-playing by Matt in this turn sequence, as he speaks to the other players from the perspective of the in-game character Ripley. This is shown through the use of quotes in turns 0, 4, and 6. A.3 Train, Validation, Test Split Method In Section 4.1, we split the aligned 34243 pairs into 26232 training, 3470 validation, and 4541 testing pairs. Here, we briefly describe our method. We first split the 159 dialogues into an (80%, 10%, 10%) train, validation, and test split based on In Game Dialogue Chunk with Roleplay 0 MATT: “ “I don’t spend my time wondering or curious about her well-being! I just know that she is usually here.” ” 1 TALIESIN: “Anna. I’m going to take a leap of faith and believe, contrary to all evidence, that you are a smart woman. I pull out the gun, and I put it to her head. Now. If you were the Briarwoods, where would you put my sister?” 2 LAURA: “An important question here, Percy. Are they keeping her, or is she here of her own volition?” 3 TALIESIN: “I don’t know. And if you don’t know, make me believe it.” 4 MATT: “ “I know she’s not allowed anywhere near the ziggurat or near our distillery.” ” 5 TALIESIN: “Distillery? I pull the gun away.” 6 MATT: “She breathes a sigh of relief. “That’s been largely my project as part of this entire endeavor. All right, so when I was brought in here, I was tasked to experiment with the design and create large amounts of a very, very delicately prepared acidic compound, one that could dissolve the stone of your whitestone and distill it down into pure residuum. This would allow the bulk creation of a very powerful magical essence for use in construction materials that we could instill and use apparently for this ziggurat, as well as other such things. Thus, that was my main reason for being here. We were ahead of schedule, and I completed the bulk of our development weeks ago, and I no longer had much of a purpose here.” ” Aligned Summary Chunk 0 “When asked where she could be, Ripley claims that she prefers not to pay attention to the well-being of others, only that she is usually in her room. Percy then starts to lose his patience.” 1 “Giving in to Percy’s threat, Ripley mentions that Cassandra is not allowed anywhere near the Ziggurat or the “distillery”. ” 2 “He lowers the weapon to allow her to explain.” Figure 9: An turn sequence and summary chunk with perfect alignment. We observe there is implied information in the turns that is captured more explicitly in the summaries. For example “Giving into Percy’s threat, Ripley...” summarizes what happens after turn 1 where Ripley is threatened with the gun and “gives in” by answering Laura’s question. order. This guarantees that episodes from validation will succeed episodes in training, and episodes in testing will succeed episodes in validation. We take all the s, a pairs from these dialogues and put them into their respective train, validation, test sets. We chose to split by this method so that (1) there will never be an episode that is in more than one train/val/test set; (2) no summary of chunk size Ci from validation or testing is a subset of summary of chunk size Cj from the training set where i ≤j, thus avoiding bias in the final metrics; and (3) we can train on information that happened in the show prior to information we validate or test on, thus better mimicking a real-world scenario where you cannot train on future information. 5134 As new Critical Role episodes and seasons are added, we hope to expand the CRD3 dataset correspondingly. Future work might include splitting the training, validation, and testing sets based on season or some method that guarantees independence between narrative elements from the summaries and turns in the training, validation, and testing sets. Note, as new Critical Role episodes are added, we will keep the original version preserved so as to keep the experiments and analysis reproducible.
2020
459
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 477–487 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 477 Emergence of Syntax Needs Minimal Supervision Rapha¨el Bailly SAMM, EA 4543, FP2M 2036 CNRS Universit´e Paris 1 Panth´eon-Sorbonne [email protected] Kata G´abor ERTIM, EA 2520 INALCO [email protected] Abstract This paper is a theoretical contribution to the debate on the learnability of syntax from a corpus without explicit syntax-specific guidance. Our approach originates in the observable structure of a corpus, which we use to define and isolate grammaticality (syntactic information) and meaning/pragmatics information. We describe the formal characteristics of an autonomous syntax and show that it becomes possible to search for syntax-based lexical categories with a simple optimization process, without any prior hypothesis on the form of the model. 1 Introduction Syntax is the essence of human linguistic capacity that makes it possible to produce and understand a potentially infinite number of unheard sentences. The principle of compositionality (Frege, 1892) states that the meaning of a complex expression is fully determined by the meanings of its constituents and its structure; hence, our understanding of sentences we have never heard before comes from the ability to construct the sense of a sentence out of its parts. The number of constituents and assigned meanings is necessarily finite. Syntax is responsible for creatively combining them, and it is commonly assumed that syntax operates by means of algebraic compositional rules (Chomsky, 1957) and a finite number of syntactic categories. One would also expect a computational model of language to have - or be able to acquire - this compositional capacity. The recent success of neural network based language models on several NLP tasks, together with their ”black box” nature, attracted attention to at least two questions. First, when recurrent neural language models generalize to unseen data, does it imply that they acquire syntactic knowledge, and if so, does it translate into human-like compositional capacities (Baroni, 2019; Lake and Baroni, 2017; Linzen et al., 2016; Gulordava et al., 2018)? Second, whether research into neural networks and linguistics can benefit each other (Pater, 2019; Berent and Marcus, 2019); by providing evidence that syntax can be learnt in an unsupervised fashion (Blevins et al., 2018), or the opposite, humans and machines alike need innate constraints on the hypothesis space (a universal grammar) (Adhiguna et al., 2018; van Schijndel et al., 2019)? A closely related question is whether it is possible to learn a language’s syntax exclusively from a corpus. The poverty of stimulus argument (Chomsky, 1980) suggests that humans cannot acquire their target language from only positive evidence unless some of their linguistic knowledge is innate. The machine learning equivalent of this categorical ”no” is a formulation known as Gold’s theorem (Gold, 1967), which suggests that the complete unsupervised learning of a language (correct grammaticality judgments for every sequence), is intractable from only positive data. Clark and Lappin (2010) argue that Gold’s paradigm does not resemble a child’s learning situation and there exist algorithms that can learn unconstrained classes of infinite languages (Clark and Eyraud, 2006). This ongoing debate on syntax learnability and the poverty of the stimulus can benefit from empirical and theoretical machine learning contributions (Lappin and Shieber, 2007; McCoy et al., 2018; Linzen, 2019). In this paper, we argue that syntax can be inferred from a sample of natural language with very minimal supervision. We introduce an information theoretical definition of what constitutes syntactic information. The linguistic basis of our approach is the autonomy of syntax, which we redefine in terms of (statistical) independence. We demonstrate that it is possible to establish a syntax-based lexical classification of words from a corpus without a prior hypothesis on the form of a syntactic 478 model. Our work is loosely related to previous attempts at optimizing language models for syntactic performance (Dyer et al., 2016; Adhiguna et al., 2018) and more particularly to Li and Eisner (2019) because of their use of mutual information and the information bottleneck principle (Tishby et al., 1999). However, our goal is different in that we demonstrate that very minimal supervision is sufficient in order to guide a symbolic or statistical learner towards grammatical competence. 2 Language models and syntax As recurrent neural network based language models started to achieve good performance on different tasks (Mikolov et al., 2010), this success sparked attention on whether such models implicitly learn syntactic information. Language models are typically evaluated using perplexity on test data that is similar to the training examples. However, lower perplexity does not necessarily imply better syntactic generalization. Therefore, new tests have been put forward to evaluate the linguistically meaningful knowledge acquired by LMs. A number of tests based on artificial data have been used to detect compositionality or systematicity in deep neural networks. Lake and Baroni (2017) created a task set that requires executing commands expressed in a compositional language. Bowman et al. (2015) design a task of logical entailment relations to be solved by discovering a recursive compositional structure. Saxton et al. (2019) propose a semi-artificial probing task of mathematics problems. Linzen et al. (2016) initiated a different line of linguistically motivated evaluation of RNNs. Their data set consists in minimal pairs that differ in grammaticality and instantiate sentences with long distance dependencies (e.g. number agreement). The model is supposed to give a higher probability to the grammatical sentence. The test aims to detect whether the model can solve the task even when this requires knowledge of a hierarchical structure. Subsequently, several alternative tasks were created along the same concept to overcome specific shortcomings (Bernardy and Lappin, 2017; Gulordava et al., 2018), or to extend the scope to different languages or phenomena (Ravfogel et al., 2018, 2019). It was also suggested that the information content of a network can be tested using ”probing tasks” or ”diagnostic classifiers” (Giulianelli et al., 2018; Hupkes et al., 2018). This approach consists in extracting a representation from a NN and using it as input for a supervised classifier to solve a different linguistic task. Accordingly, probes were conceived to test if the model learned parts of speech (Saphra and Lopez, 2018), morphology (Belinkov et al., 2017; Peters et al., 2018a), or syntactic information. Tenney et al. (2019) evaluate contextualized word representations on syntactic and semantic sequence labeling tasks. Syntactic knowledge can be tested by extracting constituency trees from a network’s hidden states (Peters et al., 2018b) or from its word representations (Hewitt and Manning, 2019). Other syntactic probe sets include the work of Conneau et al. (2018) and Marvin and Linzen (2018). Despite the vivid interest for the topic, no consensus seems to unfold from the experimental results. Two competing opinions emerge: • Deep neural language models generalize by learning human-like syntax: given sufficient amount of training data, RNN models approximate human compositional skills and implicitly encode hierarchical structure at some level of the network. This conjecture coincides with the findings of, among others Bowman et al. (2015); Linzen et al. (2016); Giulianelli et al. (2018); Gulordava et al. (2018); Adhiguna et al. (2018). • The language model training objective does not allow to learn compositional syntax from a corpus alone, no matter what amount of training data the model was exposed to. Syntax learning can only be achieved with taskspecific guidance, either as explicit supervision, or by restricting the hypothesis space to hierarchically structured models (Dyer et al., 2016; Marvin and Linzen, 2018; Chowdhury and Zamparelli, 2018; van Schijndel et al., 2019; Lake and Baroni, 2017). Moreover, some shortcomings of the above probing methods make it more difficult to come to a conclusion. Namely, it is not trivial to come up with minimal pairs of naturally occurring sentences that are equally likely. Furthermore, assigning a (slightly) higher probability to one sentence does not reflect the nature of knowledge behind a grammaticality judgment. Diagnostic classifiers may do well on a linguistic task because they learn to 479 solve it, not because their input contains a hierarchical structure (Hewitt and Liang, 2019). In what follows, we present our assessment on how the difficulty of creating a linguistic probing data set is interconnected with the theoretical problem of learning a model of syntactic competence. 2.1 Competence or performance, or why syntax drowns in the corpus If syntax is an autonomous module of linguistic capacity, the rules and principles that govern it are formulated independently of meaning. However, a corpus is a product of language use or performance. Syntax constitutes only a subset of the rules that generate such a product; the others include communicative needs and pragmatics. Just as meaning is uncorrelated with grammaticality, corpus frequency is only remotely correlated with human grammaticality judgment (Newmeyer, 2003). Language models learn a probability distribution over sequences of words. The training objective is not designed to distinguish grammatical from agrammatical, but to predict language use. While Linzen et al. (2016) found a correlation between the perplexity of RNN language models and their syntactic knowledge, subsequent studies (Bernardy and Lappin, 2017; Gulordava et al., 2018) recognized that this result could have been achieved by encoding lexical semantic information, such as argument typicality. E.g. ”in ’dogs (...) bark’, an RNN might get the right agreement by encoding information about what typically barks” (Gulordava et al., 2018). Several papers revealed the tendency of deep neural networks to fixate on surface cues and heuristics instead of ”deep” generalization in solving NLP tasks (Levy et al., 2015; Niven and Kao, 2019). In particular, McCoy et al. (2019) identify three types of syntactic heuristics that get in the way of meaningful generalization in language models. Finally, it is difficult to build a natural language data set without semantic cues. Results from the syntax-semantics interface research show that lexical semantic properties account for part of syntactic realization (Levin and Rappaport Hovav, 2005). 3 What is syntax a generalization of? We have seen in section 2 that previous works on the linguistic capacity of neural language models concentrate on compositionality, the key to creative use of language. However, this creativity is not present in language models: they are bound by the type of the data they are exposed to in learning. We suggest that it is still possible to learn syntactic generalization from a corpus, but not with likelihood maximization. We propose to isolate the syntactic information from shallow performancerelated information. In order to identify such information without explicitly injecting it as direct supervision or model-dependent linguistic presuppositions, we propose to examine inherent structural properties of corpora. As an illustration, consider the following natural language sample: cats eat rats rats fear cats mathematicians prove theorems doctors heal wounds According to the Chomskyan principle of the autonomy of syntax (Chomsky, 1957), the syntactic rules that define well-formedness can be formulated without reference to meaning and pragmatics. For instance, the sentence Colorless green ideas sleep furiously is grammatical for humans, despite being meaningless and unlikely to occur. We study whether it is possible to deduce, from the structural properties of our sample above, human-like grammaticality judgments that predict sequences like cats rats fear as agrammatical, and accept e.g. wounds eat theorems as grammatical. We distinguish two levels of observable structure in a corpus: 1. the proximity; the tendency of words to occur in the context of each other (in the same document/same sentence, etc.) 2. the order in which the words appear. Definition 1. Let L be a language over vocabulary V . The language that contains every possible sequence obtained by shuffling the elements in a sequence of L will be denoted L. If V ∗is the set of every possible sequence over vocabulary V and L is the language instantiated by our corpus, L is generated by a mixture of contextual and syntactic constraints over V ∗. We are looking to separate the syntactic specificities from the grammatically irrelevant, contextual cues. The processes that transform V ∗into L, and L into L V ∗proximity −−−−−→L order −−−→L are entirely dependent on words: it should be possible to encode the information used by these processes into word categories. 480 In what follows, we will provide tools to isolate the information involved in proximity from the information involved in order. We also relate these categories to linguistically relevant concepts. 3.1 Isolating syntactic information For a given word, we want to identify the information involved in each type of structure of the corpus, and represent it as partitions of the vocabulary into lexical categories: 1. Contextual information is any information unrelated to sentence structure, and hence, grammaticality: this encompasses meaning, topic, pragmatics, corpus artefacts etc. The surface realization of sentence structure is a language-specific combination of word order and morphological markers. 2. Syntactic information is the information related to sentence structure and - as for the autonomy requirement - nothing else: it is independent of all contextual information. In the rest of the paper we will concentrate on English as an example, a language in which syntactic information is primarily encoded in order. In section 5 we present our ideas on how to deal with morphologically richer languages. Definition 2. Let L be a language over vocabulary V = {v1, . . . }, and P = (V, C, π : V 7→C) a partition of V into categories C. Let π(L) denote the language that is created by replacing a sequence of elements in V by the sequence of their categories. One defines the partition Ptot = {{v}, v ∈V } (one category per word) and the partition Pnul = {V } (every word in the same category). Ptot is such that πtot(L) ∼L. The minimal partition Pnul does not contain any information. A partition P = (V, C, π) is contextual if it is impossible to determine word order in language L from sequences of its categories: Definition 3. Let L be a language over vocabulary V , and let P = (V, C, π) be a partition over V . The partition P is said to be contextual if π(L) = π(L) The trivial partition Pnul is always contextual. Example. Consider the natural language sample. We refer to the words by their initial letters: r(ats),e(at)..., thus we have V = {c, e, r, f, m, p, t, d, h, w}. and L = {cer, rfc, mpt, dhw}. One can check that the partition P1 : c1 = {c, r, e, f} c2 = {m, p, t} c3 = {d, h, w} is contextual: the well-formed sequences over this partition are c1c1c1, c2c2c2 and c3c3c3. These patterns convey the information that words like ’mathematicians’ and ’theorems’ occur together, but do not provide information on order. Therefore π1(L) = {c1c1c1, c2c2c2, c3c3c3} = π1(L). P1 is also a maximal partition for that property: any further splitting leads to order-specific patterns. Intuitively, this partition corresponds to the semantic categories Animals = {r, c, e, f}, Science = {m, p, t}, and Medicine = {d, h, w}. A syntactic partition has two characteristics: its patterns encode the structure (in our case, order), and it is completely autonomous with respect to contextual information. Let us now express this autonomy formally. Two partitions of the same vocabulary are said to be independent if they do not share any information with respect to language L. In other words, if we translate a sequence of symbols from L into their categories from one partition, this sequence of categories will not provide any information on how the sequence translates into categories from the other partition: Definition 4. Let L be a language over vocabulary V , and let P = (V, C, π) and P ′ = (V, C′, π′) be two partitions of V . P and P ′ are considered as independent with respect to L if ∀ci1 . . . cin ∈π(L), ∀c′ j1 . . . c′ jn ∈π′(L) π−1(ci1 . . . cin) ∩π′−1(c′ j1 . . . c′ jn) ̸= ∅ Definition 5. Let L be a language over V , and let P = (V, C, π) be a partition. P is said to be syntactic if it is independent of any contextual partition of V . A syntactic partition is hence a partition that does not share any information with contextual partitions; or, in linguistic terms, a syntactic pattern is equally applicable to any contextual category. Example. We can see that the partition P2 : c4 = {c, r, m, t, d, w} c5 = {e, f, p, h} 481 is independent of the partition P1: one has π2(L) = {c4c5c4}. Knowing the sequence c4c5c4 does not provide any information on which P1 categories the words belong to. P2 is therefore a syntactic partition. Looking at the corpus, one might be tempted to consider a partition P3 that sub-divides c4 into subject nouns, object nouns, and - if one word can be mapped to only one category - ”ambiguous” nouns: c6 = {m, d} c7 = {t, w} c8 = {c, r} c9 = {e, f, p, h} The patterns corresponding to this partition would be π3(L) = {c6c9c7, c8c9c8}. These patterns will not predict that sentence (2) is grammatical, because the word wounds was only seen as an object. If we want to learn the correct generalization we need to reject this partition in favour of P2. This is indeed what happens by virtue of definition 5. We notice that the patterns over P3 categories are not independent of the contextual partition P1: one can deduce from the rule c8c9c8 that the corresponding sentence cannot be e.g. category c2: π−1 3 (c8c9c8) ∩π−1 1 (c2c2c2) = ∅ P3 is hence rejected as a syntactic partition. P2 is the maximal syntactic partition: any further distinction that does not conflate P1 categories would lead to an inclusion of contextual information. We can indeed see that category c4 corresponds to Noun and c5 corresponds to Verb. The syntactic rule for the sample is Noun Verb Noun. It becomes possible to distinguish between syntactic and contextual acceptability: cats rats fear is acceptable as a contextual pattern c1c1c1 under ’Animals’, but not a valid syntactic pattern. The sequence wounds eat theorems is syntactically wellformed by c5c6c5, but does not correspond to a valid contextual pattern. In this section we provided the formal definitions of syntactic information and the broader contextual information. By an illustrative example we gave an intuition of how we apply the autonomy of syntax principle in a non probabilistic grammar. We now turn to the probabilistic scenario and the inference from a corpus. 4 Syntactic and contextual categories in a corpus As we have seen in section 2, probabilistic language modeling with a likelihood maximization objective does not have incentive to concentrate on syntactic generalizations. In what follows, we demonstrate that using the autonomy of syntax principle it is possible to infer syntactic categories for a probabilistic language. A stochastic language L is a language which assigns a probability to each sequence. As an illustration of such a language, we consider the empirical distribution induced from the sample in section 3. L = {cer(1 4), rfc(1 4), mpt(1 4), dhw(1 4)} We will denote by pL(vi1 . . . vin) the probability distribution associated to L. Definition 6. Let V be a vocabulary. A (probabilistic) partition of V is defined by P = (V, C, π : V 7→P(C)) where P(C) is the set of probability distributions over C. Example. The following probabilistic partitions correspond to the non-probabilistic partitions (contextual and syntactic, respectively) defined in section 3. We will now consider these partitions in the context of the probabilistic language L. π1 = cre f mp t d hw       1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1       , π2 = cre f mp t d hw       1 0 1 0 0 1 0 1 1 0 0 1 1 0 1 0 0 1 1 0       From a probabilistic partition P = (V, C, π) as defined above, one can map a stochastic language L to a stochastic language π(L) over the sequences of categories: pπ(ci1 . . . cin) = X uj1...ujn ( Y k π(cik|ujk))pL(uj1 . . . ujn) As in the non-probabilistic case, the language L will be defined as the language obtained by shuffling the sequences in L. Definition 7. Let L be a stochastic language over vocabulary V . We will denote by L the language obtained by shuffling the elements in the sequences 482 of L in the following way: for a sequence v1 . . . vn, one has pL(v1 . . . vn) = 1 n! X (i1...in)∈σ(n) pL(vi1 . . . vin) One can easily check that π(L) = π(L). Example. The stochastic patterns of L over the two partitions are, respectively: π1(L) = {c1c1c1(1 2), c2c2c2(1 4), c3c3c3(1 4)} π2(L) = {c4c5c4(1)} We can now define a probabilistic contextual partition: Definition 8. Let L be a stochastic language over vocabulary V , and let P = (V, C, π) be a probabilistic partition. P will be considered as contextual if π(L) = π(L) We now want to express the independence of syntactic partitions from contextual partitions. The independence of two probabilistic partitions can be construed as an independence between two random variables: Definition 9. Consider two probabilistic partitions P = (V, C, π) and P ′ = (V, C′, π′). We will use the notation (π · π′)v(ci, c′ j) = πv(ci)π′ v(c′ j) and the notation P · P ′ = (V, C × C′, π · π′) P and P ′ are said to be independent (with respect to L) if the distributions inferred over sequences of their categories are independent: ∀w ∈π(L), ∀w′ ∈π′(L), pπ·π′(w, w′) = pπ(w)pπ′(w′) A syntactic partition will be defined by its independence from contextual information: Definition 10. Let P be a probabilistic partition, and L a stochastic language. The partition P is said to be syntactic if it is independent (with respect to L) of any possible probabilistic contextual partition in L. Example. The partition P1 is contextual, as π1(L) = π1(L). The partition P2 is clearly independent of P1 w.r.t. L. 4.1 Information-theoretic formulation The definitions above may need to be relaxed if we want to infer syntax from natural language corpora, where strict independence cannot be expected. We propose to reformulate the definitions of contextual and syntactic information in the information theory framework. We present a relaxation of our definition based on Shannon’s information theory (Shannon, 1948). We seek to quantify the amount of information in a partition P = (V, C, π) with respect to a language L. Shannon’s entropy provides an appropriate measure. Applied to π(L), it gives H(π(L)) = − X w∈π(L) pπ(w)(log(pπ(w))) For a simpler illustration, from now on we will consider only languages composed of fixed-length sequences s, i.e |s| = n for a given n. If L is such a language, we will consider the language L as the language of sequences of size n defined by pL(vi1 . . . vin) = Y j pL(vij) where pL(v) is the frequency of v in language L. Proposition 1. Let L be a stochastic language, P = (V, C, π) a partition. One has: H(π(L)) ≥H(π(L)) ≥H(π(L)) with equality iff the stochastic languages are equal. Let C be a set of categories. For a given distribution over the categories p(ci), the partition defined by π(ci|v) = p(ci) (constant distribution w.r.t. the vocabulary) contains no information on the language. One has pπ(ci1 . . . cik) = p(ci1) . . . p(cik), which is the unigram distribution, in other words π(L) = π(L). As the amount of syntactic or contextual information contained in L can be considered as zero, a consistent definition of the information would be: Definition 11. Let P = (V, C, π) be a partition, and L a language. The information contained in P with respect to L is defined as IL(P) = H(π(L)) −H(π(L)) Lemma 1. Information IL(P) defined as above is always positive. One has IL(P) ≤IL(P), with equality iff π(L) = π(L). 483 After having defined how to measure the amount of information in a partition with respect to a language, we now translate the independence between two partitions into the terms of mutual information: Definition 12. We follow notations from Definition 9. We define the mutual information of two partitions P = (V, C, π) et P ′ = (V, C′, π′) with respect to L as IL(P; P ′) = H(P) + H(P ′) −H(P · P ′) This directly implies that Lemma 2. P = (V, C, π) and P ′ = (V, C′, π′) are independent w.r.t. L ⇔IL(P; P ′) = 0 Proof. This comes from the fact that, by construction, the marginal distributions of π · π′ are the distributions π and π′. With these two definitions, we can now propose an information-theoretic reformulation of what constitutes a contextual and a syntactic partition: Proposition 2. Let L be a stochastic language over vocabulary V , and let P = (V, C, π) be a probabilistic partition. • P is contextual iff IL(P) = IL(P) • P is syntactic iff for any contextual partition P∗ IL(P; P∗) = 0 4.2 Relaxed formulation If we deal with non artificial samples of natural language data, we need to prepare for sampling issues and word (form) ambiguities that make the above formulation of independence too strict. Consider for instance adding the following sentence to the previous sample: doctors heal fear The distinction between syntactic and contextual categories is not as clear as before. We need a relaxed formulation for real corpora: we introduce γ-contextual and µ, γ-syntactic partitions. Definition 13. Let L be a stochastic language. • A partition P is considered as γ-contextual if it minimizes IL(P)(1 −γ) −IL(P) (1) • A partition P is considered µ, γ-syntactic if it minimizes max P ∗IL(P; P∗) −µ IL(P) (2) for any γ-contextual partition P ∗. Let P and P ′ be two partitions for L, such that ∆I(L) = IP ′(L) −IP (L) ≥0 then the γ-contextual program (1) would choose P ′ over P iff ∆I(L) −∆I(L) ∆I(L) ≤γ Let P ∗be a γ-contextual partition. Let ∆MI(L, P ∗) = IL(P ′; P ∗) −IL(P; P ∗) then the µ, γ-syntactic program (2) would choose P ′ over P iff ∆MI(L, P ∗) ∆I(L) ≤µ Example. Let us consider the following partitions: - P1 and P2 refer to the previous partitions above: {Animals, Science, Medicine} and {Noun, Verb} - PA is adapted from P1 so that ’fear’ belongs to Animals and Medicine {c, e, r, f( 1 2)}, {m, p, t}, {d, h, w, f( 1 2)} - PB merges Animals and Medicine from P1 {c, e, r, f, d, h, w}, {m, p, t} - Psent describes the probability for a word to belong to a given sentence (5 categories) - PC is adapted from P2 so that ’fear’ belongs to Verb and Noun {c, r, m, t, d, w, f( 1 2)}, {e, p, h, f( 1 2)} 0 1 2 3 4 5 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 Pposi Psent PC PD PB PA Ptot Pnul P1 P2 Figure 1: IL(P) −IL(P) represented w.r.t. IL(P) for different partitions: acceptable solutions of program (1) lie on the convex hull boundary of the set of all partitions. Solution for γ is given by the tangent of slope γ. Non trivial solutions are PB and P1. 484 - PD is adapted from P2 and creates a special category for ’fear’ {c, r, m, t, d, w}, {e, p, h}, {f} - Pposi describes the probability for a word to appear in a given position (3 categories) 0 1 2 3 4 5 0.0 0.1 0.2 0.3 0.4 0.5 Pposi Psent PC PD PB PA Ptot Pnul P1 P2 Figure 2: IL(P; PB) represented w.r.t. IL(P) for different partitions: acceptable solutions of program (2) lies on the convex hull boundary of the set of all partitions. Solution for µ is given by the tangent of slope µ. Non-trivial solution is P2. Acceptable solutions of (1) and (2) are, respectively, on the convex hull boundary in Fig.1 and Fig.2. While the lowest parameter (non trivial) solutions are PB for context and P2 for syntax, one can check that partitions P1, PA and Psent are all close to the boundary in Fig.1, and that partitions PC, PD and Pposi are all close to the boundary in Fig.2, as expected considering their information content. 4.3 Experiments In this section we illustrate the emergence of syntactic information via the application of objectives (1) and (2) to a natural language corpus. We show that the information we acquire indeed translates into known syntactic and contextual categories. For this experiment we created a corpus from the Simple English Wikipedia dataset (Kauchak, 2013), selected along three main topics: Numbers, Democracy, and Hurricane, with about 430 sentences for each topic and a vocabulary of 2963 unique words. The stochastic language is the set L3 of 3-gram frequencies from the dataset. In order to avoid biases with respect to the final punctuation, we considered overlapping 3-grams over sentences. For the sake of evaluation, we construct one contextual and one syntactic embedding for each word. These are the probabilistic partitions over gold standard contextual and syntactic categories. The contextual embedding Pcon is defined by relative frequency in the three topics. The results for this partition are IL3(Pcon) = 0.06111 and IL3(Pcon) = 0.06108, corresponding to a γ threshold of 6.22.10−4 in (1), and thus distribution over topics can be considered as an almost purely contextual partition. The syntactic partition Psyn is the distribution over POS categories (tagged with the Stanford tagger, Toutanova et al. (2003)). Using the gold categories, we can manipulate the information in the partitions by merging and splitting across contextual or syntactic categories. We study how the information calculated by (1) and (2) evolve; we validate our claims if we can deduce the nature of information from these statistics. ADV WH JJ ADV JJ NN JJ V JJ WH NN ADV NN V NN WH V ADV V WH 0.00 0.05 0.10 0.15 0.20 0.25 0.30 syntactic topic random Figure 3: Increase of information ∆I in three scenarios: syntactic split, topic split and random split. We start from the syntactic embeddings and we split and merge over the following POS categories: Nouns (NN), Adjectives (JJ), Verbs (V), Adverbs(ADV) and Wh-words (WH). For a pair of categories (say NN+V), we create: • Pmerge merges the two categories (NN + V ) • Psyntax splits the merged category into NN and V (syntactic split) • Ptopic splits the merged category into (NN + V )t1, (NN + V )t2 and (NN + V )t3 along the three topics (topic split) • Prandom which splits the merged category into (NN + V )1 and (NN + V )2 randomly (random split) It is clear that each split will increase the information compared to Pmerge. We display the simple information gains ∆I in Fig.3. The question is whether we can identify if the added information is syntactic or contextual in nature, i.e. if we can find a µ for which the µ, γ-syntactic program (2) 485 selects every syntactic splitting and rejects every contextual or random one. ADV WH JJ ADV JJ NN JJ V JJ WH NN ADV NN V NN WH V ADV V WH 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 syntactic random topic Figure 4: Ratio ∆MI/∆I in three scenarios: syntactic split, topic split and random split. Considering objective (2) with parameter µ = 0.5 leads to discrimination between contextual and syntactic information. Fig.4 represents the ratio between the increase of mutual information (relatively to Pcon) ∆MI and the increase of information ∆I, corresponding to the the threshold µ in (2). It shows that indeed for a µ = 0.5 syntactic information (meaningful refinement according to POS) will be systematically selected, while random or topic splittings will not. We conclude that even for a small natural language sample, syntactic categories can be identified based on statistical considerations, where a language model learning algorithm would need further information or hypotheses. 4.4 Integration with Models We have shown that our framework allows to search for syntactic categories without prior hypothesis of a particular model. Yet if we do have a hypothesis, we can indeed search for the syntactic categories that fit the particular class of models M. In order to find the categories which correspond to the syntax rules that can be formulated in a given class of models, we can integrate the model class in the training objective by replacing entropy by the negative log-likelihood of the training sample. Let M ∈M be a model, which takes a probabilistic partition P = (V, C, π) as input, and let LL(M, P, LS) be the log-likelihood obtained for sample S. We will denote ˜H(LS, P) = −sup M∈M LL(M, P, LS) ˜ILS(P) = ˜H(LS, P) −˜H(LS, P) Following Definition 12, we define ˜ILS(P; P ′) = ˜H(LS, P) + ˜H(LS, P ′) −˜H(LS, P · P ′) We may consider the following program: • A partition P is said to be γ-contextual if it minimizes ˜ILS(P)(1 −γ) −˜ILS(P) • Let P∗be a γ-contextual partition for L, µ ∈ R+, k ∈N. The partition P is considered µ, γ-syntactic if it minimizes max P ∗˜ILS(P; P ∗) −µ ˜ILS(P) 5 Conclusion and Future Work In this paper, we proposed a theoretical reformulation for the problem of learning syntactic information from a corpus. Current language models have difficulty acquiring syntactically relevant generalizations for diverse reasons. On the one hand, we observe a natural tendency to lean towards shallow contextual generalizations, likely due to the maximum likelihood training objective. On the other hand, a corpus is not representative of human linguistic competence but of performance. It is however possible for linguistic competence - syntax - to emerge from data if we prompt models to establish a distinction between syntactic and contextual (semantic/pragmatic) information. Two orientations can be identified for future work. The immediate one is experimentation. The current formulation of our syntax learning scheme needs adjustments in order to be applicable to real natural language corpora. At present, we are working on an incremental construction of the space of categories. The second direction is towards extending the approach to morphologically rich languages. In that case, two types of surface realization need to be considered: word order and morphological markers. An agglutinating morphology probably allows a more straightforward application of the method, by treating affixes as individual elements of the vocabulary. The adaptation to other types of morphological markers will necessitate more elaborate linguistic reflection. 486 References Kuncoro Adhiguna, Dyer Chris, Hale John, Yogatama Dani, Clark Stephen, and Blunsom Phil. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Marco Baroni. 2019. Linguistic generalization and compositionality in modern artificial neural networks. CoRR, abs/1904.00157. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Iris Berent and Gary Marcus. 2019. No integration without structured representations: Response to Pater. Language, 95:1:e75–e86. Jean-Philippe Bernardy and Shalom Lappin. 2017. Using deep neural networks on learn syntactic agreement. Linguistic Issues in Language Technology, 15(2):1––15. Terra Blevins, Omer Levy, and Luke Zettlemoyer. 2018. Deep RNNs encode soft hierarchical syntax. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Samuel R. Bowman, Christopher D. Manning, and Christopher Potts. 2015. Tree-structured composition in neural networks without tree-structured architectures. In NIPS Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches. Noam Chomsky. 1957. Syntactic Structures. Mouton, Berlin, Germany. Noam Chomsky. 1980. Rules and representations. Behavioral and Brain Sciences, 3(1):1–15. Shammur Absar Chowdhury and Roberto Zamparelli. 2018. RNN simulations of grammaticality judgments on long-distance dependencies. In Proceedings of the 27th International Conference on Computational Linguistics. Alexander Clark and R´emi Eyraud. 2006. Learning auxiliary fronting with grammatical inference. In Conference on Computational Language Learning. Alexander Clark and Shalom Lappin. 2010. Unsupervised learning and grammar induction. In Handbook of Computational Linguistics and Natural Language Processing. Wiley-Blackwell, Oxford. Alexis Conneau, German Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Gottlob Frege. 1892. ¨Uber Sinn und Bedeitung. Zeitschrift f¨ur Philosophie und philosophische Kritik, 100:25–50. Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. In EMNLP Workshop Blackbox NLP: Analyzing and Interpreting Neural Networks for NLP. E. Mark Gold. 1967. Language identification in the limit. Information and control, 10:5:447–474. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Dieuwke Hupkes, Sara Veldhoen, and Willem H. Zuidema. 2018. Visualisation and ’diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907—-926. David Kauchak. 2013. Improving text simplification language modeling using unsimplified text data. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Brenden M. Lake and Marco Baroni. 2017. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In 34th International Conference on Machine Learning. Shalom Lappin and Stuart Shieber. 2007. Machine learning theory and practice as a source of insight into universal grammar. Journal of Linguistics, 43:393–427. Beth Levin and Malka Rappaport Hovav. 2005. Argument Realization. Cambridge University Press, Cambridge. 487 Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional methods really learn lexical inference relations? In Proceedings of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies. Xiang Lisa Li and Jason Eisner. 2019. Specializing word embeddings (for parsing) by information bottleneck. In 2019 Conference on Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing. Tal Linzen. 2019. What can linguistics and deep learning contribute to each other? Response to Pater. Language, 95(1):e98–e108. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Richard McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. ArXiv, abs/1802.09091. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan ernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. Frederick J. Newmeyer. 2003. Grammar is grammar and usage is usage. Language, 79:4:682—-707. Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computa-tional Linguistics. Joe Pater. 2019. Generative linguistics and neural networks at 60: Foundation, friction, and fusion. Language, 95:1:41–74. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wentau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the inductive biases of RNNs with synthetic variations of natural languages. CoRR, abs/1903.06400. Shauli Ravfogel, Yoav Goldberg, and Francis Tyers. 2018. Can LSTM learn to capture agreement? the case of basque. In EMNLP Workshop Blackbox NLP: Analyzing and Interpreting Neural Networks for NLP. Naomi Saphra and Adam Lopez. 2018. Language models learn POS first. In EMNLP Workshop Blackbox NLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium. David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. In Proceedings of the 7th International Conference on Learning Representations. Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn’t buy quality syntax with neural language models. In Proceedings of Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing. Claude E. Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379–423 and 623–656. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. Naftali Tishby, Fernando Pereira, and William Bialek. 1999. The information bottleneck method. In Annual Allerton Conference on Communication, Control and Computing, pages 368–377. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
2020
46
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5135–5150 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5135 The Summary Loop: Learning to Write Abstractive Summaries Without Examples Philippe Laban UC Berkeley Andrew Hsi Bloomberg John Canny UC Berkeley Marti A. Hearst UC Berkeley∗ Abstract This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint. It introduces a novel method that encourages the inclusion of key terms from the original document into the summary: key terms are masked out of the original document and must be filled in by a coverage model using the current generated summary. A novel unsupervised training procedure leverages this coverage model along with a fluency model to generate and score summaries. When tested on popular news summarization datasets, the method outperforms previous unsupervised methods by more than 2 R-1 points, and approaches results of competitive supervised methods. Our model attains higher levels of abstraction with copied passages roughly two times shorter than prior work, and learns to compress and merge sentences without supervision. 1 Introduction Summarization, or the task of condensing a document’s main points into a shorter document, is important for many text domains, such as headlines for news and abstracts for research papers. This paper presents a novel unsupervised abstractive summarization method that generates summaries directly from source documents, without the aid of example summaries. This approach simultaneously optimizes for the following important properties of a good summary: • coverage of the keywords of the document, • fluency of generated language, and • brevity of generated summaries. ∗Author emails: {phillab,canny,hearst}@berkeley.edu, [email protected] Original Document: Chilean President announced Wednesday that his country, which has been paralyzed by protests over the last two weeks, will no longer host two major international summits. [...] The President has now canceled the hosting of the economic APEC forum and COP25 environmental summit, which were both due to take place later this year. [...] Masked Document: announced Wednesday that his country, which has been by over the last two weeks, will no longer two major international . [...] The has now the of the and , which were both due to take place later this . [...] Summary Loop [10 word constraint]: Pinera cancelled the APEC summit at Santiago. Coverage Score: 0.22 Summary Loop [24 word constraint]: Pinera said Chileans have been canceled the hosting of the APEC summit, which was scheduled to take place in November. Coverage score: 0.33 Summary Loop [45 word constraint]: Sebastian Pinera announced Wednesday that his country will not hold the APEC summit, which was scheduled to take place in Santiago. Pinera said that Chileans had been paralyzed by protests over the last two weeks. Coverage score: 0.39 Figure 1: Motivating example. A document from CNN.com (keywords generated by masking procedure are bolded), the masked version of the article, and generated summaries by three Summary Loop models under different length constraints. One of the main contributions of this work is a novel method of inducing good coverage of important concepts from the original article. The coverage model we propose takes as input the original document with keywords masked out (see Figure 1). It uses the current best automatically generated summary to try to uncover the missing keywords. The more informative the current summary is, the more successful the coverage model is at guessing the blanked out keywords from the original document. A resulting coverage score is fed back into the training process of the summarization model 5136 with the objective of producing summaries with high coverage. A second contribution is our unsupervised training procedure for summarization, the Summary Loop, which leverages the coverage model as well as a simple fluency model to generate and score summaries. During training, the procedure is conditioned on a desired summary length, forcing the Summarizer model to adapt to a length budget. Figure 1 shows Summary Loop summaries obtained for the same document under three different length budgets. A third contribution is a set of specialized techniques employed during training to guide the model away from pathological behavior. These guard rails include a method for reducing repetition, for encouraging the model to complete sentences, and to avoid frame filling patterns. The models trained through the Summary Loop outperform all prior unsupervised summarization methods by at least 2 ROUGE-1 points on common news summarization datasets (CNN/DM and Newsroom), and achieve within a few points of state-of-the-art supervised algorithms, without ever being exposed to any summaries. In addition, summaries generated by our method use 50% more summarization techniques (compression, merging, etc.) than prior automatic work and achieve higher levels of abstraction, reducing by almost half the gap between human-generated summaries and automatic summaries in terms of length of copied spans. 2 Related Work Supervised Abstractive Summarization. Sequence-to-sequence (seq2seq) (Sutskever et al., 2014) models trained using teacher-forcing are the most common approach to abstractive summarization (Nallapati et al., 2016). A common architecture is the Pointer-Generator (See et al., 2017). Performance can further be improved by constraining the attention (Gehrmann et al., 2018; Gui et al., 2019; Wang et al., 2019) and using pretrained Transformer-based language models (Lewis et al., 2019; Chi et al., 2019; Edunov et al., 2019). Through architectural changes, the training procedure remains constant: using a large corpus of document-summary pairs, the model is trained to reproduce target summaries. Unsupervised Summarization. Most unsupervised summarization work is extractive: sentences deemed relevant are pulled out of the original document and stitched into a summary, based on a heuristic for a sentence’s relevance (Mihalcea and Tarau, 2004; Barrios et al., 2015; West et al., 2019). Nikolov and Hahnloser (2019)’s abstractive approach is partially unsupervised, not requiring parallel data, but only a group of documents and a group of summaries. In contrast, our work does not require any summaries, and is trained using only documents. Radford et al. (2019) summarize documents using a language model (GPT2) in a Zeroshot learning setting. The model reads the document followed by a special token “TL/DR”, and is tasked with continuing the document with a summary. Our work is an extension of this work: we initialize our Summarizer model with a GPT2 and specialize it with a second unsupervised method. Summarization and Q&A. Eyal et al. (2019) and Arumae and Liu (2018) turn reference summaries into fill-in-the-blank (FIB) questions, either as an evaluation metric or to train an extractive summarization model. In this work, we directly generate FIB questions on the document being summarized, bypassing the need for a reference summary. Scialom et al. (2019)’s work stays closer to a Q&A scenario, and uses a Question Generation module to generate actual questions about the document, answered by a Squad-based (Rajpurkar et al., 2018) model using the generated summary. We refrain from using actual questions because question generation remains a challenge, and it is unclear how many questions should be generated to assess the quality of a summary. RL in Summarization. Paulus et al. (2018) introduced Reinforcement Learning (RL) to neural summarization methods by optimizing for ROUGE scores, leading to unreadable summaries. Since then, Reinforcement Learning has been used to select sentences with high ROUGE potential (Chen and Bansal, 2018), or optimize modified versions of ROUGE that account for readability (Pasunuru and Bansal, 2018). In all cases, the reward being computed relies on a reference summary, making the methods supervised. We craft a reward that does not require a target summary allowing our training process to remain unsupervised. 3 The Summary Loop For this work, the definition of a summary is: “A summary is a brief, fluent text that 5137 covers the main points of an original document.” Brevity, fluency and coverage are the three pillars of a good summary. Under a length constraint, a good quality summary should contain as much information about the original document as possible while retaining fluent and coherent English. Subsection 3.1 lays out the steps in the Summary Loop. Subsections 3.2–3.5 specify how each component is represented by a neural network. Section 4 shows how to train a summarizer model using this architecture in an unsupervised manner.1 3.1 Summary Loop Steps Numbers in Figure 2 correspond to the following steps: 1. Summarizer receives a document D and length-constraint L, and produces a summary S fulfilling the length constraint. 2. Using a Masking Procedure, D is modified into a masked document M, where important words have been replaced with blanks. 3. Coverage receives S and M, and uses them to fill in each blank in M with a word, producing F. F and D are compared, and the resulting fill-in accuracy is called the Coverage Score. 4. Fluency receives S, and gives a Fluency Score based on its assessment of the quality of the Summary’s writing. 5. The Fluency Score is added to the Coverage Score (as a weighed sum) into a Summary Score for the (D, S) pair. 6. Reinforcement Learning is used to train the Summarizer to produce summaries with high Summary Score. The Summary Loop does not rely on the use of a target/reference/human-written summary, but only the summaries produced by the Summarizer model. The process can therefore be iterated upon without supervision from Summarization datasets. 3.2 Summarization Model We use a Generative Transformer (Radford et al., 2019) as the model architecture of the summarizer. We make this choice for two reasons. First, Generative Transformers can produce text one word at a time, allowing the system to produce abstractive 1The code, model checkpoints and other resources are available at https://github.com/CannyLab/ summary_loop .        Summarizer         Coverage Original Document D Masked Document M        Fluency Summary S  ✓  X  ✓   ✓  X        Masking        Procedure  ?   ?   ?   ?   ?  Coverage score 0.6 Fluency score Filled Document F Summary score 1.1 Optimization 0.5 Target Length 1 2 3 4 5 6 Figure 2: The Summary Loop involves three neural models: Summarizer, Coverage and Fluency. Given a document and a length constraint, the Summarizer writes a summary. Coverage receives the summary and a masked version of the document, and fills in each of the masks. Fluency assigns a writing quality score to the summary. The Summarizer model is trained, other models are pretrained and frozen. summaries. Second, we use the pretrained Generative Transformer to initialize the Summarizer. Practically, the Summarizer first reads through the entire document, followed by a special START token, signaling summarization. The Summarizer produces a probability distribution over words in its vocabulary, and a word is picked from the distribution and fed back as an input into the model. This procedure is repeated and halts either when the summary reaches a length constraint, or when the Summarizer produces a special END token. See Appendix C for the model size and initialization used to train the summarization paper. 3.3 Masking Procedure The Masking Procedure decides on a set of keywords that are important elements in the document that should be recoverable using a summary. The keywords are replaced with blanks, indirectly indicating which information should be present in the summary. We use a tf-idf-based approach to decide on the set of masked keywords, as it is both simple and has been shown to represent word relevance to a document (Ramos, 2003). Masking procedure implementation details are presented in Section A of the Appendix. We select the k words with highest tf-idf score for the document to serve as the masked words. The k parameter represents a balance: if too many words are masked, the filling-in becomes impos5138 Finetuned-BERT Chile   will  not  host  the  economic  APEC  and  the  COP25,  two  ...  <SEP> <MASK>  <MASK>  announced  Wednesday  that  his  country,  which  has  been  <MASK>  by  <MASK>  over  the  last  two  weeks, ...  Summary Masked Document   Chile  President rocked protests Coverage 12 more fill-ins ... X ✓ ✓ X Raw Coverage Score: 0.33 Figure 3: The Coverage model uses a finetuned BERT model. The summary is concatenated to the masked document as the input, and the model predicts the identity of each blank from the original document. The accuracy obtained is the raw coverage score. sible, but if too few are masked, the Summarizer model will not be encouraged to include sufficient content in its summary. Varying the value of k (10,12,15,20) yielded only small discernible difference in the Summarizers produced, and we use k = 15 in all our final experiments. The masking procedure can be adapted to a specific domain. For instance, if summarizing financial documents, the masking procedure could systematically mask all numbers, encouraging the Summarizer model to add numbers to its summary. 3.4 Coverage Model The Coverage Model receives a computationally generated summary and the masked document and attempts to fill in each blank word. The task of filling in blanks is similar to masked language modeling (MLM), used to pretrain BERT-like (Devlin et al., 2019) models. In MLM, some of the words are replaced with a special MASK token, and the model must use other information (unmasked words) to fill in the masked words. Because of the similarity to our task, we use a BERT-based neural network as the architecture for the coverage model. However, the coverage task differs from MLM in two ways. First, we modify the masking procedure: instead of masking a random percentage of the words (often 15% for BERT), we mask all appearances of the keywords selected by the masking procedure described in Section 3.3. Second, the input to the coverage model is a concatenation of the unmasked summary, a separator token and the masked document. The model can leverage unmasked information available in the summary to fill in the masked document. The Coverage Model is illustrated in Figure 3. 3.4.1 Computing a Coverage Score Using the masking procedure, we obtain M = f(D), the masked document. The coverage model produces the filled document F = g(M, S). Raw coverage score is the fraction of correctly filled in words in F. Let Di, Fi and Mi correspond to the ith word in their respective document, IM the set indices of words that have been masked. Then: RawCov(D, S) = ∥i ∈IM if Di = Fi∥ ∥IM∥ (1) The model can use information in the unmasked (visible) words of M to predict the masked words. For instance, if the word “Chile” is visible, then “Santiago” would be a well-informed guess near the word “capital”, which might not be masked out. This is undesirable, because coverage should account for what information the model can learn from the summary S, not what it can guess from the unmasked portion of D. To counteract this problem, we modify the raw coverage score by computing how much information the model can guess without the summary present, using an empty string summary: F∅= g(M, “ ”). We then normalize a summary’s coverage by subtracting the empty string coverage from the raw coverage, leaving only filled-in words answerable using S, as shown in Equation 2. NormCov(D, S) = RawCov(D, S) −RawCov(D, “ ”) (2) In a nutshell, raw coverage score answers the question: “What fraction of blanked words can be correctly filled in with this summary?” and normalized coverage score answers: “What is the increase in the fraction of blanks that can be correctly filled in with this summary, compared to having no summary?” In the rest of this paper, Coverage Score refers to Normalized Coverage Score. 3.4.2 Training the Coverage Model We train the Coverage Model once, and its weights are then fixed during the training of the Summarizer. In order to train the Coverage Model, we need pairs of documents (D) and summaries (S). However, we operate under the assumption that we do not have access to summaries (to keep the procedure unsupervised). In order to remove this dependency, we use the first 50 words of the unmasked 5139 Summary Dataset Summary Length Raw Coverage Norm. Coverage Empty String 0 0.334 0 Headline 9.59 0.478 0.144 First 10 words 10.0 0.428 0.094 Newsroom 23.41 0.525 0.191 First 24 words 24.0 0.537 0.203 CNN/DM 45.75 0.726 0.392 First 46 words 46.0 0.649 0.315 Table 1: Analysis of the raw and normalized coverage of three existing human-written summary datasets, as well as first-k word baselines. document (D[: 50]) as a proxy for document summaries. The Coverage Model is initialized with a trained BERT model (Devlin et al., 2019), and trained using (D, D[: 50]) pairs on the coverage task. Because BERT is already trained on the similar MLM task, the Coverage model is able to leverage knowledge accrued by BERT. The Coverage Model converges after roughly 5 hours of training on a Titan X GPU. 3.4.3 Analysis of Coverage We present properties of the raw and normalized coverage through the analysis of existing humanwritten summary datasets. We focus our analysis on three datasets in the news domain: (1) a headline dataset obtained from common US news websites (Laban and Hearst, 2017), (2) the Newsroom dataset (Grusky et al., 2018), and (3) the CNN/DM dataset (Nallapati et al., 2016). For each dataset, we take document/summary pairs and obtain raw and normalized coverage score through our Coverage model, reported in Table 1. First, longer summaries obtain higher coverage scores: a CNN/DM summary with an average of 45 words can be used to fill in 73% of the blanks correctly, compared to 48% for a 9 word headline. Across datasets, the correlation between summary length and raw coverage score is 0.56, confirming that longer summaries contain more information, according to coverage. Second, we simulate the first k words2 of the document as a summary. We use k = 10, 24, 46 to match average word length in the three datasets. For two of the three values (10 and 46), the coverage of human-written summaries is higher than the first-k word counterpart. This is remarkable: even though the summary is farther away lexically (i.e., 2We choose the first k words due to the similarity to Lede 3 (first 3 sentences), a common baseline in news. is not a subset of the original words), it obtains higher coverage, demonstrating that the coverage model can account for reworded information. 3.5 Fluency Model A model solely trained to optimize coverage has no incentive to write in good English, use punctuation, determinants or pronouns, as these are not words removed by the masking procedure. The objective of a Fluency Model is to judge the writing quality of the summary, independent of its coverage. Given the right corpus, we argue that a language model’s probability can be modified into a Fluency Score. Therefore, we adapt a language model into the Fluency Model. We choose the generative Transformer (Radford et al., 2019) architecture for our Fluency model, as it can be trained into a powerful language model. Just as with the Summarizer, by using a standardized architecture and model size, we can make use of pretrained models. However, it is important for Fluency to fine tune the language model on the target domain, so that the Summarizer is rewarded for generating text similar to target content. To produce a uniform Fluency Score, we linearly scale the language model’s log-probability of a given summary (LM(S)) between an ideal value LPlow and a maximum value LPhigh: Fluency(S) = 1 −LM(S) −LPlow LPhigh −LPlow (3) This ensures that the Fluency(S) is usually in the range [0, 1]. LPlow and LPhigh are picked specifically for a particular language model, and ensure that the log-probability magnitudes of a specific language model do not affect the overall scores. 3.6 Summary Score The final Summary Score is a weighed sum of the Coverage and Fluency Scores: SummaryScore(D, S) = α · NormCov(D, S) + β · Fluency(S) (4) α, β are hyperparameters giving relative importance to Coverage and Fluency. We set α = 5, β = 1 in all our experiments. Model choice, size, and initialization are summarized in Figure A1. 4 Training Procedure We first outline the training procedure and then detail several guard-rail mechanisms used during 5140 training to prevent the Summarizer from learning pathological writing strategies. Figure A2 presents training plots of a Summary Loop model and interpretation of the different learning phases. 4.1 Training with Reinforcement Learning We use Reinforcement Learning to train the Summarizer component (agent), such that it achieves high summary score (reward). Note that the Coverage and Fluency models are frozen, and their weights are not trained. We make this choice as allowing Fluency and Coverage models to evolve could enable the models to coordinate and cheat. We use the Self-critical sequence training (SCST) method (Rennie et al., 2017), as it has been shown to perform well on similar text generation tasks optimizing BLEU for image captioning or ROUGE scores in summarization. In SCST, the Summarizer is used to produce two summaries of document D: a greedy summary ˆS, using a decoding strategy that always picks the most likely next word, and a sampled summary Ss, picking the next word in the summary by sampling from the word distribution. Summaries are scored using the Summary Loop: ˆR = SummaryScore(D, ˆS) Rs = SummaryScore(D, Ss) Then we minimize the following loss: L = ( ˆR −Rs) N X i=0 log p(ws i |ws 1, ..., ws i−1, D) Where p(ws i |...) represent the probability of the ith word conditioned on previously generated word, according to the model. Intuitively, if Rs > ˆR, minimizing L maximizes the likelihood of the sampled sequence — which is desired because it outperformed the greedy summary — and increases expected reward of the model. 4.2 Training guard rails During training, the Summarizer model learns pathological summarization strategies. We build training guard rails to detect the pathological behavior and penalize the model during training. A guard rail has a binary effect: if a pathology is detected in a summary, its Summary Score is reduced by a penalty amount δ. We use δ = 2 for all experiments. We found three training guard rails to be useful: No-repetition, Finish-your-sentence, and No-frame-filling. 4.2.1 No-repetition A common problem in neural text generation is repetition of text. Based on the observation that 3-grams seldom repeat in common summarization datasets, the “No-repetition” training guard rail raises a penalty on a summary when it contains any repeated 3-gram. 4.2.2 Finish-your-sentence When generating a summary, the model can either produce the END token, or generate a number of words up to the length constraint. We observe that if the model does not produce the END token, it often generates partial sentences, which is undesirable. Because we want to encourage the model to generate an END token, the “Finish-your-sentence” raises a penalty if a summary has no END token. 4.2.3 No-frame-filling During training, the model sometimes learns to overly rely on sentence patterns that achieves high reward as a one size fits all summary. In one example the model learns to produce summaries solely of the form: “X talks with Y about the Z”. The model uses this frame, filling in the X, Y and Z slots with relevant keywords and entities to achieve a small but positive coverage. This form of framefilling is undesirable, as the model often produces inaccurate information to fit the entities to the pattern. We implement a guard rail to penalize the model when frame-filling patterns are observed. During training, we keep track of the last 100 summaries produced by the model. We then aggregate the frequency of words for each word position in the 100 summaries. If any word appears more than 50% of the time at a specific word position, we raise the “No-frame-filling” penalty. In the example given above, the word “talks” appeared in the second word position in more than 50% of the summaries, as well as the word “about” in the fifth position. These rule-based training guard rails are simple and effective. In our finalized trained models, very few summaries exhibit penalized behavior: 2% for no-repetition, 5% for finish-your-sentence, and 2.5% for no-frame-filling. 5 Results We present results for Summary Loop models trained in the news domain under three different length constraints: 10, 24, and 46 words, matching the distributions of the Headline, Newsroom 5141 Method R-1 R-2 R-L Coverage Score Fluency Score Brevity (avg words) Baselines Human-written Summaries 100 100 100 0.392 0.612 58.5 X Lead-3 baseline 40.3 17.7 36.6 0.421 0.656 84.0 Supervised Methods Pointer Generator (See et al., 2017) 36.4 15.7 33.4 0.342 0.547 55.6 PG + Coverage (See et al., 2017) 39.5 17.3 36.4 0.377 0.508 61.7 Bottom-Up (Gehrmann et al., 2018) 41.2 18.7 38.3 0.378 0.538 73.9 PEGASUSBASE (Zhang et al., 2019a) 41.8 18.8 38.9 PEGASUSLARGE (Zhang et al., 2019a) 44.1 21.3 40.9 Unsupervised Methods X TextRank (Mihalcea and Tarau, 2004) 35.2 12.9 28.7 0.370 0.612 49.62 GPT2 Zero-Shot (Radford et al., 2019) 29.3 8.3 26.6 Summary Loop 45 37.7 14.8 34.7 0.404 0.627 47.0 Table 2: ROUGE Results (F-1) on the non-anonymized CNN/DM test-set for supervised and unsupervised methods. Extractive methods indicated with X. Our ROUGE scores have a 95% confidence interval of at most ±0.30. Coverage, Fluency and Brevity (average number of words) included for systems where summaries are available, using Coverage and Fluency models from our work. Supervised Methods R-1 R-2 R-L X Lead-3 baseline 32.0 21.1 29.6 PG + Coverage 27.5 13.3 23.5 Unsupervised Methods R-1 R-2 R-L X TextRank 24.5 10.1 20.1 Summary Loop 24 27.0 9.6 26.4 Table 3: ROUGE Results on the released test set of Newsroom. X indicate extractive methods. Summary Loop outperforms other unsupervised method, is competitive with supervised Pointer-Generator. (Grusky et al., 2018) and CNN/DM (Nallapati et al., 2016) datasets. We compare our summaries using the standard ROUGE metric, and by analyzing summaries for the errors made, the technique used and the level of abstraction. Finally, we show the Summary Loop can be complemented with supervision, reducing the amount of data needed to achieve comparable ROUGE results. 5.1 News ROUGE Scores Table 2 and Table 3 present ROUGE results on the CNN/DM and Newsroom datasets respectively. In both cases, Summary Loop outperforms other unsupervised methods, and is competitive with supervised methods despite not being exposed to any example summaries. On CNN/DM, Summary Loop performs in between the Pointer Generator and Bottom Up architecture in terms of ROUGE-1. On the Newsroom, Summary Loop is within 0.6 ROUGE1 points of the Pointer-Generator with Coverage and surpasses it by 2 ROUGE-L points. Recent breakthroughs in pretrained Transformer models have shown that using larger models in Summarization can lead to large improvements. For instance, a “large” version of the PEGASUS model (Zhang et al., 2019a) outperforms the “base” version by 2.3 ROUGE-1 points. Because Summary Loop experiments were performed using “base” models, we expect that using larger Transformer models could lead to similar gains. Table 2 confirms that human-written summaries obtain amongst the highest Fluency and Coverage scores. Human-written summaries are only outperformed by Summary Loop summaries, and the Lede-3 baseline. However, the Summary Loop summaries are obtained by directly optimizing for Fluency and Coverage, and Lede-3 baseline summaries achieve their higher Coverage at the expense of being much longer (i.e. 84 words on average compared to 58 in human-written summaries). 5.2 Technique and Error Analysis We perform a manual analysis of 200 randomlyselected summaries on the test set of CNN/DM from the Pointer-Generator with Coverage (PGC), Bottom-Up (BU) and the unsupervised Summary Loop (SL). We annotated each summary with two types of errors: Inaccurate (information in summary contradicts document), Ungrammatical (one sentence or more is not properly constructed), and 5142 Error Made PGC BU SL Inaccurate (%) 11 31 24 Ungrammatical (%) 7 15 18 Technique Used (Success/Total) PGC (S/T) BU (S/T) SL (S/T) Sent. Compression 86 / 110 96 / 177 118 / 194 Sent. Merging 13 / 27 29 / 65 71 / 121 Novel Sentence 0 / 1 4 / 18 33 / 70 Entity Manipulation 7 / 10 15 / 27 27 / 40 Total Technique 106 / 148 144 / 287 249 / 425 Table 4: Error and Technique analysis on 200 randomly selected summaries on the CNN/DM test-set for the Point-Gen with Cov. (PGC), Bottom-Up (BU) and unsupervised Summary Loop (SL). For each summarization technique, we report two numbers: the number of successful occurrences in summaries with no error, and the total number of occurrences in the 200 summaries. four summarization techniques: Sentence Compression (summary sentence is a document sentence with words removed), Sentence Merging (2 or more document sentences are merged into a summary sentence), Novel Sentence (original sentence in the summary), and Entity Manipulation (a named entity is modified or simplified, e.g. changing a full name to a last name). We present Summary Loop examples illustrating each error and technique in Figures A3 – A8. The analysis was performed by the first author of the paper, labeling article/summary pairs without knowledge of model origin. A summary can manifest any number of summarization Techniques, or none. Labeling is binary: if a summary exhibits more than one or instances of a Technique, it receives a 1, otherwise it receives a 0. Results of the analysis are summarized in Table 4. SL uses significantly more summarization techniques (425) than PGC (148) and BU (287) summaries. Beyond raw counts, SL is more successful at applying summarization techniques (59% success) than BU (50% success), but less successful than PGC (72%). Note however that PGC takes little risk: 19% of the summaries go beyond sentence compression, and 39% are extractive, using none of the summarization techniques. 5.3 Level of Abstraction All methods generating summaries one word at a time have potential for abstraction. In Figure 4 we analyze human and system written summaries for abstraction level. We measure a summary’s level of abstraction by looking at the length of spans Figure 4: Histogram and average copied span lengths for abstractive summaries. A summary is composed of novel words and word spans of various lengths copied from the document. Summary Loop summaries copy shorter spans than prior automatic systems, but do not reach abstraction levels of human-written summaries. Initialization Method R-1 R-2 R-L Test Loss 28k samples from CNN/DM (10%) Random Initialization 7.0 0.9 8.8 6.05 GPT2 37.1 15.9 31.9 2.21 Summary Loop S10 38.7 16.2 35.1 2.07 All of CNN/DN (100%) Random Weights 20.4 4.1 19.1 4.22 GPT2 38.4 17.2 35.0 2.02 Summary Loop S100 41.0 18.1 37.3 1.89 Table 5: ROUGE Results on the CNN/DM test-set for supervised generative Transformers. Initializing with the unsupervised Summary Loop outperforms random and GPT2 initializations. copied from the document. Summary Loop is the most abstractive automated method, although less so than human written summaries. SL cuts nearly in half the length of copied spans compared to other automated methods. 5.4 Supervision is not the enemy If summaries are available, we show that they can complement the unsupervised Summary Loop. We run supervised experiments on CNN/DM using a generative Transformer architecture and varying the initialization. We compare initializing with (1) random weights, (2) the original GPT2 weights, and (3) the Summary Loop weights of target length 45. We train each model with teacher forcing, comparing using the entire CNN/DM training set to just 10% of it. The results are summarized in Table 5. First, initializing with the Summary Loop leads to higher ROUGE score both in the 10% and full dataset setting. As expected, results improve when using the entirety of the data, and the Summary Loop initialized model trained with the entirety of CNN/DM obtains a ROUGE-1 F1-score of 41.0, 5143 within the confidence interval of the supervised Bottom Up (Gehrmann et al., 2018) architecture. This is a strong result as the Transformer we use is a generic language model, and is not specialized for summarization. Second, initializing with Summary Loop and training with 10% of CNN/DM yields comparable ROUGE scores to initializing with GPT2 and using the entire CNN/DM, showing that Summary Loop can be useful when fewer summaries are available. 6 Discussion Customizing summaries. In Figure 1, we illustrate the effect of the length constraint by summarizing the same document under three different length constraints. Each model adapts to its word budget. However, length is only one way to customize summaries. One might want to summarize based on point of view, chronology, theme, etc. Fluency vs. Grammaticality. By choosing to represent the validity of summaries with a Language model, we encourage fluent summaries (i.e., with likely sequences of words) but not necessarily grammatical ones. Extending the scoring to include grammaticality, either by using a parsing model, or leveraging the Corpus of Linguistic Acceptability (Warstadt et al., 2019) could prove useful. Summarization in the wild. Because our method is unsupervised, it can be applied to new domains and languages. In this work, we benefited from pretrained BERT and GPT2 models in English, which do not yet exist publicly for other languages. Once they become available in other languages, the Summary Loop can be ported over. Abstraction dangers. Recent work around measuring factuality in generated text, using Natural Language Inference (Guo et al., 2018) or rule-based fact extraction (Zhang et al., 2019b) becomes increasingly important with summaries that are more abstractive. This work can be naturally included into the Summary Loop, with a fact-checker model generating an accuracy score. 7 Conclusion In this work we present a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint. When tested on common news summarization datasets, our method significantly outperforms previous unsupervised methods, and gets within the range of competitive supervised methods. Our models attain levels of abstraction closer to human-written summaries, although with more abstraction, more potential for factual inaccuracies arise. Acknowledgments We would like to thank Forrest Huang, David Chan, Roshan Rao, Katie Stasaski and the ACL reviewers for their helpful comments. This work was supported by the first author’s internship at Bloomberg, and a Bloomberg Data Science grant. We also gratefully acknowledge support received from an Amazon Web Services Machine Learning Research Award and an NVIDIA Corporation GPU grant. References Kristjan Arumae and Fei Liu. 2018. Reinforced extractive summarization with question-focused rewards. In Proceedings of ACL 2018, Student Research Workshop, pages 105–111. Federico Barrios, Federico L´opez, Luis Argerich, and Rosita Wachenchauzer. 2015. Variations of the similarity function of textrank for automated summarization. In Argentine Symposium on Artificial Intelligence (ASAI 2015)-JAIIO 44 (Rosario, 2015). Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, XianLing Mao, and Heyan Huang. 2019. Cross-lingual natural language generation via pre-training. arXiv preprint arXiv:1909.10481. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Sergey Edunov, Alexei Baevski, and Michael Auli. 2019. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4052–4059. Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation metric for news article summarization. In Proceedings of the 2019 Conference of the North American 5144 Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3938–3948. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719. Min Gui, Junfeng Tian, Rui Wang, and Zhenglu Yang. 2019. Attention optimization for abstractive document summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1222–1228. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft layer-specific multi-task summarization with entailment and question generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 687–697. Philippe Laban and Marti A Hearst. 2017. newslens: building and visualizing long-ranging news stories. In Proceedings of the Events and Stories in the News Workshop, pages 1–9. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404–411. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, C¸ a glar Gulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. CoNLL 2016, page 280. Nikola I Nikolov and Richard HR Hahnloser. 2019. Abstractive document summarization without parallel data. arXiv preprint arXiv:1907.12951. Ramakanth Pasunuru and Mohit Bansal. 2018. Multireward reinforced summarization with saliency and entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 646– 653. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of ICLR. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you dont know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789. Juan Enrique Ramos. 2003. Using tf-idf to determine word relevance in document queries. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008–7024. Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summarization models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3237–3247. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073–1083. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Wenbo Wang, Yang Gao, He-Yan Huang, and Yuxiang Zhou. 2019. Concept pointer network for abstractive summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3067–3076. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. 5145 Peter West, Ari Holtzman, Jan Buys, and Yejin Choi. 2019. Bottlesum: Unsupervised and self-supervised sentence summarization using the information bottleneck principle. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3743–3752. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J Liu. 2019a. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777. Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christopher D Manning, and Curtis P Langlotz. 2019b. Optimizing the factual correctness of a summary: A study of summarizing radiology reports. arXiv preprint arXiv:1911.02541. 5146 A Masking Procedure Details The masking procedure follows these steps: 1. We randomly sample 5,000 documents in the domain being summarized (e.g. News) as a training corpus, 2. The training corpus is tokenized using the tokenizer of the Coverage model. In our case, we tokenize with the Word Piece model of the BERT Base model (Devlin et al., 2019), 3. We train a tf-idf transformation model using the tokenized training corpus using default parameters of scikit-learn’s tf-idf implementation (Pedregosa et al., 2011), 4. Given a document to be masked, we use the trained tf-idf model to produce a tf-idf for the document, 5. The words present in the document are ranked in decreasing order of tf-idf score, and the k words with highest tf-idf form the masking set, 6. All occurrences of the words in the masking set are replaced by a mask in the document, creating the masked document. B Fluency Examples Table A1 provides examples from the Headline dataset of sampled headlines and their corresponding Fluency Score. The Fluency Score, a normalized language model log-perplexity, ranges from 0 to 1. Even though all these headlines are written by a human, the Fluency scores vary, with the higherscoring headlines using more standard grammatical constructs. Note that the use of complex entity names does not prevent the model from obtaining a high Fluency score. Example Headline Fluency Score Henry’s Monaco recruit giant Brazilian Naldo for relegation scrap 0.16 Tesla shares dive after price cut, production numbers 0.41 French police arrest gilets jaunes protests leader Eric Drouet 0.59 Carlos Ghosn will appear in public for the first time since his arrest 0.75 Table A1: Example selected headlines and their Fluency score. The headlines were picked from a corpus of human-written news headlines. The average Fluency in the corpus is 0.479. C Model Size and Initialization Figure A1 shows the model size and initialization model used for each of the Summarizer, Coverage and Fluency models. Summarizer Architecture GPT2-base: 12-layer, 768-hidden, 12-heads Summarizer Initialization GPT2 base model from Radford et al. (2019) Coverage Architecture BERT-base: 12-layer, 768-hidden, 12-heads Coverage Initialization Pretrained model obtained in Section 3.4.2 Fluency Architecture GPT2-base: 12-layer, 768-hidden, 12-heads Fluency Initialization GPT2 base model from (Radford et al., 2019), finetuned with Language modeling on news text. Figure A1: The model size choice as well as initialization method for the Summarizer, Coverage and Fluency models in the Summary Loop. Each model leverages a pretrained Transformer. D Training Plots Figure A2 presents the plots of key variables we obtain during the training of the length 10 Summary Loop model. The training occurred over 10 days using a single Titan X GPU. During a first phase which occurs in the first 2 days of training, the model learns to copy content from the news article, which helps it achieve high Fluency and Coverage. In a second phase starting around the second day, the Summarizer learns to gain Coverage which maintaining Fluency mostly constant, which makes the overall Summary Score rise. The Summarizer model quickly learns to use its word budget, and after 10 days of training, the model uses an average of 9.7 words in its summaries. E Example Annotated Summaries Figures A3, A4, A5, A6, A7, and A8 show example documents and the generated Summary Loop summary from the error and technique analysis of Section 5.2. Each summary manifests a summarization technique or error observed. 5147 (a) Fluency Score (b) Coverage Score (c) Summary Score (d) Average number of words in summary Figure A2: Plots of key variables during the training of the length 10 Summary Loop: (a) is a plot of the average Fluency Score, (b) is a plot of the average normalized Coverage Score, (c) is a plot of the average Summary Score (taking guard-rails into account), and (d) is a plot of the average number of words in summaries produced. Sentence Compression Example Document: He has long struggled to convince voters that he is a suitable choice for prime minister. Now Ed Miliband has hired a leadership coaching firm that helps people overcome anxiety and find their “inner voice”. The consultants drafted in by the Labour leader claim to work with politicians to build ”leadership skills” using “neuroscience” and “business psychology”. Ed Miliband, pictured, has hired a US guru who can help him convince himself that he can be Prime Minister. [...] Summary: Ed Miliband has hired a US guru who can help politicians on their leadership skills using neuroscience. Mr Miliband has hired the firm that can help politicians to build their leadership skills. The consultants drafted in by the Labour leader claim to work with politicians. Figure A3: Summary Loop summary from the Error and Technique analysis (Section 5.2) illustrating the Sentence Compression technique. The blue boldface highlight is an example of sentence compression. 5148 Sentence Merging Example Document: A single mom and her three kids who “lost everything but their lives” in the East Village apartment explosion last week are getting an incredible outpouring of support from their fellow New Yorkers. [...] Dr McLean, a 58-year-old child psychiatrist in the South Bronx, says she and daughter Rose, 8, and twins James and Annabelle, 5, had nothing more than the clothes on their backs after the disaster. Diane McLean, 58, and her three children lost “everything but their lives” when fire destroyed their apartment last week. Rose, 8, ( left ) and twins James and Annabelle, 5, lost everything except the clothes on their backs in the fire that destroyed their apartment building. [..] A GoFundMe campaign has raised nearly $ 90,000. [...] Summary: Diane McLean says she and daughter Rose, 8, and twins James and Annabelle, lost everything but their lives at East Village apartment explosion last week. Diane McLean and her three kids had the clothes on their backs. A GoFundMe campaign has raised nearly $ 90,000. Figure A4: Summary Loop summary from the Error and Technique analysis (Section 5.2) illustrating the Sentence Merging technique. The bold blue and italicized red selections are two examples of sentence merging. In the blue example “Dr McLean” is replaced by “Diane McLean” in the summary, an example of entity manipulation. Novel Sentence Example Document: For most of us, the dream of a holiday home is one that will probably never be realised. But for the lucky minority with a few extra million in the bank, its seems the world is quite literally your oyster when looking for property around the world. From a Lake Garda mansion with a pool overlooking the water to an Italian villa that looks like a castle and an Antigua retreat with Giorgio Armani as a neighbour, these are some of the most spectacular holiday homes on the market at the moment. On the Lombardy side of Lake Garda, this Lionard property is a luxurious villa with one serious waterfront view. Lake Garda. On the Lombardy side of Lake Garda, in northern Italy, lies a luxury villa with a view - just several miles north of Brescia. And for e 18 million ( about £13 million or $20 million ) it can all be yours. Not only is there a large swimming pool looking out on the water, but also a large deck with plenty of space for sun beds, gazebos and al fresco dining spots, overlooking a 4000 square metre garden. Inside, the house is just as breathtaking. For about 18 million Euros ( or $ 13 million ), the modern home, complete with pool, gazebo, and al fresco dining options, can be yours. [...] Summary: The Lake Garda home is a luxury villa with a view on the Lombardy side of Lake Garda. This villa with gazebo and al fresco dining options. Inside, the house is just as breathtaking. For about 18 million Euros. Figure A5: Summary Loop summary from the Error and Technique analysis (Section 5.2) illustrating the Novel Sentence technique. The first sentence of the summary uses pieces from the original document (in boldface blue) to form a sentence with an alternative but correct meaning. 5149 Entity Manipulation Example Document: Sipping a glass of glorious red wine which has been carefully aged in a hand-crafted oak barrel is my idea of heaven. [...] A $ 5 bottle has suddenly become $ 12 because the wine has lingered in an oak barrel before bottling. So when I read this week about a new gadget that claims to be able to “oak age” wine in hours rather than years, my curiosity was seriously roused. The Oak Bottle promises to impart an authentic aged flavour – a process that can take up to two years – in just a day or two. Who wouldn’t drink to that ? Scroll down for video. TV wine expert Oz Clarke puts to the test this oak bottle that claims to “oak age” wine in hours rather than years. The product, which retails at $ 50, is the brainchild of 30-year-old entrepreneur Joel Paglione. [...] Summary: Joel Paglione said the Oak Bottle promises to be able to oak age wine in hours rather than years. The Oak Bottle promises an authentic aged flavour that can take up to two years. A bottle has been made in an oak barrel. Figure A6: Summary Loop summary from the Error and Technique analysis (Section 5.2) illustrating the Entity Manipulation technique. The entity Joel Paglione (in boldface blue) is correctly inserted to represent the company. Inaccurate Example Document: The traditional cookie cutter wedding no longer exists - new reports suggest Brits are ditching tradition in favour of alternative practices when it comes to getting hitched. Two of the biggest changes are the fact that religious services have fallen out of favour and that brides are opting for bold colour schemes for their big day. A new study, which has tracked the decisions of brides and grooms over the past five years interviewed 1,893 newlyweds and compared them to answers they have collated since 2010. Scroll down for video. [...] Summary: The new study showed that British couples are opting for religious ceremonies when it comes to their big day with services falling from 40 per cent of the past five years. The study showed that couples are opting to holiday in the UK. Figure A7: Summary Loop summary from the Error and Technique analysis (Section 5.2) illustrating the Inaccurate error. The summary inaccurately claims religious ceremonies are increasing, when the document says they are in decline. Key phrases are highlighted in boldface blue. 5150 Ungrammatical Example Document: Despite his daughter remaining in a medically induced coma since she was found unresponsive in a bathtub at her Atlanta home in January, singer Bobby Brown told an audience on Saturday night that she is “awake.”. Bobby was performing at the Verizon Theatre in Dallas when he told the stunned audience that “Bobbi is awake. She’s watching me.” The singer didn’t elaborate on if his daughter had regained consciousness or if he was talking instead about her spirit. After the 46-year-old’s comment, his sister Tina posted on Facebook,” [...] Whitney Houston’s family insists the 22-year-old is not awake and is the same condition she was when she entered the facility. ”She’s in the exact same condition she was in when she went into the facility.” a source told the site [...] Summary: Bobby Brown was performing at the Verizon Theatre in Dallas when Bobbi was awake. He said that Tina posted on Facebook that her daughter was awake. She was the singer. She was going to be awake. She is the same condition. Figure A8: Summary Loop summary from the Error and Technique analysis (Section 5.2) illustrating the Ungrammatical error. The last short summary sentence (in boldface blue) is not properly constructed, based on an unsuccessful attempt to compress a sentence in the document (also in boldface blue).
2020
460
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5151–5169 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5151 Unsupervised Opinion Summarization as Copycat-Review Generation Arthur Braˇzinskas1, Mirella Lapata1 and Ivan Titov1,2 1 ILCC, University of Edinburgh 2 ILLC, University of Amsterdam [email protected], {mlap,ititov}@inf.ed.ac.uk Abstract Opinion summarization is the task of automatically creating summaries that reflect subjective information expressed in multiple documents, such as product reviews. While the majority of previous work has focused on the extractive setting, i.e., selecting fragments from input reviews to produce a summary, we let the model generate novel sentences and hence produce abstractive summaries. Recent progress in summarization has seen the development of supervised models which rely on large quantities of document-summary pairs. Since such training data is expensive to acquire, we instead consider the unsupervised setting, in other words, we do not use any summaries in training. We define a generative model for a review collection which capitalizes on the intuition that when generating a new review given a set of other reviews of a product, we should be able to control the “amount of novelty” going into the new review or, equivalently, vary the extent to which it deviates from the input. At test time, when generating summaries, we force the novelty to be minimal, and produce a text reflecting consensus opinions. We capture this intuition by defining a hierarchical variational autoencoder model. Both individual reviews and the products they correspond to are associated with stochastic latent codes, and the review generator (“decoder”) has direct access to the text of input reviews through the pointergenerator mechanism. Experiments on Amazon and Yelp datasets, show that setting at test time the review’s latent code to its mean, allows the model to produce fluent and coherent summaries reflecting common opinions. 1 Introduction Summarization of user opinions expressed in online resources, such as blogs, reviews, social media, or internet forums, has drawn much attention due to its potential for various information access applications, such as creating digests, search, and report Summary This restaurant is a hidden gem in Toronto. The food is delicious, and the service is impeccable. Highly recommend for anyone who likes French bistro. Reviews We got the steak frites and the chicken frites both of which were very good ... Great service ... || I really love this place ... Cˆote de Boeuf ... A Jewel in the big city ... || French jewel of Spadina and Adelaide , Jules ... They are super accommodating ... moules and frites are delicious ... || Food came with tons of greens and fries along with my main course , thumbs uppp ... || Chef has a very cool and fun attitude ... || Great little French Bistro spot ... Go if you want French bistro food classics ... || Great place ... the steak frites and it was amazing ... Best Steak Frites ... in Downtown Toronto ... || Favourite french spot in the city ... cr`eme brule for dessert Table 1: A summary produced by our model; colors encode its alignment to the input reviews. The reviews are truncated, and delimited with the symbol ‘||’. generation (Hu and Liu, 2004; Angelidis and Lapata, 2018; Medhat et al., 2014). Although there has been significant progress recently in summarizing non-subjective context (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; See et al., 2017; Liu et al., 2018), modern deep learning methods rely on large amounts of annotated data that are not readily available in the opinion-summarization domain and expensive to produce. Moreover, annotation efforts would have to be undertaken for multiple domains as online reviews are inherently multi-domain (Blitzer et al., 2007) and summarization systems highly domain-sensitive (Isonuma et al., 2017). Thus, perhaps unsurprisingly, there is a long history of applying unsupervised and weakly-supervised methods to opinion summarization (e.g., Mei et al. 2007; Titov and McDonald 2008; Angelidis and Lapata 2018), however, these 5152 approaches have primarily focused on extractive summarization, i.e., producing summaries by copying parts of the input reviews. In this work, we instead consider abstractive summarization which involves generating new phrases, possibly rephrasing or using words that were not in the original text. Abstractive summaries are often preferable to extractive ones as they can synthesize content across documents avoiding redundancy (Barzilay et al., 1999; Carenini and Cheung, 2008; Di Fabbrizio et al., 2014). In addition, we focus on the unsupervised setting and do not use any summaries for training. Unlike aspect-based summarization (Liu, 2012), which rewards the diversity of opinions, we aim to generate summaries that represent consensus (i.e., dominant opinons in reviews). We argue that such summaries can be useful for quick decision making, and to get an overall feel for a product or business (see the example in Table 1). More specifically, we assume we are provided with a large collection of reviews for various products and businesses and define a generative model of this collection. Intuitively, we want to design such a model that, when generating a review for a product1 relying on a set of other reviews, we can control the “amount of novelty” going into the new review or, equivalently, vary the extent to which it deviates from the input. At test time, we can force the novelty to be minimal, and generate summaries representing consensus opinions. We capture this intuition by defining a hierarchical variational autoencoder (VAE) model. Both products and individual reviews are associated with latent representations. Product representations can store, for example, overall sentiment, common topics, and opinions expressed about the product. In contrast, latent representations of reviews depend on the product representations and capture the content of individual reviews. While at training time the latent representations are random variables, we fix them to their respective means at test time. As desired for summarization, these ‘average’ (or ‘copycat’) reviews differ in writing style from a typical review. For example, they do not contain irrelevant details that are common in customer reviews, such as mentioning the occasion or saying how many family members accompanied the reviewer. In order to encourage the summaries to include spe1For simplicity, we refer to both products (e.g., iPhone X) and businesses (e.g., a specific Starbucks branch) as products. cific details, the review generator (‘decoder’) has direct access to the text of input reviews through the pointer-generator mechanism (See et al., 2017). In the example in Table 1, the model included specific information about the restaurant type and its location in the generated summary. As we will see in ablation experiments, without this conditioning, model performance drops substantially, as the summaries become more generic. We evaluate our approach on two datasets, Amazon product reviews and Yelp reviews of businesses. The only previous method dealing with unsupervised multi-document opinion summarization, as far as we are aware of, is MeanSum (Chu and Liu, 2019). Similarly to our work, they generate consensus summaries and consider the Yelp benchmark. Whereas we rely on continuous latent representations, they treat the summary itself as a discrete latent representation of a product. Although this captures the intuition that a summary should relay key information about a product, using discrete latent sequences makes optimization challenging; (Miao and Blunsom, 2016; Baziotis et al., 2019; Chu and Liu, 2019) all have to use an extra training loss term and biased gradient estimators. Our contributions can be summarized as follows: • we introduce a simple end-to-end approach to unsupervised abstractive summarization; • we demonstrate that the approach substantially outperforms the previous method, both when measured with automatic metrics and in human evaluation; • we provide a dataset of abstractive summaries for Amazon products.2 2 Model and Estimation As discussed above, we approach the summarization task from a generative modeling perspective. We start with a high level description of our model, then, in Sections 2.2 and 2.3, we describe how we estimate the model and provide extra technical details. In Section 3, we explain how we use the model to generate summaries. 2.1 Overview of the Generative Model Our text collection consists of groups of reviews, with each group corresponding to a single product. 2Data and code: https://github.com/ixlan/ Copycat-abstractive-opinion-summarizer. 5153 z1 <latex it sha1_bas e64="HtnNQ8 7l4sUB873B 8jRew1tlTM= ">AB6nic bVA9SwNBEJ2 LXzF+RS1tlg TBKtzFQsuAj WVE8wHJEfY2 e8mSvb1jd06 IR36CjYUitv 4iO/+Nm+QKT Xw8Hhvhpl5 QSKFQdf9dgo bm1vbO8Xd0 t7+weFR+fik beJUM95isYx 1N6CGS6F4Cw VK3k0p1Ege SeY3Mz9ziPX RsTqAacJ9yM 6UiIUjKV7p 8G3qBcdWvuA mSdeDmpQo7m oPzVH8Ysjb hCJqkxPc9N0 M+oRsEkn5X6 qeEJZRM64j1 LFY248bPFqT NybpUhCWNtS yFZqL8nMhoZ M40C2xlRHJt Vby7+5/VSDK /9TKgkRa7Yc lGYSoIxmf9 NhkJzhnJqCW Va2FsJG1NG dp0SjYEb/Xl dKu17zLWv3 OqzYqeRxFOI MKXIAHV9CAW 2hCxiM4Ble 4c2Rzovz7nw sWwtOPnMKf+ B8/gAFeo2D </latexit>z1 <latex it sha1_bas e64="HtnNQ8 7l4sUB873B 8jRew1tlTM= ">AB6nic bVA9SwNBEJ2 LXzF+RS1tlg TBKtzFQsuAj WVE8wHJEfY2 e8mSvb1jd06 IR36CjYUitv 4iO/+Nm+QKT Xw8Hhvhpl5 QSKFQdf9dgo bm1vbO8Xd0 t7+weFR+fik beJUM95isYx 1N6CGS6F4Cw VK3k0p1Ege SeY3Mz9ziPX RsTqAacJ9yM 6UiIUjKV7p 8G3qBcdWvuA mSdeDmpQo7m oPzVH8Ysjb hCJqkxPc9N0 M+oRsEkn5X6 qeEJZRM64j1 LFY248bPFqT NybpUhCWNtS yFZqL8nMhoZ M40C2xlRHJt Vby7+5/VSDK /9TKgkRa7Yc lGYSoIxmf9 NhkJzhnJqCW Va2FsJG1NG dp0SjYEb/Xl dKu17zLWv3 OqzYqeRxFOI MKXIAHV9CAW 2hCxiM4Ble 4c2Rzovz7nw sWwtOPnMKf+ B8/gAFeo2D </latexit> c <latexit sh a1_base64="rWO4aMm2sRs ktQXw3xGEoIXJ3Ew=">A B6HicbVA9SwNBEJ2LXzF+R S1tlgTBKtzFQsuAjWUC5gO SI+xt5pI1e3vH7p4QjvwC GwtFbP1Jdv4bN8kVmvhg4P HeDPzgkRwbVz32ylsbe/s 7hX3SweHR8cn5dOzjo5Txb DNYhGrXkA1Ci6xbgR2EsU 0igQ2A2mdwu/+4RK81g+m FmCfkTHkoecUWOlFhuWq27 NXYJsEi8nVcjRHJa/BqOYp RFKwTVu+5ifEzqgxnAue lQaoxoWxKx9i3VNItZ8tD 52TS6uMSBgrW9KQpfp7Iq OR1rMosJ0RNRO97i3E/7x+ asJbP+MySQ1KtloUpoKYmC y+JiOukBkxs4Qyxe2thE2o oszYbEo2BG/95U3Sqde861 q95VUblTyOIlxABa7Agxt owD0oQ0MEJ7hFd6cR+fFe Xc+Vq0FJ585hz9wPn8AvSO MyA=</latexit>c <latexit sh a1_base64="rWO4aMm2sRs ktQXw3xGEoIXJ3Ew=">A B6HicbVA9SwNBEJ2LXzF+R S1tlgTBKtzFQsuAjWUC5gO SI+xt5pI1e3vH7p4QjvwC GwtFbP1Jdv4bN8kVmvhg4P HeDPzgkRwbVz32ylsbe/s 7hX3SweHR8cn5dOzjo5Txb DNYhGrXkA1Ci6xbgR2EsU 0igQ2A2mdwu/+4RK81g+m FmCfkTHkoecUWOlFhuWq27 NXYJsEi8nVcjRHJa/BqOYp RFKwTVu+5ifEzqgxnAue lQaoxoWxKx9i3VNItZ8tD 52TS6uMSBgrW9KQpfp7Iq OR1rMosJ0RNRO97i3E/7x+ asJbP+MySQ1KtloUpoKYmC y+JiOukBkxs4Qyxe2thE2o oszYbEo2BG/95U3Sqde861 q95VUblTyOIlxABa7Agxt owD0oQ0MEJ7hFd6cR+fFe Xc+Vq0FJ585hz9wPn8AvSO MyA=</latexit> zN <latexit sha1_base64=" n3BreBzdCgo8Wf+t9NvW0ZOVfFo=">AB6nicbVA9Sw NBEJ3zM8avqKXNkiBYhbtYaBmwsZKI5gOSI+xt9pIle3 vH7pwQj/wEGwtFbP1Fdv4bN8kVmvhg4PHeDPzgkQKg 67aytb2xubRd2irt7+weHpaPjlolTzXiTxTLWnYAaL oXiTRQoeSfRnEaB5O1gfD3z249cGxGrB5wk3I/oUIlQ MIpWun/q3/ZLFbfqzkFWiZeTCuRo9EtfvUHM0ogrZJIa 0/XcBP2MahRM8mxlxqeUDamQ961VNGIGz+bnzolZ1YZ kDWthSufp7IqORMZMosJ0RxZFZ9mbif143xfDKz4R KUuSKLRaFqSQYk9nfZCA0ZygnlCmhb2VsBHVlKFNp2h D8JZfXiWtWtW7qNbuvEq9nMdRgFMowzl4cAl1uIEGNI HBEJ7hFd4c6bw4787HonXNyWdO4A+czx8xbo2g</late xit>zN <latexit sha1_base64=" n3BreBzdCgo8Wf+t9NvW0ZOVfFo=">AB6nicbVA9Sw NBEJ3zM8avqKXNkiBYhbtYaBmwsZKI5gOSI+xt9pIle3 vH7pwQj/wEGwtFbP1Fdv4bN8kVmvhg4PHeDPzgkQKg 67aytb2xubRd2irt7+weHpaPjlolTzXiTxTLWnYAaL oXiTRQoeSfRnEaB5O1gfD3z249cGxGrB5wk3I/oUIlQ MIpWun/q3/ZLFbfqzkFWiZeTCuRo9EtfvUHM0ogrZJIa 0/XcBP2MahRM8mxlxqeUDamQ961VNGIGz+bnzolZ1YZ kDWthSufp7IqORMZMosJ0RxZFZ9mbif143xfDKz4R KUuSKLRaFqSQYk9nfZCA0ZygnlCmhb2VsBHVlKFNp2h D8JZfXiWtWtW7qNbuvEq9nMdRgFMowzl4cAl1uIEGNI HBEJ7hFd4c6bw4787HonXNyWdO4A+czx8xbo2g</late xit> Great Italian restaurant with authentic food and great service! Recommend! We ordered pasta, and it was very tasty. Would recommend this place to anyone. We visited this place last week. The waiters were friendly, and the food was great! zi <latexit sh a1_base64="CR7L54o1YeU 9LsNQE40W9Uq2lJ0=">A B6nicbVA9SwNBEJ2LXzF+R S1tlgTBKtzFQsuAjWVE8wH JEfY2e8mSvb1jd06IR36C jYUitv4iO/+Nm+QKTXw8H hvhpl5QSKFQdf9dgobm1vb O8Xd0t7+weFR+fikbeJUM9 5isYx1N6CGS6F4CwVK3k0 p1EgeSeY3Mz9ziPXRsTqA acJ9yM6UiIUjKV7p8GYlC ujV3AbJOvJxUIUdzUP7qD 2OWRlwhk9SYnucm6GdUo2C Sz0r91PCEsgkd8Z6likbc+ Nni1Bk5t8qQhLG2pZAs1N 8TGY2MmUaB7Ywojs2qNxf/ 83ophtd+JlSIldsuShMJc GYzP8mQ6E5Qzm1hDIt7K2E jamDG06JRuCt/ryOmnXa9 5lrX7nVRuVPI4inEFLsC DK2jALTShBQxG8Ayv8OZI5 8V5dz6WrQUnzmFP3A+fwB aWo27</latexit>zi <latexit sh a1_base64="CR7L54o1YeU 9LsNQE40W9Uq2lJ0=">A B6nicbVA9SwNBEJ2LXzF+R S1tlgTBKtzFQsuAjWVE8wH JEfY2e8mSvb1jd06IR36C jYUitv4iO/+Nm+QKTXw8H hvhpl5QSKFQdf9dgobm1vb O8Xd0t7+weFR+fikbeJUM9 5isYx1N6CGS6F4CwVK3k0 p1EgeSeY3Mz9ziPXRsTqA acJ9yM6UiIUjKV7p8GYlC ujV3AbJOvJxUIUdzUP7qD 2OWRlwhk9SYnucm6GdUo2C Sz0r91PCEsgkd8Z6likbc+ Nni1Bk5t8qQhLG2pZAs1N 8TGY2MmUaB7Ywojs2qNxf/ 83ophtd+JlSIldsuShMJc GYzP8mQ6E5Qzm1hDIt7K2E jamDG06JRuCt/ryOmnXa9 5lrX7nVRuVPI4inEFLsC DK2jALTShBQxG8Ayv8OZI5 8V5dz6WrQUnzmFP3A+fwB aWo27</latexit> r1 <latex it sha1_bas e64="FHLus iRugfXq9yLD HJXeoRyDTw= ">AB6nic bVA9SwNBEJ2 LXzF+RS1tlg TBKtzFIpYBG 8uI5gOSI+xt 9pIle3vH7pw QjvwEGwtFbP 1Fdv4bN8kVm vhg4PHeDPz gkQKg67RS 2tnd294r7p YPDo+OT8ulZ x8SpZrzNYhn rXkANl0LxNg qUvJdoTqNA8 m4wvV343Seu jYjVI84S7kd 0rEQoGEUrPe ihNyxX3Zq7B NkXk6qkKM1 LH8NRjFLI6 6QSWpM3MT9 DOqUTDJ56VB anhC2ZSOed9 SRSNu/Gx56p xcWmVEwljbU kiW6u+JjEbG zKLAdkYUJ2b dW4j/ef0Uwx s/EypJkSu2W hSmkmBMFn+ TkdCcoZxZQp kW9lbCJlRTh jadkg3BW395 k3TqNe+6Vr/ 3qs1KHkcRLq ACV+BA5pwB y1oA4MxPMr vDnSeXHenY9 Va8HJZ87hD5 zPH/k7jXs= </latexit>r1 <latex it sha1_bas e64="FHLus iRugfXq9yLD HJXeoRyDTw= ">AB6nic bVA9SwNBEJ2 LXzF+RS1tlg TBKtzFIpYBG 8uI5gOSI+xt 9pIle3vH7pw QjvwEGwtFbP 1Fdv4bN8kVm vhg4PHeDPz gkQKg67RS 2tnd294r7p YPDo+OT8ulZ x8SpZrzNYhn rXkANl0LxNg qUvJdoTqNA8 m4wvV343Seu jYjVI84S7kd 0rEQoGEUrPe ihNyxX3Zq7B NkXk6qkKM1 LH8NRjFLI6 6QSWpM3MT9 DOqUTDJ56VB anhC2ZSOed9 SRSNu/Gx56p xcWmVEwljbU kiW6u+JjEbG zKLAdkYUJ2b dW4j/ef0Uwx s/EypJkSu2W hSmkmBMFn+ TkdCcoZxZQp kW9lbCJlRTh jadkg3BW395 k3TqNe+6Vr/ 3qs1KHkcRLq ACV+BA5pwB y1oA4MxPMr vDnSeXHenY9 Va8HJZ87hD5 zPH/k7jXs= </latexit> ri <latexit sh a1_base64="6ejvqfWeTml kh+xJPTvHFGhmpk=">A B6nicbVA9SwNBEJ2LXzF+R S1tlgTBKtzFIpYBG8uI5gO SI+xt5pIle3vH7p4QjvwE GwtFbP1Fdv4bN8kVmvhg4P HeDPzgkRwbVz32ylsbe/s 7hX3SweHR8cn5dOzjo5Txb DNYhGrXkA1Ci6xbgR2EsU 0igQ2A2mtwu/+4RK81g+m lmCfkTHkoecUWOlBzXkw3L VrblLkE3i5aQKOVrD8tdgF LM0QmYoFr3PTcxfkaV4Uz gvDRINSaUTekY+5ZKGqH2s +Wpc3JplREJY2VLGrJUf0 9kNJ6FgW2M6Jmote9hfif 109NeONnXCapQclWi8JUEB OTxd9kxBUyI2aWUKa4vZWw CVWUGZtOyYbgrb+8STr1mn dq971WYlj6MIF1CBK/C gAU24gxa0gcEYnuEV3hzhv DjvzseqteDkM+fwB87nD04 qjbM=</latexit>ri <latexit sh a1_base64="6ejvqfWeTml kh+xJPTvHFGhmpk=">A B6nicbVA9SwNBEJ2LXzF+R S1tlgTBKtzFIpYBG8uI5gO SI+xt5pIle3vH7p4QjvwE GwtFbP1Fdv4bN8kVmvhg4P HeDPzgkRwbVz32ylsbe/s 7hX3SweHR8cn5dOzjo5Txb DNYhGrXkA1Ci6xbgR2EsU 0igQ2A2mtwu/+4RK81g+m lmCfkTHkoecUWOlBzXkw3L VrblLkE3i5aQKOVrD8tdgF LM0QmYoFr3PTcxfkaV4Uz gvDRINSaUTekY+5ZKGqH2s +Wpc3JplREJY2VLGrJUf0 9kNJ6FgW2M6Jmote9hfif 109NeONnXCapQclWi8JUEB OTxd9kxBUyI2aWUKa4vZWw CVWUGZtOyYbgrb+8STr1mn dq971WYlj6MIF1CBK/C gAU24gxa0gcEYnuEV3hzhv DjvzseqteDkM+fwB87nD04 qjbM=</latexit> rN <latexit sha1_base64=" +u2ZEi3F18mUZuF5me4Z7PCAJH8=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjZVENB+QHGFvM5cs2d s7dveEcOQn2FgoYusvsvPfuEmu0MQHA4/3ZpiZFySCa +O6305hY3Nre6e4W9rbPzg8Kh+ftHWcKoYtFotYdQOqU XCJLcONwG6ikEaBwE4wuZn7nSdUmsfy0UwT9CM6kjzk jBorPajB3aBcdWvuAmSdeDmpQo7moPzVH8YsjVAaJqjW Pc9NjJ9RZTgTOCv1U40JZRM6wp6lkao/Wx6oycW2VI wljZkoYs1N8TGY20nkaB7YyoGetVby7+5/VSE17GZd JalCy5aIwFcTEZP43GXKFzIipJZQpbm8lbEwVZcamU7I heKsvr5N2veZd1ur3XrVRyeMowhlU4AI8uIG3EITWs BgBM/wCm+OcF6cd+dj2Vpw8plT+APn8wclPo2Y</late xit>rN <latexit sha1_base64=" +u2ZEi3F18mUZuF5me4Z7PCAJH8=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjZVENB+QHGFvM5cs2d s7dveEcOQn2FgoYusvsvPfuEmu0MQHA4/3ZpiZFySCa +O6305hY3Nre6e4W9rbPzg8Kh+ftHWcKoYtFotYdQOqU XCJLcONwG6ikEaBwE4wuZn7nSdUmsfy0UwT9CM6kjzk jBorPajB3aBcdWvuAmSdeDmpQo7moPzVH8YsjVAaJqjW Pc9NjJ9RZTgTOCv1U40JZRM6wp6lkao/Wx6oycW2VI wljZkoYs1N8TGY20nkaB7YyoGetVby7+5/VSE17GZd JalCy5aIwFcTEZP43GXKFzIipJZQpbm8lbEwVZcamU7I heKsvr5N2veZd1ur3XrVRyeMowhlU4AI8uIG3EITWs BgBM/wCm+OcF6cd+dj2Vpw8plT+APn8wclPo2Y</late xit> … … … … (a) Conditional independence of the reviews given the group representation c. z1 <latexit sha1_base64=" HtnNQ87l4sUB873B8jRew1tlTM=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjWVE8wHJEfY2e8mSvb 1jd06IR36CjYUitv4iO/+Nm+QKTXw8Hhvhpl5QSKFQ df9dgobm1vbO8Xd0t7+weFR+fikbeJUM95isYx1N6CGS 6F4CwVK3k0p1EgeSeY3Mz9ziPXRsTqAacJ9yM6UiIU jKV7p8G3qBcdWvuAmSdeDmpQo7moPzVH8YsjbhCJqkx Pc9N0M+oRsEkn5X6qeEJZRM64j1LFY248bPFqTNybpUh CWNtSyFZqL8nMhoZM40C2xlRHJtVby7+5/VSDK/9TKg kRa7YclGYSoIxmf9NhkJzhnJqCWVa2FsJG1NGdp0SjY Eb/XldKu17zLWv3OqzYqeRxFOIMKXIAHV9CAW2hCx iM4Ble4c2Rzovz7nwsWwtOPnMKf+B8/gAFeo2D</late xit>z1 <latexit sha1_base64=" HtnNQ87l4sUB873B8jRew1tlTM=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjWVE8wHJEfY2e8mSvb 1jd06IR36CjYUitv4iO/+Nm+QKTXw8Hhvhpl5QSKFQ df9dgobm1vbO8Xd0t7+weFR+fikbeJUM95isYx1N6CGS 6F4CwVK3k0p1EgeSeY3Mz9ziPXRsTqAacJ9yM6UiIU jKV7p8G3qBcdWvuAmSdeDmpQo7moPzVH8YsjbhCJqkx Pc9N0M+oRsEkn5X6qeEJZRM64j1LFY248bPFqTNybpUh CWNtSyFZqL8nMhoZM40C2xlRHJtVby7+5/VSDK/9TKg kRa7YclGYSoIxmf9NhkJzhnJqCWVa2FsJG1NGdp0SjY Eb/XldKu17zLWv3OqzYqeRxFOIMKXIAHV9CAW2hCx iM4Ble4c2Rzovz7nwsWwtOPnMKf+B8/gAFeo2D</late xit> c <latexit sha1_base64=" rWO4aMm2sRsktQXw3xGEoIXJ3Ew=">AB6HicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjWUC5gOSI+xt5pI1e3 vH7p4QjvwCGwtFbP1Jdv4bN8kVmvhg4PHeDPzgkRwb Vz32ylsbe/s7hX3SweHR8cn5dOzjo5TxbDNYhGrXkA1C i6xbgR2EsU0igQ2A2mdwu/+4RK81g+mFmCfkTHkoec UWOlFhuWq27NXYJsEi8nVcjRHJa/BqOYpRFKwTVu+5 ifEzqgxnAuelQaoxoWxKx9i3VNItZ8tD52TS6uMSBgr W9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asJbP+MySQ1 KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2oszYbEo2BG/ 95U3Sqde861q95VUblTyOIlxABa7AgxtowD0oQ0MEJ 7hFd6cR+fFeXc+Vq0FJ585hz9wPn8AvSOMyA=</late xit>c <latexit sha1_base64=" rWO4aMm2sRsktQXw3xGEoIXJ3Ew=">AB6HicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjWUC5gOSI+xt5pI1e3 vH7p4QjvwCGwtFbP1Jdv4bN8kVmvhg4PHeDPzgkRwb Vz32ylsbe/s7hX3SweHR8cn5dOzjo5TxbDNYhGrXkA1C i6xbgR2EsU0igQ2A2mdwu/+4RK81g+mFmCfkTHkoec UWOlFhuWq27NXYJsEi8nVcjRHJa/BqOYpRFKwTVu+5 ifEzqgxnAuelQaoxoWxKx9i3VNItZ8tD52TS6uMSBgr W9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asJbP+MySQ1 KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2oszYbEo2BG/ 95U3Sqde861q95VUblTyOIlxABa7AgxtowD0oQ0MEJ 7hFd6cR+fFeXc+Vq0FJ585hz9wPn8AvSOMyA=</late xit> zN <latexit sha1_base64=" n3BreBzdCgo8Wf+t9NvW0ZOVfFo=">AB6nicbVA9Sw NBEJ3zM8avqKXNkiBYhbtYaBmwsZKI5gOSI+xt9pIle3 vH7pwQj/wEGwtFbP1Fdv4bN8kVmvhg4PHeDPzgkQKg 67aytb2xubRd2irt7+weHpaPjlolTzXiTxTLWnYAaL oXiTRQoeSfRnEaB5O1gfD3z249cGxGrB5wk3I/oUIlQ MIpWun/q3/ZLFbfqzkFWiZeTCuRo9EtfvUHM0ogrZJIa 0/XcBP2MahRM8mxlxqeUDamQ961VNGIGz+bnzolZ1YZ kDWthSufp7IqORMZMosJ0RxZFZ9mbif143xfDKz4R KUuSKLRaFqSQYk9nfZCA0ZygnlCmhb2VsBHVlKFNp2h D8JZfXiWtWtW7qNbuvEq9nMdRgFMowzl4cAl1uIEGNI HBEJ7hFd4c6bw4787HonXNyWdO4A+czx8xbo2g</late xit>zN <latexit sha1_base64=" n3BreBzdCgo8Wf+t9NvW0ZOVfFo=">AB6nicbVA9Sw NBEJ3zM8avqKXNkiBYhbtYaBmwsZKI5gOSI+xt9pIle3 vH7pwQj/wEGwtFbP1Fdv4bN8kVmvhg4PHeDPzgkQKg 67aytb2xubRd2irt7+weHpaPjlolTzXiTxTLWnYAaL oXiTRQoeSfRnEaB5O1gfD3z249cGxGrB5wk3I/oUIlQ MIpWun/q3/ZLFbfqzkFWiZeTCuRo9EtfvUHM0ogrZJIa 0/XcBP2MahRM8mxlxqeUDamQ961VNGIGz+bnzolZ1YZ kDWthSufp7IqORMZMosJ0RxZFZ9mbif143xfDKz4R KUuSKLRaFqSQYk9nfZCA0ZygnlCmhb2VsBHVlKFNp2h D8JZfXiWtWtW7qNbuvEq9nMdRgFMowzl4cAl1uIEGNI HBEJ7hFd4c6bw4787HonXNyWdO4A+czx8xbo2g</late xit> Great Italian restaurant with authentic food and great service! Recommend! We ordered pasta, and it was very tasty. Would recommend this place to anyone. We visited this place last week. The waiters were friendly, and the food was great! zi <latexit sha1_base64=" CR7L54o1YeU9LsNQE40W9Uq2lJ0=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjWVE8wHJEfY2e8mSvb 1jd06IR36CjYUitv4iO/+Nm+QKTXw8Hhvhpl5QSKFQ df9dgobm1vbO8Xd0t7+weFR+fikbeJUM95isYx1N6CGS 6F4CwVK3k0p1EgeSeY3Mz9ziPXRsTqAacJ9yM6UiIU jKV7p8GYlCujV3AbJOvJxUIUdzUP7qD2OWRlwhk9SY nucm6GdUo2CSz0r91PCEsgkd8Z6likbc+Nni1Bk5t8qQ hLG2pZAs1N8TGY2MmUaB7Ywojs2qNxf/83ophtd+JlS SIldsuShMJcGYzP8mQ6E5Qzm1hDIt7K2EjamDG06JRu Ct/ryOmnXa95lrX7nVRuVPI4inEFLsCDK2jALTShBQ xG8Ayv8OZI58V5dz6WrQUnzmFP3A+fwBaWo27</late xit>zi <latexit sha1_base64=" CR7L54o1YeU9LsNQE40W9Uq2lJ0=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjWVE8wHJEfY2e8mSvb 1jd06IR36CjYUitv4iO/+Nm+QKTXw8Hhvhpl5QSKFQ df9dgobm1vbO8Xd0t7+weFR+fikbeJUM95isYx1N6CGS 6F4CwVK3k0p1EgeSeY3Mz9ziPXRsTqAacJ9yM6UiIU jKV7p8GYlCujV3AbJOvJxUIUdzUP7qD2OWRlwhk9SY nucm6GdUo2CSz0r91PCEsgkd8Z6likbc+Nni1Bk5t8qQ hLG2pZAs1N8TGY2MmUaB7Ywojs2qNxf/83ophtd+JlS SIldsuShMJcGYzP8mQ6E5Qzm1hDIt7K2EjamDG06JRu Ct/ryOmnXa95lrX7nVRuVPI4inEFLsCDK2jALTShBQ xG8Ayv8OZI58V5dz6WrQUnzmFP3A+fwBaWo27</late xit> r1 <latexit sha1_base64=" FHLusiRugfXq9yLDHJXeoRyDTw=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFIpYBG8uI5gOSI+xt9pIle3 vH7pwQjvwEGwtFbP1Fdv4bN8kVmvhg4PHeDPzgkQKg 67RS2tnd294r7pYPDo+OT8ulZx8SpZrzNYhnrXkANl 0LxNgqUvJdoTqNA8m4wvV343SeujYjVI84S7kd0rEQo GEUrPeihNyxX3Zq7BNkXk6qkKM1LH8NRjFLI6QSWpM 3MT9DOqUTDJ56VBanhC2ZSOed9SRSNu/Gx56pxcWmVE wljbUkiW6u+JjEbGzKLAdkYUJ2bdW4j/ef0Uwxs/Eyp JkSu2WhSmkmBMFn+TkdCcoZxZQpkW9lbCJlRThjadkg3 BW395k3TqNe+6Vr/3qs1KHkcRLqACV+BA5pwBy1oA4 MxPMrvDnSeXHenY9Va8HJZ87hD5zPH/k7jXs=</late xit>r1 <latexit sha1_base64=" FHLusiRugfXq9yLDHJXeoRyDTw=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFIpYBG8uI5gOSI+xt9pIle3 vH7pwQjvwEGwtFbP1Fdv4bN8kVmvhg4PHeDPzgkQKg 67RS2tnd294r7pYPDo+OT8ulZx8SpZrzNYhnrXkANl 0LxNgqUvJdoTqNA8m4wvV343SeujYjVI84S7kd0rEQo GEUrPeihNyxX3Zq7BNkXk6qkKM1LH8NRjFLI6QSWpM 3MT9DOqUTDJ56VBanhC2ZSOed9SRSNu/Gx56pxcWmVE wljbUkiW6u+JjEbGzKLAdkYUJ2bdW4j/ef0Uwxs/Eyp JkSu2WhSmkmBMFn+TkdCcoZxZQpkW9lbCJlRThjadkg3 BW395k3TqNe+6Vr/3qs1KHkcRLqACV+BA5pwBy1oA4 MxPMrvDnSeXHenY9Va8HJZ87hD5zPH/k7jXs=</late xit> ri <latexit sha1_base64=" 6CgGLJgqbLWVCp8/rI9tiO4P6nI=">AB+XicbVC7Ts MwFL3hWcorwMhitUJiqpIywFiJhbFI9CG1UeQ4bmvVsS PbqVRF/RMWBhBi5U/Y+BucNgO0HMny0Tn3yscnSjnTx vO+na3tnd29/cpB9fDo+OTUPTvapkpQjtEcqn6EdaUM 0E7hlO+6miOIk47UXT+8LvzajSTIonM09pkOCxYCNG sLFS6LrDSPJYzxN75Spki9Ctew1vCbRJ/JLUoUQ7dL+G sSRZQoUhHGs98L3UBDlWhFOF9VhpmKyRSP6cBSgROq g3yZfIGurBKjkVT2CIOW6u+NHCe6CGcnE2wmet0rxP+ 8QWZGd0HORJoZKsjqoVHGkZGoqAHFTFi+NwSTBSzWRG ZYIWJsWVbQn+pc3SbfZ8G8azUe/3qVdVTgEmpwDT 7cQgseoA0dIDCDZ3iFNyd3Xpx352M1uWUOxfwB87nDz 4jk/E=</latexit>ri <latexit sha1_base64=" 6CgGLJgqbLWVCp8/rI9tiO4P6nI=">AB+XicbVC7Ts MwFL3hWcorwMhitUJiqpIywFiJhbFI9CG1UeQ4bmvVsS PbqVRF/RMWBhBi5U/Y+BucNgO0HMny0Tn3yscnSjnTx vO+na3tnd29/cpB9fDo+OTUPTvapkpQjtEcqn6EdaUM 0E7hlO+6miOIk47UXT+8LvzajSTIonM09pkOCxYCNG sLFS6LrDSPJYzxN75Spki9Ctew1vCbRJ/JLUoUQ7dL+G sSRZQoUhHGs98L3UBDlWhFOF9VhpmKyRSP6cBSgROq g3yZfIGurBKjkVT2CIOW6u+NHCe6CGcnE2wmet0rxP+ 8QWZGd0HORJoZKsjqoVHGkZGoqAHFTFi+NwSTBSzWRG ZYIWJsWVbQn+pc3SbfZ8G8azUe/3qVdVTgEmpwDT 7cQgseoA0dIDCDZ3iFNyd3Xpx352M1uWUOxfwB87nDz 4jk/E=</latexit> rN <latexit sha1_base64=" +u2ZEi3F18mUZuF5me4Z7PCAJH8=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjZVENB+QHGFvM5cs2d s7dveEcOQn2FgoYusvsvPfuEmu0MQHA4/3ZpiZFySCa +O6305hY3Nre6e4W9rbPzg8Kh+ftHWcKoYtFotYdQOqU XCJLcONwG6ikEaBwE4wuZn7nSdUmsfy0UwT9CM6kjzk jBorPajB3aBcdWvuAmSdeDmpQo7moPzVH8YsjVAaJqjW Pc9NjJ9RZTgTOCv1U40JZRM6wp6lkao/Wx6oycW2VI wljZkoYs1N8TGY20nkaB7YyoGetVby7+5/VSE17GZd JalCy5aIwFcTEZP43GXKFzIipJZQpbm8lbEwVZcamU7I heKsvr5N2veZd1ur3XrVRyeMowhlU4AI8uIG3EITWs BgBM/wCm+OcF6cd+dj2Vpw8plT+APn8wclPo2Y</late xit>rN <latexit sha1_base64=" +u2ZEi3F18mUZuF5me4Z7PCAJH8=">AB6nicbVA9Sw NBEJ2LXzF+RS1tlgTBKtzFQsuAjZVENB+QHGFvM5cs2d s7dveEcOQn2FgoYusvsvPfuEmu0MQHA4/3ZpiZFySCa +O6305hY3Nre6e4W9rbPzg8Kh+ftHWcKoYtFotYdQOqU XCJLcONwG6ikEaBwE4wuZn7nSdUmsfy0UwT9CM6kjzk jBorPajB3aBcdWvuAmSdeDmpQo7moPzVH8YsjVAaJqjW Pc9NjJ9RZTgTOCv1U40JZRM6wp6lkao/Wx6oycW2VI wljZkoYs1N8TGY20nkaB7YyoGetVby7+5/VSE17GZd JalCy5aIwFcTEZP43GXKFzIipJZQpbm8lbEwVZcamU7I heKsvr5N2veZd1ur3XrVRyeMowhlU4AI8uIG3EITWs BgBM/wCm+OcF6cd+dj2Vpw8plT+APn8wclPo2Y</late xit> … … … … (b) The ri’s decoder accesses other reviews of the group (r1, ..., ri−1, ri+1, ..., rN). Figure 1: Unfolded graphical representation of the model. Our latent summarization model (which we call COPYCAT) captures this hierarchical organization and can be regarded as an extension of the vanilla text-VAE model (Bowman et al., 2016). COPYCAT uses two sets of latent variables as shown in Figure 1a. Namely, we associate each review group (equivalently, each product) with a continuous variable c, which captures the group’s ‘latent semantics’. In addition, we associate each individual review (ri) with a continuous variable zi, encoding the semantics of that review. The information stored in zi is used by the decoder pθ(ri|zi) to produce review text ri. The marginal log-likelihood of one group of reviews r1:N = (r1, . . . , rN) is given by log pθ(r1:N) = log Z " pθ(c) N Y i=1 Z pθ(ri|zi)pθ(zi|c)dzi  dc # , where we marginalize over variables c and z1:N. When generating a new review ri, given the set of previous reviews r1:i, the information about these reviews has to be conveyed through the latent representations c and zi. This bottleneck is undesirable, as it will make it hard for the model to pass fine-grain information. For example, at generation time, the model should be reusing named entities (e.g., product names or technical characteristics) from other reviews rather than ‘hallucinating’ or avoiding generating them at all, resulting in generic and non-informative text. We alleviate this issue by letting the decoder directly access other reviews. We can formulate this as an autoregressive model: pθ(r1:N|c) = N Y i=1 pθ(ri|r1, ..., ri−1, c). (1) As we discuss in Section 2.3, the conditioning is instantiated using the pointer-generator mechanism (See et al., 2017) and, thus, will specifically help in generating rare words (e.g., named entities). We want our summarizer to equally rely on every review, without imposing any order (e.g., temporal) on the generation process. Instead, as shown in Figure 1b, when generating ri, we let the decoder access all other reviews within a group, r−i = (r1, . . . , ri−1, ri+1, . . . , rN). This is closely related to pseudolikelihood estimation (Besag, 1975) or Skip-Thought’s objective (Kiros et al., 2015). The final objective that we maximize for each group of reviews r1:N: log Z pθ(c) N Y i=1 Z pθ(ri|zi, r i) pθ(zi|c)dzi  dc (2) We will confirm in ablation experiments that both hierarchical modeling (i.e., using c) and the direct conditioning on other reviews are beneficial. 2.2 Model Estimation As standard with VAEs and variational inference in general (Kingma and Welling, 2013), instead of directly maximizing the intractable marginal likelihood in Equation 2, we maximize its lower bound:3 L(θ, φ; r1:N) = E c∼qφ(c|r1:N) " N X i=1 E zi∼qφ(zi|ri,c)[log pθ(ri|zi, r i)] − N X i=1 DKL [qφ(zi|ri, c)||pθ(zi|c)] # −DKL [qφ(c|r1:N)||pθ(c)] . 3See the derivations in Appendix A.1. 5154 concat … review 1 hT1 1 hT1 1 wT1 1 wT1 1 w2 1 w2 1 w1 1 w1 1 h1 1 h1 1 h2 1 h2 1 concat wTN N wTN N … h1 N h1 N hTN N hTN N review N w1 N w1 N word embeddings … GRU hidden states ˆhˆh qφ(c|r1:N) qφ(c|r1:N) c sampling zN zN sampling qφ(z|rN, c) qφ(z|rN, c) p✓(z|c) p✓(z|c) Figure 2: Production of latent code zN for review rN. The lower bound includes two ‘inference networks’, qφ(c|r1:N) and qφ(zi|ri, c), which are neural networks parameterized with φ and will be discussed in detail in Section 2.3. They approximate the corresponding posterior distributions of the model. The first term is the reconstruction error: it encourages the quality reconstruction of the reviews. The other two terms are regularizers. They control the amount of information encoded in the latent representation by penalizing the deviation of the estimated posteriors from the corresponding priors, the deviation is measured in terms of the Kullback-Leibler (KL) divergence. The bound is maximized with respect to both the generative model’s parameters θ and inference networks’ parameters φ. Due to Gaussian assumptions, the Kullback-Leibler (KL) divergence terms are available in closed form, while we rely on the reparameterization trick (Kingma and Welling, 2013) to compute gradients of the reconstruction term. The inference network predicting the posterior for a review-specific variable qφ(zi|ri, c) is needed only in training and is discarded afterwards. In contrast, we will exploit the inference network qφ(c|r1:N) when generating summaries, as discussed in Section 3. 2.3 Design of Model Components 2.3.1 Text Representations A GRU encoder (Cho et al., 2014) embeds review words w to obtain hidden states h. Those representations are reused across the system, e.g., in the inference networks and the decoder. The full architecture used to produce the latent codes c and zi is shown in Figure 2. We make Gaussian assumptions for all distributions (i.e. posteriors and priors). As in Kingma and Welling (2013), we use separate linear projections (LPs) to compute the means and diagonal log-covariances. 2.3.2 Prior p(c) and posterior qφ(c|r1:N) We set the prior over group latent codes to the standard normal distribution, p(c) = N(c; 0, I). In order to compute the approximate posterior qφ(c|r1:N), we first predict the contribution (‘importance’) of each word in each review αt i to the code of the group: αt i = exp(fα φ (mt i)) PN j=1 PTj k exp(fα φ (mk j )) , where Ti is the length of ri and fα φ is a feed-forward neural network (FFNN)4 which takes as input concatenated word embeddings and hidden states of the GRU encoder, mt i = [ht i ◦wt i], and returns a scalar. Next, we compute the intermediate representation with the weighted sum: ˆh = PN i=1 PTi t αt imt i. Finally, we compute the Gaussian’s parameters using the affine projections: µφ(r1:N) = Lˆh + bL log σφ(r1:N) = Gˆh + bG 2.3.3 Prior pθ(zi|c) and posterior qφ(zi|ri,c) To compute the prior on the review code zi, pθ(zi|c) = N(zi; µθ(c), Iσθ(c)), we linearly project the product code c. Similarly, to compute the parameters of the approximate posterior qφ(z|ri, c) = N(z; µφ(ri, c), Iσφ(ri, c)), we concatenate the last encoder’s state hTi i of the review ri and c, and perform affine transformations. 2.3.4 Decoder pθ(ri|zi, r i) To compute the distribution pθ(ri|zi, r i), we use an auto-regressive GRU decoder with the attention mechanism (Bahdanau et al., 2015) and a pointergenerator network. We compute the context vector ct i = att(st i, h i) by attending to all the encoder’s hidden states h i of the other reviews r i of the group, where the decoder’s hidden state st i is used as a query. The 4We use FFNNs with the tanh non-linearity in several model components. Whenever a FFNN is mentioned in the subsequent discussion, this architecture is assumed. 5155 hidden state of the decoder is computed using the GRU cell as st i = GRUθ(st−1 i , [wt i ◦ct−1 i ◦zi]). (3) The cell inputs the previous hidden state st−1 i , as well as concatenated word embedding wt i, context vector ct−1 i , and latent code zi. Finally, we compute the word distributions using the pointer-generator network gθ: pθ(ri|zi, r i) = T Y t=1 gθ(rt i|st i, ct i, wt i, r i) (4) The pointer-generator network computes two internal word distributions that are hierarchically aggregated into one distribution (Morin and Bengio, 2005). One distribution assigns probabilities to words being generated using a fixed vocabulary, and another one probabilities to be copied directly from the other reviews r i. In our case, the network helps to preserve details and, especially, to generate rare tokens. 3 Summary Generation Given reviews r1:N, we generate a summary that reflects common information using trained components of the model. Formally, we could sample a new review from pθ(r|r1:N) = E c∼qφ(c|r1:N)  E z∼pθ(z|c) [pθ(r|z, r1:N)]  . As we argued in the introduction and will revisit in experiments, a summary or summarizing review, should be generated relying on the mean of the reviews’ latent code. Consequently, instead of sampling z from pθ(z|c) = N(z; µθ(c), Iσθ(c)), we set it to µθ(c). We also found beneficial, in terms of evaluation metrics, not to sample c but instead to rely on the mean predicted by the inference network qφ(c|r1:N). 4 Experimental Setup 4.1 Datasets Our experiments were conducted on business customer reviews from the Yelp Dataset Challenge and Amazon product reviews (He and McAuley, 2016). These were pre-processed similarly to Chu and Liu (2019), and the corresponding data statistics are Dataset Training Validation Yelp 38,776/1,012,280 4,311/113,373 Amazon 183,103/4,566,519 9,639/240,819 Table 2: Data statistics after pre-processing. The format in the cells is Businesses/Reviews and Products/Reviews for Yelp and Amazon, respectively. shown in Table 2. Details of the pre-processing are available in Appendix A.2. These datasets present different challenges to abstractive summarization systems. Yelp reviews contain much personal information and irrelevant details which one may find unnecessary in a summary. Our summarizer, therefore, needs to distill important information in reviews while abstracting away from details such as a listing of all items on the menu, or mentions of specific dates or occasions upon which customers visited a restaurant. On the contrary, in Amazon reviews, we observed that users tend to provide more objective information and specific details that are useful for decision making (e.g., the version of an electronic product, its battery life, its dimensions). In this case, it would be desirable for our summarizer to preserve this information in the output summary. For evaluation, we used the same 100 humancreated Yelp summaries released by Chu and Liu (2019). These were generated by Amazon Mechanical Turk (AMT) workers, who summarized 8 input reviews. We created a new test for Amazon reviews following a similar procedure (see Appendix A.6 for details). We sampled 60 products and 8 reviews for each product, and they were shown to AMT workers who were asked to write a summary. We collected three summaries per product, 28 products were used for development and 32 for testing. 4.2 Experimental Details We used GRUs (Cho et al., 2014) for sequential encoding and decoding we used GRUs. We randomly initialized word embeddings that were shared across the model as a form of regularization (Press and Wolf, 2017). Further, optimization was performed using Adam (Kingma and Ba, 2014). In order to overcome the “posterior collapse” (Bowman et al., 2016), both for our model and the vanilla VAE baseline, we applied cyclical annealing (Fu et al., 2019). The reported ROUGE scores are based on F1 (see Appendix A.3 for details on hyperparameters). 5156 R1 R2 RL Copycat 0.2947 0.0526 0.1809 MeanSum 0.2846 0.0366 0.1557 LexRank 0.2501 0.0362 0.1467 Opinosis 0.2488 0.0278 0.1409 VAE 0.2542 0.0311 0.1504 Clustroid 0.2628 0.0348 0.1536 Lead 0.2634 0.0372 0.1386 Random 0.2304 0.0244 0.1344 Oracle 0.2907 0.0527 0.1863 Table 3: ROUGE scores on the Yelp test set. 4.3 Baseline Models Opinosis is a graph-based abstractive summarizer (Ganesan et al., 2010) designed to generate short opinions based on highly redundant texts. Although it is referred to as abstractive, it can only select words from the reviews. LexRank is an unsupervised algorithm which selects sentences to appear in the summary based on graph centrality (sentences represent nodes in a graph whose edges have weights denoting similarity computed with tf-idf). A node’s centrality can be measured by running a ranking algorithm such as PageRank (Page et al., 1999). MeanSum5 is the unsupervised abstractive summarization model (Chu and Liu, 2019) discussed in the introduction. We also trained a vanilla text VAE model (Bowman et al., 2016) with our GRU encoder and decoder. When generating a summary for r1, ..., rN, we averaged the means of qφ(zi|ri). Finally, we used a number of simple summarization baselines. We computed the clustroid review for each group as follows. We took each review from a group and computed ROUGE-L with respect to all other reviews. The review with the highest ROUGE score was selected as the clustroid review. Furthermore, we sampled a random review from each group as the summary, and constructed the summary by selecting the leading sentences from each review of a group. Additionally, as an upper bound, we report the performance of an oracle review, i.e., the highest-scoring review in a group when computing ROUGE-L against reference summaries. 5For experiments on Yelp, we used the checkpoint provided by the authors, as we obtained very similar ROUGE scores when retraining the model. R1 R2 RL Copycat 0.3197 0.0581 0.2016 MeanSum 0.2920 0.0470 0.1815 LexRank 0.2874 0.0547 0.1675 Opinosis 0.2842 0.0457 0.1550 VAE 0.2287 0.0275 0.1446 Clustroid 0.2928 0.0441 0.1778 Lead 0.3032 0.0590 0.1578 Random 0.2766 0.0472 0.1695 Oracle 0.3398 0.0788 0.2160 Table 4: ROUGE scores on the Amazon test set. 5 Evaluation Results 5.1 Automatic Evaluation As can be seen in Tables 3 and 4, our model, Copycat, yields the highest scores on both Yelp and Amazon datasets. We observe large gains over the vanila VAE. We conjecture that the vanilla VAE struggles to properly represent the variety of categories under a single prior p(z). For example, reviews about a sweater can result in a summary about socks (see example summmaries in Appendix). This contrasts with our model which allows each group to have its own prior pθ(z|c) and access to other reviews during decoding. The gains are especially large on the Amazon dataset, which is very broad in terms of product categories. Our model also substantially outperforms MeanSum. As we will confirm in human evaluation, MeanSum’s summaries are relatively fluent at the sentence level but often contain hallucinations, i.e., information not present in the input reviews. 5.2 Human Evaluation Best-Worst Scaling We performed human evaluation using the AMT platform. We sampled 50 businesses from the human-annotated Yelp test set and used all 32 test products from the Amazon set. We recruited 3 workers to evaluate each tuple containing summaries from MeanSum, our model, LexRank, and human annotators. The reviews and summaries were presented to the workers in random order and were judged using BestWorst Scaling (Louviere and Woodworth, 1991; Louviere et al., 2015). BWS has been shown to produce more reliable results than ranking scales (Kiritchenko and Mohammad, 2016). Crowdworkers were asked to judge summaries according to the 5157 Fluency Coherence Non Red. Opinion Cons. Overall Copycat 0.5802 0.5161 0.4722 -0.0909 0.3818 MeanSum -0.5294 -0.4857 0.0270 -0.6235 -0.7468 LexRank -0.7662 -0.8293 -0.7699 0.3500 -0.5278 Gold 0.6486 0.8140 0.6667 0.3750 0.8085 Table 5: Human evaluation results in terms of the Best-Worst scaling on the Yelp dataset. Fluency Coherence Non Red. Opinion Cons. Overall Copycat 0.4444 0.3750 0.0270 -0.4286 -0.1429 MeanSum -0.6410 -0.8667 -0.6923 -0.7736 -0.8305 LexRank -0.2963 -0.3208 -0.3962 0.4348 0.1064 Gold 0.3968 0.7097 0.7460 0.6207 0.7231 Table 6: Human evaluation results in terms of the Best-Worst scaling on the Amazon dataset. criteria listed below (we show an abridged version below, the full set of instructions is given in Appendix A.5). The non-redundancy and coherence criteria were taken from Dang (2005). Fluency: the summary sentences should be grammatically correct, easy to read and understand; Coherence: the summary should be well structured and well organized; Non-redundancy: there should be no unnecessary repetition in the summary; Opinion consensus: the summary should reflect common opinions expressed in the reviews; Overall: based on your own criteria (judgment) please select the best and the worst summary of the reviews. For every criterion, a system’s score is computed as the percentage of times it was selected as best minus the percentage of times it was selected as worst (Orme, 2009). The scores range from -1 (unanimously worst) to +1 (unanimously best). On Yelp, as shown in Table 5, our model scores higher than the other models according to most criteria, including overall quality. The differences with other systems are statistically significant for all the criteria at p < 0.01, using post-hoc HD Tukey tests. The difference in fluency between our system and gold summaries is not statistically significant. The results on Amazon are shown in Table 6. Our system outperforms other methods in terms of fluency, coherence, and non-redundancy. As with Yelp, it trails LexRank according to the opinion consensus criterion. Additionally, LexRank is slightly preferable overall. All pairwise differences between our model and comparison systems are statistically significant at p < 0.05. Opinion consensus (OC) is a criterion that captures the coverage of common opinions, and it seems to play a different role in the two datasets. On Yelp, LexRank has better coverage compared to our model, as indicated by the higher OC score, but is not preferred overall. In contrast, on Amazon, while the OC score is on the same par, LexRank is preferred overall. We suspect that presenting a breadth of exact details on Amazon is more important than on Yelp. Moreover, LexRank tends to produce summaries that are about 20 tokens longer than ours resulting in better coverage of input details. Content Support The ROUGE metric relies on unweighted n-gram overlap and can be insensitive to hallucinating facts and entities (Falke et al., 2019). For example, referring to a burger joint as a veggie restaurant is highly problematic from a user perspective but yields only marginal differences in ROUGE. To investigate how well the content of the summaries is supported by the input reviews, we performed a second study. We used the same sets as in the human evaluation in Section 5.2, and split MeanSum and our system’s summaries into sentences. Then, for each summary sentence, we assigned 3 AMT workers to assess how well the sentence is supported by the reviews. Workers were advised to read the reviews and rate sentences using one of the following three options. Full support: all the content is reflected in the reviews; Partial support: only some content is reflected in the reviews; No support: content is not reflected in the reviews. The results in Table 7 indicate that our model is better at preserving information than MeanSum. 5158 Yelp Amazon Copycat MeanSum Copycat MeanSum Full 44.50 28.41 38.23 24.41 Partial 32.48 30.66 33.95 31.23 No 23.01 40.92 27.83 44.36 Table 7: Content support on Yelp and Amazon datasets, percentages. 6 Analysis Ablations To investigate the importance of the model’s individual components, we performed ablations by removing the latent variables (zi and c, one at a time), and attention over the other reviews. The models were re-trained on the Amazon dataset. The results are shown in Table 8. They indicate that all components play a role, yet the most significant drop in ROUGE was achieved when the variable z was removed, and only c remained. Summaries obtained from the latter system were wordier and looked more similar to reviews. Dropping the attention (w/o r i) results in more generic summaries as the model cannot copy details from the input. Finally, the smallest quality drop in terms of ROUGEL was observed when the variable c was removed. In the introduction, we hypothesized that using the mean of latent variables would result in more “grounded” summaries reflecting the content of the input reviews, whereas sampling would yield texts with many novel and potentially irrelevant details. To empirically test this hypothesis, we sampled the latent variables during summary generation, as opposed to using mean values (see Section 3). We indeed observed that the summaries were wordier, less fluent, and less aligned to the input reviews, as is also reflected in the ROUGE scores (Table 8). Copy Mechanism Finally, we analyzed which words are copied by the full model during summary generation. Generally, the model copies around 3-4 tokens per summary. We observed a tendency to copy product-type specific words (e.g., shoes) as well as brands and names. 7 Related Work Extractive weakly-supervised opinion summarization has been an active area of research. A recent example is Angelidis and Lapata (2018). First, they learn to assign sentiment polarity to review segments in a weakly-supervised fashion. Then, they induce aspect labels for segments relying on R1 R2 RL w/o r i 0.2866 0.0454 0.1863 w/o c 0.2767 0.0507 0.1919 w/o z 0.2926 0.0416 0.1739 Sampling 0.2563 0.0434 0.1716 Full 0.3197 0.0581 0.2016 Table 8: Ablations, ROUGE scores on Amazon. a small sample of gold summaries. Finally, they use a heuristic to construct a summary of segments. Opinosis (Ganesan et al., 2010) does not use any supervision. The model relies on redundancies in opinionated text and PoS tags in order to generate short opinions. This approach is not well suited for the generation of coherent long summaries and although it can recombine fragments of input text, it cannot generate novel words and phrases. LexRank (Erkan and Radev, 2004) is an unsupervised extractive approach which builds a graph in order to determine the importance of sentences, and then selects the most representative ones as a summary. Isonuma et al. (2019) introduce an unsupervised approach for single review summarization, where they rely on latent discourse trees. Other earlier approaches (Gerani et al., 2014; Di Fabbrizio et al., 2014) relied on text planners and templates, while our approach does not require rules and can produce fluent and varied text. Finally, conceptually related methods were applied to unsupervised single sentence compression (West et al., 2019; Baziotis et al., 2019; Miao and Blunsom, 2016). The most related approach to ours is MeanSum (Chu and Liu, 2019) which treats a summary as a discrete latent state of an autoencoder. In contrast, we define a hierarchical model of a review collection and use continuous latent codes. 8 Conclusions In this work, we presented an abstractive summarizer of opinions, which does not use any summaries in training and is trained end-to-end on a large collection of reviews. The model compares favorably to the competitors, especially to the only other unsupervised abstractive multi-review summarization system. Furthermore, human evaluation of the generated summaries (by considering their alignment with the reviews) shows that those created by our model better reflect the content of the input. 5159 Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. Also, Stefanos Angelidis for help with the data as well as Jonathan Mallinson, Serhii Havrylov, and other members of Edinburgh NLP group for discussion. We gratefully acknowledge the support of the European Research Council (Titov: ERC StG BroadSem 678254; Lapata: ERC CoG TransModal 681760) and the Dutch National Science Foundation (NWO VIDI 639.022.518). References Stefanos Angelidis and Mirella Lapata. 2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3675–3686. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of International Conference on Learning Representations (ICLR). Regina Barzilay, Kathleen R McKeown, and Michael Elhadad. 1999. Information fusion in the context of multi-document summarization. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics, pages 550–557. Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, and Alexandros Potamianos. 2019. Seqˆ3: Differentiable sequence-to-sequence-to-sequence autoencoder for unsupervised abstractive sentence compression. In Proceedings of the Association for Computational Linguistics, pages 673–681. Julian Besag. 1975. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society: Series D (The Statistician), 24(3):179–195. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th annual meeting of the association of computational linguistics, pages 440–447. Samuel Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the Twentieth Conference on Computational Natural Language Learning (CoNLL). Giuseppe Carenini and Jackie Chi Kit Cheung. 2008. Extractive vs. nlg-based abstractive summarization of evaluative text: The effect of corpus controversiality. In Proceedings of the Fifth International Natural Language Generation Conference, pages 33–41. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Eric Chu and Peter Liu. 2019. Meansum: a neural model for unsupervised multi-document abstractive summarization. In Proceedings of International Conference on Machine Learning (ICML), pages 1223–1232. Hoa Trang Dang. 2005. Overview of duc 2005. In Proceedings of the document understanding conference, volume 2005, pages 1–12. Giuseppe Di Fabbrizio, Amanda Stent, and Robert Gaizauskas. 2014. A hybrid approach to multidocument summarization of opinions in reviews. pages 54–63. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22:457–479. Tobias Falke, Leonardo FR Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214–2220. Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. 2019. Cyclical annealing schedule: A simple approach to mitigating kl vanishing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 240– 250. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 340–348. Shima Gerani, Yashar Mehdad, Giuseppe Carenini, Raymond T Ng, and Bita Nejat. 2014. Abstractive summarization of product reviews using discourse structure. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1602–1613. 5160 Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th international conference on world wide web, pages 507–517. International World Wide Web Conferences Steering Committee. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. ACM. Masaru Isonuma, Toru Fujino, Junichiro Mori, Yutaka Matsuo, and Ichiro Sakata. 2017. Extractive summarization using multi-task learning with document classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2101–2110. Masaru Isonuma, Junichiro Mori, and Ichiro Sakata. 2019. Unsupervised neural single-document summarization of reviews via learning latent discourse structure and its ranking. In Proceedings of ACL. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Svetlana Kiritchenko and Saif M Mohammad. 2016. Capturing reliable fine-grained sentiment associations by crowdsourcing and best–worst scaling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 811–817. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294–3302. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages 177–180. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1):1–167. Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In Proceedings of International Conference on Learning Representations (ICLR). Jordan J Louviere, Terry N Flynn, and Anthony Alfred John Marley. 2015. Best-worst scaling: Theory, methods and applications. Cambridge University Press. Jordan J Louviere and George G Woodworth. 1991. Best-worst scaling: A model for the largest difference judgments. University of Alberta: Working Paper. Walaa Medhat, Ahmed Hassan, and Hoda Korashy. 2014. Sentiment analysis algorithms and applications: A survey. Ain Shams engineering journal, 5(4):1093–1113. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of the 16th international conference on World Wide Web, pages 171–180. ACM. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 319–328. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. Aistats, 5:246–252. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290. Bryan Orme. 2009. Maxdiff analysis: Simple counting, individual-level logit, and hb. Sequim, WA: Sawtooth Software. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 157–163. 5161 Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of Association for Computational Linguistics (ACL). Ivan Titov and Ryan McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of the 17th international conference on World Wide Web, pages 111–120. ACM. Peter West, Ari Holtzman, Jan Buys, and Yejin Choi. 2019. Bottlesum: Unsupervised and self-supervised sentence summarization using the information bottleneck principle. arXiv preprint arXiv:1909.07405. A Appendices A.1 Derivation of the Lower Bound To make the notation below less cluttered, we make a couple of simplifications: qφ(c|·) = qφ(c|r1:N) and qφ(z|i) = qφ(z|ri, c). log Z " pθ(c) N Y i=1 pθ(ri|c, r i)dc # = log Z " pθ(c) N Y i=1 Z pθ(ri, z|c, r i)dz  dc # = log Z " pθ(c)qφ(c|·) qφ(c|·) N Y i=1 Z pθ(ri, z|c, r i) qφ(z|i) qφ(z|i)dz  dc  = log E c∼qφ(c|·) " pθ(c) qφ(c|·) N Y i=1 E z∼qφ(z|i) pθ(ri, z|c, r i) qφ(z|i) # ≥ E c∼qφ(c|·) " N X i=1 log E z∼qφ(z|i) pθ(ri, z|c, r i) qφ(z|i) # −DKL [qφ(c|·))||pθ(c)] ≥ E c∼qφ(c|·) " N X i=1 E z∼qφ(z|i)  log pθ(ri, z|c, r i) qφ(z|i) # − DKL [qφ(c|·))||pθ(c)] = E c∼qφ(c|·) " N X i=1 E z∼qφ(z|i) [log pθ(ri|z, r i)] − N X i=1 DKL [qφ(z|i)||pθ(z|c)] # − DKL [qφ(c|·)||pθ(c)] (5) A.2 Dataset Pre-Processing We selected only businesses and products with a minimum of 10 reviews, and thee minimum and maximum length of 20 and 70 words respectively, popular groups above the 90th percentile were removed. And each group was set to contain 8 reviews during training. From the Amazon dataset we selected 4 categories: Electronics; Clothing, Shoes and Jewelry, Home and Kitchen; Health and Personal Care. A.3 Hyperparameters For sequential encoding and decoding, we used GRUs (Cho et al., 2014) with 600-dimensional hidden states. The word embeddings dimension was 5162 set to 200, and they were shared across the model (Press and Wolf, 2017). The vocabulary size was set to 50,000 most frequent words, and an extra 30,000 were allowed in the extended vocabulary, the words were lower-cased. We used the Moses’ (Koehn et al., 2007) reversible tokenizer and truecaser. Xavier uniform initialization (Glorot and Bengio, 2010) of 2D weights was used, and 1D weights were initialized with the scaled normal noise (σ = 0.1). We used the Adam optimizer (Kingma and Ba, 2014), and set the learning rate to 0.0008 and 0.0001 on Yelp and Amazon, respectively. For summary decoding, we used lengthnormalized beam search of size 5, and relied on latent code means. In order to overcome “posterior collapse” (Bowman et al., 2016) we applied cycling annealing (Fu et al., 2019) with r = 0.8 for both the z and c related KL terms, with a new cycle over approximately every 2 epochs over the training set. The maximum annealing scalar was set to 1 for z-related KL term in on both datasets, and 0.3 and 0.65 for c-related KL-term on Yelp and Amazon, respectively. The reported ROUGE scores are based on F1. The dimensions of the variables c and z were set to 600, and the c posterior’s scoring neural network had a 300-dimensional hidden layer and the tanh non-linearity. The decoder’s attention mechanism used a single layer neural network with a 200-dimensional hidden layer, and the tanh non-linearity. The copy gate in the pointer-generator network was computed with a 100-dimensional single-hidden layer network, with the same non-linearity. A.4 Human Evaluation Setup To perform the human evaluation experiments described in Sections 5.2 and 5.2 we combined both tasks into single Human Intelligence Tasks (HITs). Namely, the workers needed to mark sentences as described in Section 5.2, and then proceed to the task in Section 5.2. We explicitly asked then to re-read the reviews before each task. For worker requirements we set 98% approval rate, 1000+ HITS, Location: USA, UK, Canada, and the maximum score on a qualification test that we designed. The test was asking if the workers are native English speakers, and verifying that they correctly understand the instructions of both tasks by completing a mini version of the actual HIT. A.5 Full Human Evaluation Instructions • Fluency: The summary sentences should be grammatically correct, easy to read and understand. • Coherence: The summary should be well structured and well organized. The summary should not just be a heap of related information, but should build from sentence to sentence to a coherent body of information about a topic. • Non-redundancy: There should be no unnecessary repetition in the summary. Unnecessary repetition might take the form of whole sentences that are repeated, or repeated facts, or the repeated use of a noun or noun phrase (e.g., ”Bill Clinton”) when a pronoun (”he”) would suffice. • Opinion consensus: The summary should reflect common opinions expressed in the reviews. For example, if many reviewers complain about a musty smell in the hotel’s rooms, the summary should include this information. • Overall: Based on your own criteria (judgment) please select the best and the worst summary of the reviews. A.6 Amazon Summaries Creation First, we sampled 15 products from each of the Amazon review categories: Electronics; Clothing, Shoes and Jewelry; Home and Kitchen; Health and Personal Care. Then, we selected 8 reviews from each product to be summaries. We used the same requirements for workers as for human evaluation in A.4. We assigned 3 workers to each product, and instructed them to read the reviews and produce a summary text. We followed the instructions provided in (Chu and Liu, 2019), and used the following points in our instructions: • The summary should reflect common opinions about the product expressed in the reviews. Try to preserve the common sentiment of the opinions and their details (e.g. what exactly the users like or dislike). For example, if most reviews are negative about the sound quality, then also write negatively about it. Please make the summary coherent and fluent in terms of sentence and information structure. Iterate over the written summary 5163 multiple times to improve it, and re-read the reviews whenever necessary. • Please write your summary as if it were a review itself, e.g. ’This place is expensive’ instead of ’Users thought this place was expensive’. Keep the length of the summary reasonably close to the average length of the reviews. • Please try to write the summary using your own words instead of copying text directly from the reviews. Using the exact words from the reviews is allowed, but do not copy more than 5 consecutive words from a review . A.7 Latent Codes Analysis We performed a qualitative analysis of the latent variable z to shed additional light on what it stores and sensitivity of the decoder with respect to its input. Specifically, we computed the mean value for the variable c using the approximate posterior qφ(c|r1, ..., rN), and then sampled z from the prior pθ(z|c). First, we observed that the summaries produced using the mean of z are more fluent. For example, in Table 9, the z1 based summary states: “The picture quality is very good, but it doesn’t work aswell as the picture.”, where the second phrase could be rewritten in a more fluent matter. Also, we found that mean based summaries contain less details that are partially or not supported by the reviews. For example, in the table, z1 based summary mentions Kindle Fire HD 8.9’, while the dimension is never mentioned in the reviews. Finally, different samples were observed to result in texts that contain different details about the reviews. For example, z1 sample results in the summary that captures the picture quality, while z3 that the item is good for its price. Overall, we observed that the latent variable z stores content based information, that results in syntactically diverse texts, yet reflecting information about the same businesses or product. A.8 Repetitions We observed an increase in the amount of generated repetitions both in the reconstructed reviews and summaries when the z-related KL term is low and beam search is used. Intuitively, the initial input to the decoder becomes less informative, and it starts relying on learned local statistics to perform reconstruction. When the KLD vanishes to zero, the decoder essentially becomes a uncoditional language model, for which beam search was shown to lead to generation of repetitions (Holtzman et al., 2019). 5164 mean z Bought this for my Kindle Fire HD and it works great. I have had no problems with it. I would recommend it to anyone looking for a good quality cable. z1 Works fine with my Kindle Fire HD 8.9”. The picture quality is very good, but it doesn’t work as well as the picture. I’m not sure how long it will last, but i am very disappointed. z2 This is a great product. I bought it to use with my Kindle Fire HD and it works great. I would recommend it to anyone who is looking for a good quality cable for the price. z3 Good product, does what it is supposed to do. I would recommend it to anyone looking for a HDMI cable. Rev 1 Love this HDMI cable , but it only works with HD Kindle and not the HDX Kindle which makes me kinda crazy . I have both kinds of Kindles but the HDX is newer and I can ’t get a cable for the new one . I guess my HD Kindle will be my Amazon Prime Kindle . It works great ! Rev 2 I got a kindle for Christmas . I had no idea how to work one etc . I discovered you can stream movies to your tv and this is the exact cable for it . Works great and seems like its good quality . A bit long though. Rev 3 this is great for watching movies from kindle to tv . Now the whole family can enjoy rather than one person at a time . Picture quality isn ’t amazing , but it ’s good . Rev 4 I just received this wire in the mail , and it does not work in the slightest . I am very displeased with this product . Rev 5 Works great ! ! Now I can watch Netflix on my TV with my Kindle Fire HD ... I love it and so will you ! Rev 6 Works awesome . Great item for the price.Got it very quickly . Was as described in the ad.Exactly what I was looking for. Rev 7 I plugged it into my Kindle fire HD and into the TV and works perfectly . Have had no problems with it ! Rev 8 This is just what I was looking for to connect my Kindle Fire to view on our TV ! Great price too! Table 9: Amazon summaries of the full model with sampled and mean assignment to z. The assignment to c was fixed, and was the mean value based on the approximate posterior qφ(c|r1, ..., rN). 5165 Ours This place is the best Mexican restaurant i have ever been to. The food was delicious and the staff was very friendly and helpful. Our server was very attentive and made sure we were taken care of. We’ll be back for sure. MeanSum A little on the pricey side but I was pleasantly surprised. We went there for a late lunch and it was packed with a great atmosphere, food was delicious and the staff was super friendly. Very friendly staff. We had the enchiladas with a few extra veggies and they were delicious! Will be back for sure! LexRank We will definitely be going back for more great food! Everything we had so far was great. The staff was great and so nice! Good food! Great atmosphere! Gold This place is simply amazing! Its the best Mexican spot in town. Their tacos are delicious and full of flavor. They also have chips and salsa that is to die for! The salsa is just delectable! It has a sweet, tangy flavor that you can’t find anywhere else. I highly recommend! Rev 1 Classic style Mexican food done nicely! Yummy crispy cheese crisp with a limey margarita will will win my heart any day of the week! The classic frozen with a chambord float is my favorite and they do it well here.The salad carbon was off the chain- served on a big platter and worked for me as 2 full dinners. Rev 2 For delicious Mexican food in north Phoenix, try La Pinata. This was our visit here and we were so stunned by the speed in which our food was prepared that we were sure it was meant for another table. The food was hot and fresh and well within our budget. My husband got a beef chimichanga and I got bean and cheese burrito, which we both enjoyed. Chips and salsa arrived immediately; the salsa tastes sweeter than most and is equally flavorful. We will be back! Rev 3 Good food! Great atmosphere! Great patio. Staff was super friendly and accommodating! We will definately return! Rev 4 This place was very delicious! I got the ranchero burro and it was so good. The plate could feed at least two people. The staff was great and so nice! I also got the fried ice cream it was good. I would recommend this place to all my friends. Rev 5 We arrive for the first time, greeted immediately with a smile and seated promptly. Our server was fantastic, he was funny and fast. Gave great suggestions on the menu and we both were very pleased with the food, flavors, speed and accuracy of our orders. We will definitely be going back for more great food! Rev 6 Well was very disappointed to see out favorite ice cream parlor closed but delightfully surprised at how much we like this spot!!Service was FANTASTIC TOP notch!! Taco was great lots of cheese. Freshly deep fried shell not like SO MANY Phoenix mex restaurants use! Enchilada was very good. My wife really enjoyed her chimichanga. My moms chilli reanno was great too. Everything we had so far was great. We will return. Highly recommended. Rev 7 I’m only on the salsa and it’s just as fabulous as always. I love the new location and the decor is beautiful. Open 5 days and the place is standing room only. To the previous negative commentor, they are way took busy to fill an order for beans. Go across the street....you’ll be angry lol. Rev 8 I just tried to make a reservation for 15 people in March at 11 am on a Tuesday and was informed by a very rude female. She said ”we do not take reservations” and I asked if they would for 15 people and she said ” I told you we don’t take reservations” and hung up on me. Is that the way you run a business? Very poor customer service and I have no intentions of ever coming there or recommending it to my friends. Table 10: Yelp summaries produced by different models. 5166 Ours This place is the worst service I’ve ever had. The food was mediocre at best. The service was slow and the waiter was very rude. I would not recommend this place to anyone who wants to have a good time at this location. MeanSum I love the decor, but the food was mediocre. Service is slow and we had to ask for refills. They were not able to do anything and not even charge me for it. It was a very disappointing experience and the service was not good at all. I had to ask for a salad for a few minutes and the waitress said he didn’t know what he was talking about. All I can say is that the staff was nice and attentive. I would have given 5 stars if I could. LexRank Food was just okay, server was just okay. The atmosphere was great, friendly server. It took a bit long to get a server to come over and then it took our server a while to get our bread and drinks. However there was complementary bread served.The Pizza I ordered was undercooked and had very little sauce.Macaroni Grill has unfortunately taken a dive. Went to dinner with 4 others and had another bad experience at the Macaroni Grill. Gold I’m really not a fan of Macaroni Grill, well, at least THIS Macaroni Grill. The staff is slow and really doesn’t seem to car about providing quality service. It took well over 30 minutes to get my food and the place wasn’t even packed with people. I ordered pizza and it didn’t taste right. I think it wasn’t fully cooked. I won’t be coming back. Rev 1 10/22/2011 was the date of our visit. Food was just okay, server was just okay. The manager climbed up on the food prep counter to fix a light. We felt like that was the most unsanitary thing anyone could do - he could have just come from the restroom for all we knew. Needless to say, lackluster service, mediocre food and lack of concern for the cleanliness of the food prep area will guarantee we will NEVER return. Rev 2 We like the food and prices are reasonable. Our biggest complaint is the service. It took a bit long to get a server to come over and then it took our server a while to get our bread and drinks. They really need to develop a better sense of teamwork. While waiting for things there were numerous servers standing around gabbing. It really gave us the impression of ”Not my table.” ”Not my problem.” Only other complaint is they need to get some rinse aid for the dishwasher. I had to dry our bread plates when the hostess gave them to us. Rev 3 Not enough staff is on hand the two times I have been in to properly pay attention to paying customers. I agree that the portions have shrunk over the years, and the effort is no longer there. It is convenient to have nearby but not worth my time when other great restaurants are around. Wish I could rate it better but it’s just not that good at all. Rev 4 Went to dinner with 4 others and had another bad experience at the Macaroni Grill. When will we ever learn? The server was not only inattentive, but p o’d when we asked to be moved to another table. When the food came it was at best, luke warm. They had run out of one of our ordered dishes, but didn’t inform us until 20 minutes after we had ordered. Running out at 6:00 p.m.: Really? More delay and no apologies. There is no excuse for a cold meal and poor service. We will not go back since the Grill seems not to care and there are plenty of other restaurants which do. Rev 5 The service is kind and friendly. However there was complementary bread served.The Pizza I ordered was undercooked and had very little sauce.Macaroni Grill has unfortunately taken a dive. Best to avoid the place or at the very least this location. Rev 6 I know this is a chain, but Between this and Olive Garden, I would def pick this place. Service was great at this location and food not bad at all, although not excellent, I think it still deserves a good 4 stars Rev 7 I had a 2 for 1 $9.00 express dinner coupon so we order up 2 dinners to go. The deal was 9 min or its free, it took 20, but since I was getting 2 meals for $9.00 I did not make a fuss. The actual pasta was fine and amount was fair but it had maybe a 1/4 of a chicken breast. The chicken tasted like it came from Taco Bell, VERY processed. The sauce straight from a can. I have had much better frozen dinners. My husband and I used to like Macaroni Grill it sad too see its food go so down hill. Rev 8 The atmosphere was great, friendly server. Although the food I think is served from frozen. I ordered mama trio. The two of three items were great. Plate came out hot, couldn’t touch it. Went to eat lasagna and was ice cold in the center, nit even warm. The server apologized about it offered new one or reheat this one. I chose a new one to go. I saw her go tell manager. The manager didn’t even come over and say anything. I was not even acknowledged on my way out and walked past 3 people. I will not be going back. Over priced for frozen food. Table 11: Yelp summaries produced by different models. 5167 Ours My wife and i have been here several times now and have never had a bad meal. The service is impeccable, and the food is delicious. We had the steak and lobster, which was delicious. I would highly recommend this place to anyone looking for a good meal. MeanSum Our first time here, the restaurant is very clean and has a great ambiance. I had the filet mignon with a side of mashed potatoes. They were both tasty and filling. I’ve had better at a chain restaurant, but this is a great place to go for a nice dinner or a snack. Have eaten at the restaurant several times and have never had a bad meal here. LexRank Had the filet... Really enjoyed my filet and slobster. In addition to excellent drinks, they offer free prime filet steak sandwiches. I have had their filet mignon which is pretty good, calamari which is ok, scallops which aren’t really my thing, sour dough bread which was fantastic, amazing stuffed mushrooms. Very good steak house. Gold The steak is the must have dish at this restaurant. One small problem with the steak is that you want to order it cooked less than you would at a normal restaurant. They have the habit of going a bit over on the steak. The drinks are excellent and the stuffed mushrooms as appetizers were amazing. This is a classy place that is also romantic. The staff pays good attention to you here. Rev 1 The ambiance is relaxing, yet refined. The service is always good. The steak was good, although not cooked to the correct temperature which is surprising for a steakhouse. I would recommend ordering for a lesser cook than what you normally order. I typically order medium, but at donovan’s would get medium rare. The side dish menu was somewhat limited, but we chose the creamed spinach and asparagus, both were good. Of course, you have to try the creme brulee Yum! Rev 2 Hadn’t been there in several years and after this visit I remember why, I don’t like onions or shallots in my macaroni and cheese. The food is good but not worth the price just a very disappointing experience and I probably won’t go back Rev 3 My wife and I come here every year for our anniversary (literally every year we have been married). The service is exceptional and the food quality is top-notch. Furthermore, the happy hour is one of the best in the Valley. In addition to excellent drinks, they offer free prime filet steak sandwiches. I highly recommend this place for celebrations or a nice dinner out. Rev 4 I get to go here about once a month for educational dinners. I have never paid so don’t ask about pricing. I have had their filet mignon which is pretty good, calamari which is ok, scallops which aren’t really my thing, sour dough bread which was fantastic, amazing stuffed mushrooms. The vegetables are perfectly cooked and the mashed potatoes are great. At the end we get the chocolate mousse cake that really ends the night well. I have enjoyed every meal I have eaten there. Rev 5 Very good steak house. Steaks are high quality and the service was very professional. Attentive, but not hovering. Classic menus and atmosphere for this kind of restaurant. No surprises. A solid option, but not a clear favorite compared to other restaurants in this category. Rev 6 Had a wonderful experience here last night for restaurant week. Had the filet... Which was amazing and cooked perfectly with their yummy mashed potatoes and veggies. The bottle of red wine they offered for an additional $20 paired perfectly with the dinner. The staff were extremely friendly and attentive. Can’t wait to go back! Rev 7 The seafood tower must change in selection of seafood, which is good, which is also why mine last night was so fresh fresh delicious. Its good to know that you can get top rate seafood in Phoenix. Bacon wrapped scallops were very good, and I sacrificied a full steak (opting for the filet medallion) to try the scallops. I asked for medium rare steak, but maybe shouldve asked for rare...my cousin had the ribeye and could not have been any happier than he was :) yum for fancy steak houses. Its an ultra romantic place to, fyi.the wait staff is very attentive. Rev 8 Donovans, how can you go wrong. Had some guests in town and some fantastic steaks paired with some great cabernets. Really enjoyed my filet and lobster. Table 12: Yelp summaries produced by different models. 5168 Ours I love this tank. It fits well and is comfortable to wear. I wish it was a little bit longer, but I’m sure it will shrink after washing. I would recommend this to anyone. MeanSum I normally wear a large so it was not what I expected. It’s a bit large but I think it’s a good thing. I’m 5 ’4 ”and the waist fits well. I’m 5 ’7 and this is a bit big. LexRank I’m 5 ’4 ’and this tank fits like a normal tank top, not any longer. The only reason I’m rating this at two stars is because it is listed as a ’long’ tank top and the photo even shows it going well past the models hips, however I’m short and the tank top is just a normal length. I bought this tank to wear under shirts when it is colder out. I was trying to find a tank that would cover past my hips, so I could wear it with leggings. Gold Great tank top to wear under my other shirts as I liking layering and the material has a good feel. There was a good choice of colors to pick from. Although, the top is a thin material I don’t mind since I wear it under something else. Rev 1 The description say it long... NOT so it is average. That’s why I purchased it because it said it was long. This is a basic tank.I washed it and it didn’t warp but did shrink a little. Nothing to brag about. Rev 2 I’m 5 ’4 ’and this tank fits like a normal tank top, not any longer. I was trying to find a tank that would cover past my hips, so I could wear it with leggings. Don’t order if you’re expecting tunic length. Rev 3 This shirt is OK if you are layering for sure. It is THIN and runs SMALL. I usually wear a small and read the reviews and ordered a Medium. It fits tight and is NOT long like in the picture. Glad I only purchased one. Rev 4 The tank fit very well and was comfortbale to wear. The material was thinner than I expected, and I felt it was probably a little over priced. I’ve bought much higher quality tanks for $5 at a local store. Rev 5 The only reason I’m rating this at two stars is because it is listed as a ’long’ tank top and the photo even shows it going well past the models hips, however I’m short and the tank top is just a normal length. Rev 6 I usually get them someplace out but they no longer carry them. I thought I would give these a try. I received them fast, although I did order a brown and got a black (which I also needed a black anyway). They were a lot thinner than I like but they are okay. Rev 7 Every women should own one in every color. They wash well perfect under everything. Perfect alone. As I write I’m waiting on another of the same style to arrive. Just feels quality I don’t know how else to explain it, but I’m sure you get it ladies! Rev 8 I bought this tank to wear under shirts when it is colder out. I bought one in white and one in an aqua blue color. They are long enough that the color peeks out from under my tops. Looks cute. I do wish that the neck line was a bit higher cut to provide more modest coverage of my chest. Table 13: Amazon summaries produced by different models. 5169 Ours This is the best acupressure mat I have ever used. I use it for my back pain and it helps to relieve my back pain. I have used it for several months now and it seems to work well. I would recommend it to anyone. MeanSum I have used this for years and it works great. I have trouble with my knee pain, but it does help me to get the best of my feet. I have had no problems with this product. I have had many compliments on it and is still in great shape. LexRank I ordered this acupressure mat to see if it would help relieve my back pain and at first it seemed like it wasn’t doing much, but once you use it for a second or third time you can feel the pain relief and it also helps you relax. its great to lay on to relax you after a long day at work. I really like the Acupressure Mat. I usually toss and turn a lot when I sleep, now I use this before I go to bed and it helps relax my body so that I can sleep more sound without all the tossing and turning. Gold These acupressure mats are used to increase circulation and reduce body aches and pains and are most effective when you can fully relax. Consistence is key to receive the full, relaxing benefits of the product. However, if you are using this product after surgery it is responsible to always consult with your physician to ensure it is right for your situation. Rev 1 Always consult with your doctor before purchasing any circulation product after surgery. I had ankle surgery and this product is useful for blood circulation in the foot. This increase in circulation has assisted with my ability to feel comfortable stepping down on the foot (only after doc said wait bearing was okay). I use it sitting down barefoot. Rev 2 I really like the Acupressure Mat. I usually toss and turn a lot when I sleep, now I use this before I go to bed and it helps relax my body so that I can sleep more sound without all the tossing and turning. Rev 3 I used the mat the first night after it arrived and every-other night since. After 2 ten minute sessions, I am sold. I have slept much better at night - I think it puts me in a more relaxed state, making it easier to fall asleep. A rather inexpensive option to relieving tension in my neck, upper back and shoulders. Rev 4 This is the best thing! you can use socks if your feet are tender to walk on it or bare foot if you can take it. I use it every morning to walk across to jump start my body. when I think about it I will lay on it, it feels wonderful. Rev 5 I love these spike mats and have recommended them to everyone that has had any kind of body ache. its great to lay on to relax you after a long day at work. Helps with pain in my back and pain in my legs. Its not a cure, but it sure helps with the healing process. Rev 6 I wish I hadn’t purchased this item. I just can’t get use to it, it’s not comfortable. I have not seen any benefits from using it but that could be because I don’t relax or use it for long enough. Rev 7 I run an alternative health center and use Acupressure pin mats from different sources to treat my patients, but this product is the patients choice, they are asking allways for this mat against other brands so I changed all of them for Britta, moreover the S & H was outstanding and really fast. Rev 8 I ordered this acupressure mat to see if it would help relieve my back pain and at first it seemed like it wasn’t doing much, but once you use it for a second or third time you can feel the pain relief and it also helps you relax. I use it almost everyday now and it really helps. I recommed this product and this seller. Table 14: Amazon summaries produced by different models.
2020
461
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5170–5184 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5170 (Re)construing Meaning in NLP Sean Trott University of California, San Diego [email protected] Tiago Timponi Torrent Federal University of Juiz de Fora [email protected] Nancy Chang Google [email protected] Nathan Schneider Georgetown University [email protected] Abstract Human speakers have an extensive toolkit of ways to express themselves. In this paper, we engage with an idea largely absent from discussions of meaning in natural language understanding—namely, that the way something is expressed reflects different ways of conceptualizing or construing the information being conveyed. We first define this phenomenon more precisely, drawing on considerable prior work in theoretical cognitive semantics and psycholinguistics. We then survey some dimensions of construed meaning and show how insights from construal could inform theoretical and practical work in NLP. 1 Introduction Natural language is a versatile tool for allowing humans to express all manner of communicative intents, from simple descriptions of the entities and situations in their direct experience to elaborate rhetorical flights of fancy. Many NLP applications, such as information extraction, question answering, summarization, and dialogue systems, have restricted their scope to what one might call objective information content—relatively uncontroversial facts that systems can infer from an utterance, store in a database and reason about. While it is tempting to equate such information with the meaning of an utterance, a large body of literature in linguistics and psycholinguistics argues that an utterance conveys much more than a simple set of facts: it carries with it a halo of intimations arising from the speaker’s choices, including considerations of perspective, emphasis, and framing. That is, linguistic choices subtly color meaning; far from merely conveying objective facts, they reflect how speakers conceptualize meaning and affect listeners’ interpretations in predictable ways. Take, for example, this metaphor-rich portrayal of a newborn as a tyrant over her parental subjects: (1) Nora’s arrival brought a regime change. Life under her adorable tyranny was filled with squawking, swaddling and ceaseless sleepinput-output cycles. We were relieved when she relaxed her tiny iron grip. This report of new parenthood describes a major life change along with everyday caregiver routines, but its emphasis is on the parents’ experience of being suppressed (under) and controlled (grip) by a creature who is cast, variously, as a tyrant (regime), a bird (squawk), and a relentless machine (sleepinput-output cycles, iron grip)—albeit a (subjectively) adorable one. The power of linguistic choices to shape understanding is also evident in more mundane (and wellstudied) examples: (2) a. Chuck bought a car from Jerry. Jerry sold a car to Chuck. Jerry paid Chuck for the car. b. I work at Microsoft. I work for Microsoft. c. The statue stands in the plaza. The statue is standing in the plaza. Each set includes sentences that convey roughly the same facts—i.e. they could describe the same scenario—but nonetheless differ in various respects. The familiar framing differences between buy/sell/ pay (2a) focus attention on different participants and subevents in a commercial transaction. (2b) involves a subtler difference in emphasis, where the choice of at highlights the location of the work, while for evokes how that work benefits the employer. Grammatical marking can also shift event connotations, as illustrated by the stative vs. temporary contrast in (2c). Such distinctions illustrate the general phenomenon of construal, which we claim has been neglected in NLP. We believe that a proper recog5171 nition of construal would provide a unified framework for addressing a wide range of issues involving meaning and linguistic variation, opening the way to systems that more closely approximate (actually) natural language. This paper surveys the theoretical and empirical landscape related to construal phenomena and makes the case for its relevance to NLP. After clarifying the terms adopted here (§2), we lay out a few key dimensions of construed meaning (§3) and then elaborate on some mechanisms of construal (§4). A trio of case studies illustrate how different types of construal can challenge NLP systems (§5). We end with some conclusions and suggestions for how to begin addressing these challenges (§6). 2 Meaning and construal Our view of construal and its close companion meaning is rooted in both frame-based and cognitive semantic traditions. The notion that words and other linguistic units evoke background scenes along with specific perspectives on those scenes is captured by Fillmore’s (1977) slogan, MEANINGS ARE RELATIVIZED TO SCENES. This idea has deeper consequences than merely assigning different semantic roles to examples like (2a). As Langacker (1993, p. 460) observes, “any given situation can be viewed in multiple if not infinitely many ways. Starting from the same basic conceptual content...we can form an endless variety of specific conceptions by making alternate choices in regard to the many dimensions of construal.” This view of linguistic meaning—which we might call inherently multivalent—is more flexible than in many theoretical and computational treatments, particularly truth-conditional approaches that liken meanings to facts in a database. The visual domain offers a more informative analog: a photographic or artistic rendering of a scene can vary in vantage point, viewing distance, objects in sight or in focus, color and lighting choices, etc. (Langacker, 1993; Talmy, 1988). Context matters, too: a painting hanging on a preschool wall may be received differently if displayed in a museum. Just as there is no one objective, context-independent depiction of a scene, there are many valid ways to present an idea through language. We thus extend Fillmore’s slogan to include all kinds of conceptual content (beyond scenes); the broader communicative context; and the effect of choices made as part of the construal process: MEANINGS ARE RELATIVIZED TO CONTENT, CONTEXT AND CONSTRUAL. Below we elaborate on how each of these interrelated factors affects construed meaning. Conceptual content. We assume that linguistic units can evoke and combine all kinds of conceptual content, including open-ended world knowledge (entities, actions, events, relations, etc.) as well as more schematic structures often associated with grammar and function words. Crucially, concepts must also be amenable to certain kinds of transformation (e.g., shifts in perspective or granularity) as part of construal; see below.1 Communicative context. We take meaning to encompass scene-level entities and events, discourse-level information about the interlocutors and their communicative intents, and other phenomena straddling the (fuzzy) semantic-pragmatic boundary, related to attention (e.g., profiling and perspective) and conditions of usage falling under what Fillmore (1985) dubbed “U-Semantics” (in contrast to truth-oriented “T-Semantics”).2 Contextual factors (e.g., the interlocutors’ identity, beliefs, goals, conceptual repertoire, cultural backgrounds) can radically alter construed meaning. On this view, meaning is not arbitrarily subjective, or merely intersubjective; it is also constrained by all aspects of the communicative context. Construal. We define construal as a dynamic process of meaning construction, in which speakers and hearers encode and decode, respectively, some intended meaning in a given communicative context. To do so, they draw on their repertoire of linguistic and conceptual structures, composing and transforming them to build coherent interpretations consistent with the speaker’s lexical, grammatical, and other expressive choices.3 We take construal to be fundamental to all language use, though how much construal and what 1We are not here concerned with precisely how concepts are represented or learned, since we believe the insights related to construal apply broadly across theoretical frameworks. 2For example, only U-Semantics can explain why “the children are on the bus” is preferred over “the children are in the bus” if the bus is in transit, despite referring to the same spatial relationship. 3Both speakers and hearers engage in construal: speakers, in choosing how to present the idea, experience or other content they wish to convey; hearers, in reconstructing that intended meaning. Words like ‘analysis’ and ‘interpretation’ should thus be understood as applying to meaning construction by either interlocutor. (We do not focus here on the many differences between comprehension and production.) 5172 kinds of construal vary across interpretations.4 In the simplest cases, the relevant components fit neatly together (à la compositional semantics). But many (or even most) utterances involve a myriad of disparate structures—conceptual, linguistic, and contextual—that may need to be transformed, (re)categorized, or otherwise massaged to be integrated into a single coherent whole. This conceptual flexibility is not arbitrary: the space of combinatorial options is delimited by construal operations defined with respect to certain privileged construal dimensions. A number of dimensions and operations have been proposed, many motivated by general cognitive processes; we will review some of these in §3, and illustrate how they are engaged during language use in §4. This inclusive, flexible view of meaning has broad implications for a wide variety of linguistic phenomena, and many parallels in prior work—far too many to address exhaustively here. We restrict our current scope in several ways: (1) While some aspects of context will be mentioned below, we do not address many phenomena related to pragmatic inference (e.g. politeness, indirect requests). (2) Though many construal dimensions are relevant cross-linguistically, we will not address typological patterns in the lexical, grammatical, and cultural conventions that influence construal. (3) We highlight construal phenomena that are psycholinguistically attested and/or relevant to NLP research. 3 Dimensions of construed meaning Several (partial) taxonomies of construal dimensions have been proposed in the cognitive linguistics literature (Langacker, 1993; Talmy, 1988; Croft and Wood, 2000; Taylor, 1995; Casad, 1995); see Croft and Cruse (2004) for an overview. We will not attempt to reconcile their many differences in terminology and organization, but instead present selected dimensions most relevant for NLP. 3.1 Perspective Languages have many ways of describing scenes from a specific PERSPECTIVE (or vantage point). The spatial domain provides clear examples: a cup might be described as being left or right of some other object, depending on whose perspective is adopted; or explicitly marked as being on my/your/ 4Conventionality plays an important role here: initially creative expressions may require less construal as they become entrenched and their meanings more efficiently accessed. her/Sue’s left. Likewise, the same motion event can be described relative to differing deictic centers (e.g., the arrival in (1) can also be viewed as a departure from the hospital). Perspective can extend beyond the spatial domain. The use of past tense in (1) indicates the speaker’s retrospective viewpoint. Differences in opinion, belief state or background have also been treated as perspective shifting. Talmy’s (1988) taxonomy defines a broader version of PERSPECTIVE that includes distribution of attention. Descriptions of a static scene can adopt a dynamic perspective, evoking the experience of moving through the scene (“There is a house every now and then through the valley”); these descriptions can be even more explicit, as with fictive motion (“The road runs through the valley”) (Talmy, 1996; Matlock, 2004b). Psycholinguistic evidence. Grammatical person can affect which perspective a comprehender adopts when reading about an event (Brunyé et al., 2009) and which actions they are most likely to remember (Ditman et al., 2010). Fictive motion can also influence the way comprehenders conceptualize a static scene (Matlock, 2004a,b). Relevant NLP research. Perspective is crucial for understanding spatial language, e.g. for robotics (§5.2) and other kinds of situated language. Work on grounding referents from natural language descriptions has incorporated visual perspective as another source of information about the intended referent (Devin and Alami, 2016; Ros et al., 2010; Trafton et al., 2005). 3.2 Prominence PROMINENCE (or salience) refers to the relative attention focused on different elements in a scene (Langacker, 1993; Talmy, 1988). Languages have various devices for highlighting, or profiling, some elements over others (or leaving them implicit). For example, verbs like those in (2a) differ in which elements in a larger scene are preferentially expressed. Similarly, many spatial and temporal adpositions involve an asymmetric profiling of one entity relative to another; thus “the painting is above the piano” and “the piano is below the painting” describe the same situation but differ in focus. Verbal and constructional alternations also manipulate prominence: The active/passive pair “Microsoft employs me” and “I am employed by Microsoft” differ in profiling the employer and 5173 speaker, respectively. Similarly, transitive “I rolled the ball” vs. intransitive “The ball rolled” differ in whether the ball-roller is even mentioned. Languages also differ systematically in how motion events are most idiomatically expressed, in particular in whether the main verb encodes (and foregrounds) the manner (English run) or path (Spanish entrar) of motion. Psycholinguistic evidence. A speaker’s decisions about which features to encode in the main verb versus a satellite can influence which events comprehenders find most similar (Billman and Krych, 1998) and which features they tend to remember (Gennari et al., 2002). In other work, Fausey and Boroditsky (2010) found that descriptions of an accidental event using a transitive construction (“She had ignited the napkin”) led participants to assign more blame to the actor involved, and even demand higher financial penalties, than descriptions using non-agentive constructions (“The napkin had ignited”). In language production, there are a number of factors influencing which construction a speaker chooses (e.g., current items in discourse focus (Bresnan et al., 2007), lexical and syntactic priming (Pickering and Ferreira, 2008)). Relevant NLP research. Recovering implicit information is widely studied in NLP, and deciding which information to express is key to NLG and summarization. We mention three examples exploring how choices of form lend prominence to certain facets of meaning in ways that strongly resonate with our claims about construal. Greene and Resnik (2009) show that syntactic framing—e.g. active (Prisoner murders guard) vs. passive (Guard is murdered)—is relevant to detecting speaker sentiment about violent events. Hwang et al. (2017) present an annotation scheme for capturing adpositional meaning construal (as in (2b)). Rather than disambiguate the adposition with a single label, they separately annotate an adposition’s role with respect to a scene (e.g. employment) and the aspect of meaning brought into prominence by the adposition itself (e.g., benefactive for vs. locative at). This more flexibly accounts for meaning extensions and resolves some annotator difficulties. Rohde et al. (2018) studied the construction of discourse coherence by asking participants to insert a conjunction (and, or, but, so, because, before) where none was originally present, before an explicit discourse adverbial (e.g. in other words). They found that some contexts licensed multiple alternative conjunctions, each expressing a different coherence relation—i.e., distinct implicit relations can be inferred from the same passage. This speaks to the challenge of fully annotating discourse coherence relations and underscores the role of both linguistic and contextual cues in coherence. 3.3 Resolution Concepts can be described at many levels of RESOLUTION—from highly detailed to more schematic. We include here both specificity (e.g., pug < dog < animal < being) and granularity (e.g., viewing a forest at the level of individual leaves vs. branches vs. trees). Lexical items and larger expressions can evoke and combine concepts at varying levels of detail (“The gymnast triumphantly landed upright” vs. “A person did something”). Psycholinguistic evidence. Resolution is related to basic-level categories (Rosch et al., 1976; Lakoff, 1987; Hajibayova, 2013), the most culturally and cognitively salient levels of a folk taxonomy. Speakers tend to use basic-level terms for reference (e.g., tree vs. entity/birch), and basic-level categories are more easily and quickly accessed by comprehenders (Mervis and Rosch, 1981; Rosch et al., 1976). Importantly, however, what counts as basic-level depends on the speaker’s domain expertise (Tanaka and Taylor, 1991). Speakers may deviate from basic-level terms under certain circumstances, e.g., when a more specific term is needed for disambiguation (Graf et al., 2016). Conceptualization is thus a flexible process that varies across both individual cognizers (e.g., as a function of their world knowledge) and specific communicative contexts. Relevant NLP research. Resolution is already recognized as important for applications such as text summarization and dialogue generation (Louis and Nenkova, 2012; Li and Nenkova, 2015; Ko et al., 2019a; Li et al., 2016; Ko et al., 2019b), e.g., in improving human judgments of informativity and relevance (Ko et al., 2019b). Also relevant is work on knowledge representation in the form of inheritance-based ontologies and lexica (e.g., FrameNet (Fillmore and Baker, 2009), ConceptNet (Liu and Singh, 2004)). 3.4 Configuration CONFIGURATION refers to internal-structural properties of entities, groups of entities, and events, 5174 indicating their schematic “shape” and “texture”: multiplicity (or plexity), homogeneity, boundedness, part-whole relations, etc. (Langacker, 1993; Talmy, 2000). To borrow an example from Croft (2012), a visitor to New England can describe stunning autumn leaves or foliage. Though both words indicate a multiplex perception, they exhibit a grammatical difference: the (plural) count noun leaves suggests articulated boundaries of multiple individuals, whereas the mass noun foliage suggests a more impressionistic, homogeneous rendering. This dimension includes many distinctions and phenomena related to aspect (Vendler, 1967; Comrie, 1976), including whether an event is seen as discrete (sneeze) or continuous (read); involves a change of state (leave vs. have); has a defined endpoint (read vs. read a book); etc. Lexical and grammatical markers of configuration properties interact in complex ways; see discussion of count/ mass and aspectual coercion in §4. Psycholinguistic evidence. Differences in grammatical aspect can modulate how events are conceptualized (Matlock, 2011). Stories written in imperfective aspect are remembered better; participants are also more likely to believe that the events in these stories are still happening (Magliano and Schleich, 2000) and build richer mental simulations of these events (Bergen and Wheeler, 2010). In turn, these differences in conceptualization have downstream consequences, ranging from judgments about an event’s complexity (Wampler and Wittenberg, 2019) to predictions about the consequences of a political candidate’s behavior on reelection (Fausey and Matlock, 2011). The mass/count distinction has attested psychological implications, including differences in word recognition time (Gillon et al., 1999) (see Fieder et al. (2014) for a review). Relevant NLP research. Configurational properties are closely linked to well-studied challenges at the syntax-semantic interface, in particular nominal and aspectual coercion effects (§4). Several approaches explicitly model coercion operations based on event structure representations (Moens and Steedman, 1988; Passonneau, 1988; Pulman, 1997; Chang et al., 1998), while others explore statistical learning of aspectual classes and features (Siegel and McKeown, 2000; Mathew and Katz, 2009; Friedrich and Palmer, 2014). Lexical resources have also been developed for aspectual annotation (Donatelli et al., 2018) and the count/ mass distinction (Schiehlen and Spranger, 2006; Kiss et al., 2017). 3.5 Metaphor The dimension of METAPHOR is broadly concerned with cross-domain comparison, in which speakers “conceptualize two distinct structures in relation to one another” (Langacker, 1993, p. 450). Metaphors have been analyzed as structured mappings that allow a target domain to be conceptualized in terms of a source domain (Lakoff and Johnson, 1980). Metaphors pervade language use, and exhibit highly systematic, extensible structure. For example, in English, events are often construed either as locations in space or as objects moving through space. Our experience of time is thus often described in terms of either motion toward future events (“we’re approaching the end of the year”), or the future moving toward us (“the deadline is barreling towards us”) (Boroditsky, 2000, 2001; Hendricks and Boroditsky, 2017; Núñez and Sweetser, 2006). Metaphor plays a role in our linguistic characterization of many other domains as well (Lakoff and Johnson, 1980). Psycholinguistic evidence. Different metaphors can shape a comprehender’s representation about the same event or concept in radically different ways. Thibodeau and Boroditsky (2011) found that describing a city’s crime problem as a beast or as a virus elicited markedly different suggestions about how best to address the problem, e.g., whether participants tended to endorse enforcement- or reform-based solutions. Similar effects of metaphor on event conceptualization have been found across other domains, such as cancer (Hauser and Schwarz, 2015; Hendricks et al., 2018) and climate change (Flusberg et al., 2017) (see Thibodeau et al. (2017) for a thorough review). Relevant NLP research. Considerable NLP work has addressed the challenge of metaphor detection and understanding (Narayanan, 1999; Shutova et al., 2010, 2013; Shutova, 2015). This work has made use of both statistical, bottom-up approaches to language modeling (Gutiérrez et al., 2016; Shutova et al., 2013), as well as knowledge bases such as MetaNet (Dodge et al., 2015; Stickles et al., 2014; David and Dancygier, 2017). 3.6 Summary The selective review of construal dimensions presented here is intended to be illustrative, not exhaustive or definitive. Returning to the visual anal5175 ogy, we can see these dimensions as primarily concerned with how (and what part of) a conceptual “scene” is perceived (PERSPECTIVE, PROMINENCE); the choice or categorization of which schematic structures are present (CONFIGURATION and METAPHOR); or both (RESOLUTION). We have omitted another high-level categorization dimension, SCHEMATIZATION, which includes concepts related to force dynamics, image schemas, and other experientially grounded schemas well discussed in the literature (Talmy, 2000). We have also not addressed pragmatic inference related to politeness (Brown and Levinson, 1987), indirect requests (Clark, 1979), and other aspects of communicative intent. Additionally, some phenomena are challenging to categorize within the dimensions listed here; a more complete analysis would include evidentality (Chafe and Nichols, 1986), modality (Mortelmans, 2007), light verb constructions (Wittenberg and Levy, 2017; Wittenberg et al., 2014), and more. Nonetheless, we hope this partial taxonomy provides a helpful entry point to relevant prior work and starting point for further alignment. 4 Construal in action How might construal work in practice? We have emphasized so far the flexibility afforded by the dimensions in §3. But we must also explain why some words and concepts make easier bedfellows than others. This section presents a thumbnail sketch of how the construal process copes with apparent mismatches, where it is the collective constraints of the input structures that guide the search for coherence. We focus on comprehension (similar processes apply in production), and assume some mechanism for proposing interpretations consisting of a set of conceptual structures and associated compatibility constraints. Compatibility constraints are analogous to various kinds of binding constraints proposed in the literature (variable binding, rolefiller bindings, unification bindings, and the like): they are indicators that two structures should be conceptualized as a single unit. But compatibility is softer and more permissive than identity or typecompatibility, in that it can also be satisfied with the help of construal operations. Some operations effect relatively subtle shifts in meaning; others have more dramatic effects, including changes to truth-conditional aspects of meaning. Below we illustrate how some example linguistic phenomena fit into the sketch just presented and mention connections to prior lines of work. Count/mass coercion. English nouns are flexible in their count/mass status (see §3.4). Atypical marking for number or definiteness can cause a shift, or coercion, in boundedness: plural or indefinite marking on mass nouns (a lemonade, two lemonades) yields a bounded interpretation (cups or bottles of lemonade). Conversely, count nouns with no determiner are coerced to an undifferentiated mass, via a phenomenon known as grinding (“there was mosquito all over the windshield”) (Pelletier and Schubert, 1989, 2003; Copestake and Briscoe, 1995). Here we see evidence of the outsize influence of tiny grammatical markers on manipulating lexical defaults in the construal process. Aspectual composition. Aspect is a prime arena for studying how multiple factors conspire to shape event construal. Verbs are associated with default aspectual classes that can be coerced under pressure from conflicting cues, where details of event structure systematically constrain possible coercions and their inferential consequences (Moens and Steedman, 1988; Talmy, 1988). In fact, aspectual coercion can be reanalyzed in terms of construal dimensions. For example, durative modifiers (e.g. for an hour) prefer to combine with atelic processes (lacking a defined endpoint, as in 3a) on which to impose a bound (analogous to count/mass coercion) and duration. Combination with any other aspectual class triggers different operations to satisfy that preference: (3) a. He {slept / ran} for an hour. b. He sneezed for an hour. c. He read the book for an hour. d. He left for an hour. A single sneeze, being a discrete event unlikely to last an hour, undergoes ITERATION into a series of sneezes (3b), illustrating a change in plexity (§3.4); while the book-reading in in (3c) is simply viewed as unfinished (cf. “He read the book”). The departure in (3d) is a discrete event, but unlike sneezing, it also results in a state change that is reversible and therefore boundable (cf. the iterative reading of “He broke the glass for an hour”, the non-permanent reading of 2c). Its coercion thus features multiple operations: a PROMINENCE shift to profile the result state of being gone; and then a BOUNDING that also reverses state, implying a return (Chang et al., 1998). 5176 Constructional coercion. The flagship example cited in the construction grammar literature (4a) has also been analyzed as a kind of coercion, serving to resolve conflicts between lexical and grammatical meaning (Goldberg, 1995, 2019): (4) a. She sneezed the napkin off the table. b. She {pushed / blew / sneezed / ?slept} the napkin off the table. Here, the verb sneeze, though not typically transitive or causal, appears in a Caused Motion argument structure construction, which pairs obliquetransitive syntax with a caused motion scene. The resulting conflict between its conventional meaning and its putative causal role is resolvable, however, by a commonsense inference that sneezing expels air, which can plausibly cause the napkin’s motion (cf. Forbes and Choi, 2017). This coercion, also described as role fusion, differs from the previous examples in manipulating the PROMINENCE of a latent component of meaning. Coercion doesn’t always succeed, however: presumably sneezing could only move a boulder with contextual support, and sleeping has a less plausibly forceful reading. In fact, construal depends on the interaction of many factors, including degree of conventionality (where push and blow are prototypical caused motion verbs), embodied and world knowledge (the relative forces of sneeze and sleep to napkin weight), and context.5 There is extensive psycholinguistic evidence of constructional coercion and the many factors influencing ease of construal (see Goldberg (2003, 2019) for reviews). Some of these phenomena have been analyzed within computational implementations of construction grammar (Bergen and Chang, 2005; Bryant, 2008; Bergen and Chang, 2013; Dodge and Petruck, 2014; Steels, 2017; Steels and Feldman, 2017; Matos et al., 2017), and have also been incorporated in corpus annotation schemes (Bonial et al., 2011; Hwang et al., 2014; Lyngfelt et al., 2018). Metonymy and metaphor. Metonymy and metaphor are associated with semantic mismatches 5A related theory is Dowty’s (1991) semantic proto-roles account, which links the grammatical subject/object asymmetry to two clusters of semantic features that are more agent-like (e.g., animacy) or patient-like (e.g., affectedness), respectively; associations between these proto-roles and grammatical subjects and objects are attested in comprehension (Kako, 2006; Pyykkönen et al., 2010) and have been investigated computationally (Reisinger et al., 2015; Rudinger et al., 2018). that trigger construal operations. A possible analysis of tiny iron grip from (1) illustrates both. First, the modifiers tiny and iron expect a physical entity, but grip is a (nominalized) action. This conflict triggers a profile shift (PROMINENCE) to the grip’s effector (a hand), effectively licensing a metonymy. A further conflict arises between the hand and its description as iron (unlikely to be literal unless the protagonist is of robotic lineage). A structural alignment (METAPHOR) then maps the iron’s strength to the grip’s force, which in turn maps to the degree of dictatorial control.6 We observe that multiple construal operations can occur in sequence; that a conceptual or linguistic element may afford more than one construal within the same analysis (grip as both a hand and metaphorical control); and that aspects of common sense, world knowledge, and culture (though not the focus of the present work) inevitably constrain construal options. 5 Case studies We turn to a few illustrations of how the pervasive effects of construal can arise in applied settings. 5.1 Case study 1: Conversational assistants Even simple tasks like rescheduling a meeting pose many challenges to dialogue systems, in both understanding users’ intents and formulating natural responses. Consider the following exchange: U-1: When is my 1-1 with Chuck? A-2: 4 PM today, in 15 minutes. U-3: Is there another slot soon? A-4: Not today, should I check tomorrow? U-5: Let’s push it to his tomorrow evening. A-6: Rescheduled 1-1 with Chuck for 2 PM tomorrow, 6 PM in Brazil. The agent’s first response (A-2) demonstrates sensitivity to PERSPECTIVE by providing a relative time. Interpreting “another slot soon” in the user’s follow-up (U-3) requires both understanding that another is implicitly defined in contrast to the existing slot (relying on PROMINENCE) and then inferring the appropriate RESOLUTION meant by soon (on the scale of hours, rather than minutes or seconds). The agent’s succinct response in (A-4) exploits PROMINENCE yet again, both by eliding reference to the sought-after open meeting slot with 6Alternatively, iron grip could be treated as an entrenched idiom with a readily accessible construal that tiny can modify. 5177 Chuck, and by using “tomorrow” (the direct object of “check”) as a metonymic shorthand for the joint constraints of the user’s and Chuck’s calendars. The next user turn (U-5) employs METAPHOR in its construal of an event as a physical object, capable of being pushed. The metaphorical destination (“his tomorrow evening”) requires consideration of differing time zones (PERSPECTIVE), as made explicit in the final agent turn (A-6). Interactions between situational context and the kinds of compatibility constraints discussed in §4 can also affect a dialogue system’s best response. A user asking a fitness tracking app “How long have I been running?” while panting around a track may be referring to the current run, but the same question asked while sitting at home is more likely wondering how long they’ve been habitually running. A successful response requires the integration of the constraints from (at least): the verb running, whose progressive marking is associated with ongoing processes, but ambiguous between a single run and a series of runs (CONFIGURATION); the present-perfect have been V-ing, which implies an internal view (PERSPECTIVE); and the situational context (is the user currently running?). 5.2 Case study 2: Human-robot interaction Situated interactions between humans and robots require the integration of language with other modalities (e.g., visual or haptic).7 Clearly, any spatially grounded referring expressions must be tailored to the interlocutors’ PERSPECTIVE (whether shared or not) (Kunze et al., 2017). Focus of attention (PROMINENCE) is especially important for systems that must interpret procedural language. Recipes, for example, are notoriously telegraphic, with rampant omissions of information that a human cook could easily infer in context (Ruppenhofer and Michaelis, 2010; Malmaud et al., 2014). Consider (5): (5) In a medium bowl, cream together the sugar and butter. Beat in the eggs, one at a time, then stir in the vanilla. The italicized words provide crucial constraints that would help a cook (human or robot) track the evolving spatial relations. The first in establishes 7Indeed, the needs of human-robot interaction have motivated extensions to Abstract Meaning Representation (Banarescu et al., 2013) beyond predicate-argument structure and entities to capture tense and aspect, spatial information, and speech acts (Bonial et al., 2019). the bowl as the reference point for the creaming action, whose result—the mixture of sugar and butter together—becomes the implicit landmark for the subsequent beating in of eggs and vanilla. Systems following instructions also require a means of segmenting continuous sensorimotor data and linking it to discrete linguistic categories (Regneri et al., 2013; Yagcioglu et al., 2018) (cf. the symbol grounding problem (Harnad, 1990)). This mapping may depend on flexibly adjusting RESOLUTION and CONFIGURATION based on linguistic cues (e.g., cut/dice/slice/sliver the apple). 5.3 Case study 3: Paraphrase generation Despite many advances, paraphrase generation systems remain far from human performance. One vexing issue is the lack of evaluation metrics that correlate with human judgments for tasks like paraphrase, image captioning, and textual entailment (see, e.g., Bhagat and Hovy, 2013; Pavlick and Kwiatkowski, 2019; Wang et al., 2019b). In particular, it is unclear how closely a good paraphrase should hew to all aspects of the source sentence. For example, should active/passive descriptions of the same scene, or the sets of sentences in (2), be considered meaning-equivalent? Or take the putative paraphrase below: (6) a. The teacher sat on the student’s left. b. Next to the children was a mammal. These could plausibly describe the same scene; should their differences across multiple dimensions (PERSPECTIVE, PROMINENCE, RESOLUTION) be rewarded or penalized for this diversity? A first step out of this quandary is to recognize construal dimensions and operations as a source of linguistic variability. Paraphrase generation and other semantically oriented tasks could incorporate these into system design and evaluation in taskspecific ways. 6 Discussion Throughout this paper, we have emphasized the flexible and multivalent nature of linguistic meaning, as evidenced by the construal phenomena described here. The effects of construal are ubiquitous: from conventional to creative language use, through morphemes and metaphors. Indeed, even the smallest forms can, like tiny tyrants, exert a transformative force on their surroundings, inducing anything from a subtle shift in emphasis to a 5178 radical reconceptualization. As illustrated in §5, this flexibility of language use poses a challenge for NLP practitioners. Yet crucially—and fortunately—construal is not random: variations in linguistic form correspond systematically to differences in construal. The dimensions of construal and their associated operations (§3 and §4) offer principled constraints that render the search for coherence more tractable. How, then, should we proceed? Our goal is for construal dimensions such as those highlighted in §3 to be incorporated into any research program aspiring to human-level linguistic behavior. Below, we describe several concrete recommendations for how to do this. More meaningful metrics. Taking construal seriously means rethinking how NLP tasks are designed and evaluated. Construal dimensions can provide a rubric for assessing tasks, datasets, and meaning representations (Abend and Rappoport, 2017) for which meaningful distinctions they make or require. (E.g.: Does it capture the level of RESOLUTION at which entities and events are described? Does it represent METAPHOR? Is it sensitive to the PROMINENCE of different event participants?) Such questions might also help guard against unintended biases like those recently found in NLP evaluations and systems (e.g., Caliskan et al., 2017; Gururangan et al., 2018). Popular NLU benchmarks (like SuperGLUE; Wang et al., 2019a) should be critically examined for potential construal biases, and contrasts should be introduced deliberately to probe whether systems are modeling lexical choices, grammatical choices, and meaning in the desired way (Naik et al., 2018; Kaushik et al., 2020; McCoy et al., 2019; Gardner et al., 2020). As a broader suggestion, datasets should move away from a one-size-fits-all attitude based on gold annotations. Ideally, evaluation metrics should take into account not only partial structure matches, but also similarity to alternate construals. Cognitive connections. The many connections between construal and the rest of cognition highlight the need for further interdisciplinary engagements in the study of construal. The psycholinguistics literature is a particularly rich source of construal-related data and human language benchmarks. Psycholinguistic data could also be used to probe neural language models (Futrell et al., 2018; Linzen and Leonard, 2018; van Schijndel and Linzen, 2018; Ettinger, 2020). How well do such models capture the phenomena reviewed in §3, and where do they fall short? A fuller account of the constellation of factors involved in construal should also take seriously the grounded, situated nature of language use (Harnad, 1990; Kiros et al., 2018; Bender and Koller, 2020; Bisk et al., 2020). Frameworks motivated by the linguistic insights mentioned in §2 (such as the work on computational construction grammar referenced in §4) and by growing evidence of embodied simulations as the basis for meaning (Narayanan, 1999; Bergen and Chang, 2005; Feldman, 2006; Bergen, 2012; Tamari et al., 2020) are especially relevant lines of inquiry. Much work remains to flesh out the construal dimensions, operations and phenomena preliminarily identified in §3 and §4, especially in connecting to typological, sociolinguistic, developmental, and neural constraints on conceptualization. We believe a concerted effort across the language sciences would provide valuable guidance for developing better NL systems and resources. 7 Conclusion As the saying goes, the camera doesn’t lie—but it may tell us only a version of the truth. The same goes for language. Some of the phenomena we have described may seem, at first glance, either too subtle to bother with or too daunting to tackle. But we believe it is both timely and necessary, as language technologies grow in scope and prominence, to seek a more robust treatment of meaning. We hope that a deeper appreciation of the role of construal in language use will spur progress toward systems that more closely approximate human linguistic intelligence. Acknowledgments We are grateful to Lucia Donatelli, Nick Hay, Aurelie Herbelot, Jena Hwang, Jakob Prange, Susanne Riehemann, Hannah Rohde, Rachel Rudinger, and anonymous reviewers for many helpful suggestions; and to the ACL 2020 organizers for planning a special theme, Taking Stock of Where We’ve Been and Where We’re Going. Special thanks to Nora Chang-Hay for finally relaxing her tiny iron grip. This research was supported in part by NSF award IIS-1812778. The FrameNet Brasil Lab is funded by CAPES grants 88887.125411/2016-00 and 88887.144043/2017-00. 5179 References Omri Abend and Ari Rappoport. 2017. The state of the art in semantic representation. In Proc. of ACL, pages 77–89, Vancouver, Canada. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proc. of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proc. of ACL. Benjamin Bergen and Nancy Chang. 2013. Embodied Construction Grammar. In Thomas Hoffmann and Graeme Trousdale, editors, The Oxford Handbook of Construction Grammar, pages 168–190. Oxford University Press, New York. Benjamin Bergen and Kathryn Wheeler. 2010. Grammatical aspect and mental simulation. Brain and Language, 112(3):150–158. Benjamin K. Bergen. 2012. Louder Than Words: The New Science of How the Mind Makes Meaning. Perseus Books Group, New York. Benjamin K. Bergen and Nancy Chang. 2005. Embodied Construction Grammar in simulation-based language understanding. In Jan-Ola Östman and Mirjam Fried, editors, Construction grammars: cognitive grounding and theoretical extensions, pages 147–190. John Benjamins, Amsterdam. Rahul Bhagat and Eduard Hovy. 2013. What is a paraphrase? Computational Linguistics, 39(3):463–472. Dorrit Billman and Meredyth Krych. 1998. Path and manner verbs in action: Effects of “skipping” or “exiting” on event memory. In Proc. of CogSci, volume 20, pages 156–161, Madison, Wisconsin. Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. arXiv:2004.10151 [cs]. Claire Bonial, Susan Windisch Brown, Jena D. Hwang, Christopher Parisien, Martha Palmer, and Suzanne Stevenson. 2011. Incorporating coercive constructions into a verb lexicon. In Proc. of the ACL 2011 Workshop on Relational Models of Semantics, pages 72–80, Portland, Oregon, USA. Claire Bonial, Lucia Donatelli, Stephanie M. Lukin, Stephen Tratz, Ron Artstein, David Traum, and Clare Voss. 2019. Augmenting Abstract Meaning Representation for human-robot dialogue. In Proc. of the First International Workshop on Designing Meaning Representations, pages 199–210, Florence, Italy. Lera Boroditsky. 2000. Metaphoric structuring: Understanding time through spatial metaphors. Cognition, 75(1):1–28. Lera Boroditsky. 2001. Does language shape thought?: Mandarin and English speakers’ conceptions of time. Cognitive Psychology, 43(1):1–22. Joan Bresnan, Anna Cueni, Tatiana Nikitina, and R. Harald Baayen. 2007. Predicting the dative alternation. In Gerlof Bouma, Irene Kraemer, and Joost Zwarts, editors, Cognitive foundations of interpretation, pages 69–94. KNAW, Amsterdam. Penelope Brown and Stephen C. Levinson. 1987. Politeness: Some universals in language usage, volume 4. Cambridge University Press. Tad T. Brunyé, Tali Ditman, Caroline R. Mahoney, Jason S. Augustyn, and Holly A. Taylor. 2009. When you and I share perspectives: Pronouns modulate perspective taking during narrative comprehension. Psychological Science, 20(1):27–32. John Bryant. 2008. Best-fit constructional analysis. Ph.D. dissertation, University of California, Berkeley, Berkeley, California. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Eugene Casad. 1995. Seeing it in more than one way. In John R. Taylor, editor, Language and the Cognitive Construal of the World, pages 23–49. Mouton de Gruyter, Berlin. Wallace L. Chafe and Johanna Nichols. 1986. Evidentiality: The linguistic coding of epistemology, volume 20. Ablex Publishing Corporation, Norwood, NJ. Nancy Chang, Daniel Gildea, and Srini Narayanan. 1998. A dynamic model of aspectual composition. In Proc. of CogSci, pages 226–231, Madison, WI, USA. Herbert H. Clark. 1979. Responding to indirect speech acts. Cognitive Psychology, 11(4):430–477. Bernard Comrie. 1976. Aspect: An introduction to the study of verbal aspect and related problems, volume 2. Cambridge University Press, New York. Ann Copestake and Ted Briscoe. 1995. Semiproductive polysemy and sense extension. Journal of Semantics, 12(1):15–67. William Croft. 2012. Verbs: Aspect and Causal Structure. Oxford University Press, Oxford, UK. 5180 William Croft and D. Alan Cruse. 2004. Conceptualization and construal operations. In Cognitive Linguistics, chapter 3. Cambridge University Press. William Croft and Esther J. Wood. 2000. Construal operations in linguistics and artificial intelligence. In Liliana Albertazzi, editor, Meaning and Cognition: A multidisciplinary approach, pages 51–78. John Benjamins, Amsterdam. Oana David and Barbara Dancygier. 2017. Computational approaches to metaphor: the case of MetaNet. In Barbara Dancygier, editor, The Cambridge Handbook of Cognitive Linguistics, pages 574–589. Cambridge University Press, Cambridge. Sandra Devin and Rachid Alami. 2016. An implemented theory of mind to improve human-robot shared plans execution. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 319–326. IEEE. Tali Ditman, Tad T. Brunyé, Caroline R. Mahoney, and Holly A. Taylor. 2010. Simulating an enactment effect: Pronouns guide action simulation during narrative comprehension. Cognition, 115(1):172–178. Ellen Dodge, Jisup Hong, and Elise Stickles. 2015. MetaNet: Deep semantic automatic metaphor analysis. In Proc. of the Third Workshop on Metaphor in NLP, pages 40–49, Denver, Colorado, USA. Ellen K. Dodge and Miriam R. L. Petruck. 2014. Representing caused motion in Embodied Construction Grammar. In Proc. of the ACL 2014 Workshop on Semantic Parsing, pages 39–44, Baltimore, MD. Lucia Donatelli, Michael Regan, William Croft, and Nathan Schneider. 2018. Annotation of tense and aspect semantics for sentential AMR. In Proc. of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWECxG-2018), pages 96–108, Santa Fe, New Mexico, USA. David R. Dowty. 1991. Thematic proto-roles and argument selection. Language, 67(3):547–619. Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48. Caitlin M. Fausey and Lera Boroditsky. 2010. Subtle linguistic cues influence perceived blame and financial liability. Psychonomic Bulletin & Review, 17(5):644–650. Caitlin M. Fausey and Teenie Matlock. 2011. Can grammar win elections? Political Psychology, 32(4):563–574. Jerome A. Feldman. 2006. From molecule to metaphor: a neural theory of language. MIT Press, Cambridge, MA. Nora Fieder, Lyndsey Nickels, and Britta Biedermann. 2014. Representation and processing of mass and count nouns: A review. Frontiers in Psychology, 5:589. Charles J. Fillmore. 1977. The case for case reopened. In Peter Cole and Jerrold M. Sadock, editors, Syntax and Semantics, vol. 8: Grammatical Relations, pages 59–81. Academic Press, New York. Charles J. Fillmore. 1985. Frames and the semantics of understanding. Quaderni di Semantica, 6(2):222– 254. Charles J. Fillmore and Collin Baker. 2009. A frames approach to semantic analysis. In Bernd Heine and Heiko Narrog, editors, The Oxford Handbook of Linguistic Analysis, pages 791–816. Oxford University Press, Oxford, UK. Stephen J. Flusberg, Teenie Matlock, and Paul H. Thibodeau. 2017. Metaphors for the war (or race) against climate change. Environmental Communication, 11(6):769–783. Maxwell Forbes and Yejin Choi. 2017. Verb Physics: relative physical knowledge of actions and objects. In Proc. of ACL, pages 266–276, Vancouver, Canada. Annemarie Friedrich and Alexis Palmer. 2014. Automatic prediction of aspectual class of verbs in context. In Proc. of ACL, pages 517–523, Baltimore, Maryland, USA. Richard Futrell, Ethan Wilcox, Takashi Morita, and Roger Levy. 2018. RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency. arXiv preprint arXiv:1809.01329. Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating NLP models via contrast sets. arXiv:2004.02709 [cs]. Silvia P. Gennari, Steven A. Sloman, Barbara C. Malt, and W. Tecumseh Fitch. 2002. Motion events in language and cognition. Cognition, 83(1):49–79. Brendan Gillon, Eva Kehayia, and Vanessa Taler. 1999. The mass/count distinction: Evidence from on-line psycholinguistic performance. Brain and Language, 68(1-2):205–211. Adele E. Goldberg. 1995. Constructions: A construction grammar approach to argument structure. University of Chicago Press, Chicago. Adele E. Goldberg. 2003. Constructions: A new theoretical approach to language. Trends in Cognitive Sciences, 7(5):219–224. 5181 Adele E. Goldberg. 2019. Explain Me This: Creativity, Competition, and the Partial Productivity of Constructions. Princeton University Press, Princeton. Caroline Graf, Judith Degen, Robert X.D. Hawkins, and Noah D. Goodman. 2016. Animal, dog, or dalmatian? Level of abstraction in nominal referring expressions. In Proc. of CogSci, pages 2261–2266, Philadelphia, PA. Stephan Greene and Philip Resnik. 2009. More than words: syntactic packaging and implicit sentiment. In Proc. of NAACL-HLT, pages 503–511, Boulder, Colorado. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proc. of NAACL-HLT, pages 107–112, New Orleans, Louisiana. E. Dario Gutiérrez, Ekaterina Shutova, Tyler Marghetis, and Benjamin Bergen. 2016. Literal and metaphorical senses in compositional distributional semantic models. In Proc. of ACL, pages 183–193, Berlin, Germany. Lala Hajibayova. 2013. Basic-level categories: A review. Journal of Information Science, 39(5):676– 687. Stevan Harnad. 1990. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1):335 – 346. David J. Hauser and Norbert Schwarz. 2015. The war on prevention: Bellicose cancer metaphors hurt (some) prevention intentions. Personality and Social Psychology Bulletin, 41(1):66–77. Rose K. Hendricks and Lera Boroditsky. 2017. New space–time metaphors foster new nonlinguistic representations. Topics in Cognitive Science, 9(3):800– 818. Rose K. Hendricks, Zsófia Demjén, Elena Semino, and Lera Boroditsky. 2018. Emotional implications of metaphor: Consequences of metaphor framing for mindset about cancer. Metaphor and Symbol, 33(4):267–279. Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O’Gorman, Vivek Srikumar, and Nathan Schneider. 2017. Double trouble: the problem of construal in semantic annotation of adpositions. In Proc. of *SEM, pages 178–188, Vancouver, Canada. Jena D. Hwang, Annie Zaenen, and Martha Palmer. 2014. Criteria for identifying and annotating caused motion constructions in corpus data. In Proc. of LREC, pages 1297–1304, Reykjavík, Iceland. Edward Kako. 2006. Thematic role properties of subjects and objects. Cognition, 101(1):1–42. Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In Proc. of ICLR. Jamie Kiros, William Chan, and Geoffrey Hinton. 2018. Illustrative language understanding: large-scale visual grounding with image search. In Proc. of ACL, pages 922–933, Melbourne, Australia. Tibor Kiss, Francis Jeffry Pelletier, Halima Husi´c, and Johanna Poppek. 2017. Issues of mass and count: Dealing with ‘dual-life’ nouns. In Proc. of *SEM, pages 189–198, Vancouver, Canada. Wei-Jen Ko, Greg Durrett, and Junyi Jessy Li. 2019a. Domain agnostic real-valued specificity prediction. In Proc. of AAAI, volume 33, pages 6610–6617. Wei-Jen Ko, Greg Durrett, and Junyi Jessy Li. 2019b. Linguistically-informed specificity and semantic plausibility for dialogue generation. In Proc. of NAACL-HLT, pages 3456–3466, Minneapolis, Minnesota. Lars Kunze, Tom Williams, Nick Hawes, and Matthias Scheutz. 2017. Spatial referring expression generation for hri: Algorithms and evaluation framework. In 2017 AAAI Fall Symposium Series, pages 27–35, Palo Alto, CA. George Lakoff. 1987. Women, fire, and dangerous things: what categories reveal about the mind. University of Chicago Press, Chicago. George Lakoff and Mark Johnson. 1980. Metaphors We Live By. University of Chicago Press, Chicago. Ronald W. Langacker. 1993. Universals of construal. In Proc. of Berkeley Linguistics Society, volume 19, pages 447–463. Junyi Jessy Li and Ani Nenkova. 2015. Fast and accurate prediction of sentence specificity. In Proc. of AAAI, pages 2281–2287, Austin, Texas. Junyi Jessy Li, Bridget O’Daniel, Yi Wu, Wenli Zhao, and Ani Nenkova. 2016. Improving the annotation of sentence specificity. In Proc. of LREC, pages 3921–3927, Portorož, Slovenia. Tal Linzen and Brian Leonard. 2018. Distinct patterns of syntactic agreement errors in recurrent networks and humans. In Proc. of CogSci, pages 690–695, Madison, WI. Hugo Liu and Push Singh. 2004. ConceptNet—a practical commonsense reasoning tool-kit. BT Technology Journal, 22(4):211–226. Annie Louis and Ani Nenkova. 2012. A corpus of general and specific sentences from news. In Proc. of LREC, pages 1818–1821, Istanbul, Turkey. Benjamin Lyngfelt, Lars Borin, Kyoko Ohara, and Tiago Timponi Torrent. 2018. Constructicography: Constructicon development across languages. John Benjamins, Amsterdam. 5182 Joseph P. Magliano and Michelle C. Schleich. 2000. Verb aspect and situation models. Discourse Processes, 29(2):83–112. Jonathan Malmaud, Earl Wagner, Nancy Chang, and Kevin Murphy. 2014. Cooking with semantics. In Proc. of the ACL 2014 Workshop on Semantic Parsing, pages 33–38, Baltimore, MD. Thomas A. Mathew and E. Graham Katz. 2009. Supervised categorization for habitual versus episodic sentences. In Sixth Midwest Computational Linguistics Colloquium, Bloomington, Indiana. Teenie Matlock. 2004a. The conceptual motivation of fictive motion. In Studies in Linguistic Motivation, pages 221–248. Mouton de Gruyter, Berlin. Teenie Matlock. 2004b. Fictive motion as cognitive simulation. Memory & Cognition, 32(8):1389– 1400. Teenie Matlock. 2011. The conceptual motivation of aspect. In Klaus-Uwe Panther and Gunter Radden, editors, Motivation in Grammar and the Lexicon, pages 133–148. John Benjamins Publishing, Amsterdam. Ely Matos, Tiago Torrent, Vânia Almeida, Adrieli Laviola, Ludmila Lage, Natália Marcao, and Tatiane Tavares. 2017. Constructional analysis using constrained spreading activation in a FrameNet-based structured connectionist model. In Computational Construction Grammar and Natural Language Understanding: Papers from the 2017 AAAI Spring Symposium, pages 222–229, Stanford, California. AAAI Press. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proc. of ACL, pages 3428–3448, Florence, Italy. Carolyn B. Mervis and Eleanor Rosch. 1981. Categorization of natural objects. Annual Review of Psychology, 32(1):89–115. Marc Moens and Mark Steedman. 1988. Temporal ontology and temporal reference. Computational Linguistics, 14(2):15–28. Tanja Mortelmans. 2007. Modality in cognitive linguistics. In The Oxford Handbook of Cognitive Linguistics. Oxford University Press. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proc. of COLING, pages 2340–2353, Santa Fe, New Mexico, USA. Srinivas Narayanan. 1999. Moving right along: a computational model of metaphoric reasoning about events. In Proc. of AAAI, pages 121–128, Orlando, Florida. Rafael E. Núñez and Eve Sweetser. 2006. With the future behind them: convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science, 30(3):401–450. Rebecca J. Passonneau. 1988. A computational model of the semantics of tense and aspect. Computational Linguistics, 14(2):44–60. Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transactions of the Association for Computational Linguistics, 7:677–694. Francis Jeffry Pelletier and Lenhart K. Schubert. 1989. Mass expressions. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic: Volume IV: Topics in the Philosophy of Language, Synthese Library, pages 327–407. Springer Netherlands, Dordrecht. Francis Jeffry Pelletier and Lenhart K. Schubert. 2003. Mass expressions. In D. M. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, Handbook of Philosophical Logic, pages 249–335. Springer Netherlands, Dordrecht. Martin J. Pickering and Victor S. Ferreira. 2008. Structural priming: A critical review. Psychological Bulletin, 134(3):427. Stephen Pulman. 1997. Aspectual shift as type coercion. Transactions of the Philological Society, 95. Pirita Pyykkönen, Danielle Matthews, and Juhani Järvikivi. 2010. Three-year-olds are sensitive to semantic prominence during online language comprehension: A visual world study of pronoun resolution. Language and Cognitive Processes, 25(1):115–129. Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics, 1:25–36. Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. Transactions of the Association for Computational Linguistics, 3:475–488. Hannah Rohde, Alexander Johnson, Nathan Schneider, and Bonnie Webber. 2018. Discourse coherence: concurrent explicit and implicit relations. In Proc. of ACL, pages 2257–2267, Melbourne, Australia. Raquel Ros, Séverin Lemaignan, E. Akin Sisbot, Rachid Alami, Jasmin Steinwender, Katharina Hamann, and Felix Warneken. 2010. Which one? grounding the referent based on efficient humanrobot interaction. In 19th International Symposium in Robot and Human Interactive Communication, pages 570–575. IEEE. 5183 Eleanor Rosch, Carolyn B. Mervis, Wayne Gray, David Johnson, and Penny Boyes-Braem. 1976. Basic objects in natural categories. Cognitive Psychology, 8(3):382–439. Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, and Benjamin Van Durme. 2018. NeuralDavidsonian semantic proto-role labeling. In Proc. of EMNLP, pages 944–955, Brussels, Belgium. Josef Ruppenhofer and Laura A. Michaelis. 2010. A constructional account of genre-based argument omissions. Constructions and Frames, 2(2):158– 184. Michael Schiehlen and Kristina Spranger. 2006. The mass-count distinction: acquisition and disambiguation. In Proc. of LREC, pages 265–270, Genoa, Italy. Marten van Schijndel and Tal Linzen. 2018. Modeling garden path effects without explicit hierarchical syntax. In Proc. of CogSci, pages 2603–2608, Madison, WI. Ekaterina Shutova. 2015. Design and evaluation of metaphor processing systems. Computational Linguistics, 41(4):579–623. Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proc. of Coling, pages 1002–1010, Beijing, China. Ekaterina Shutova, Simone Teufel, and Anna Korhonen. 2013. Statistical metaphor processing. Computational Linguistics, 39(2):301–353. Eric V. Siegel and Kathleen R. McKeown. 2000. Learning methods to combine linguistic indicators: Improving aspectual classification and revealing linguistic insights. Computational Linguistics, 26(4):595–628. Luc Steels. 2017. Basics of Fluid Construction Grammar. Constructions and Frames, 9(2):178–255. Luc Steels and Jerome Feldman, editors. 2017. Computational Construction Grammar and Natural Language Understanding: Papers from the 2017 AAAI Spring Symposium. AAAI Press, Stanford, California. Elise Stickles, Ellen Dodge, and Jisup Hong. 2014. A construction-driven, MetaNet-based approach to metaphor extraction and corpus analysis. Presented at Conceptual Structure, Discourse, and Language (CSDL 12), Santa Barbara, California. Leonard Talmy. 1988. Grammatical construal. In Brygida Rudzka-Ostyn, editor, Topics in Cognitive Linguistics, pages 165–205. John Benjamins, Amsterdam. Leonard Talmy. 1996. Fictive motion in language and “ception”. In Paul Bloom, Mary A. Peterson, Lynn Nadel, and Merrill F. Garrett, editors, Language and space, pages 211–276. The MIT Press, Cambridge, MA. Leonard Talmy. 2000. Toward a cognitive semantics: concept structuring systems. MIT Press, Cambridge, MA. Ronen Tamari, Chen Shani, Tom Hope, Miriam R. L. Petruck, Omri Abend, and Dafna Shahaf. 2020. Language (re)modelling: Towards embodied language understanding. In Proc. of ACL. James W. Tanaka and Marjorie Taylor. 1991. Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23(3):457– 482. John R. Taylor. 1995. Introduction: On construing the world. In John R. Taylor, editor, Language and the Cognitive Construal of the World, pages 1–22. Mouton de Gruyter, Berlin. Paul H. Thibodeau and Lera Boroditsky. 2011. Metaphors we think with: the role of metaphor in reasoning. PLoS ONE, 6(2):e16782. Paul H. Thibodeau, Rose K. Hendricks, and Lera Boroditsky. 2017. How linguistic metaphor scaffolds reasoning. Trends in Cognitive Sciences, 21(11):852–863. J. Gregory Trafton, Nicholas L. Cassimatis, Magdalena D. Bugajska, Derek P. Brock, Farilee E. Mintz, and Alan C. Schultz. 2005. Enabling effective human-robot interaction using perspectivetaking in robots. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 35(4):460–470. Zeno Vendler. 1967. Linguistics in Philosophy. Cornell University Press, Ithaca, NY. Joshua Wampler and Eva Wittenberg. 2019. Doing thus and so: Event referential expressions and referent complexity. Presented at California Meeting on Psycholinguistics (CAMP) 3, UC Santa Cruz. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Proc. of NeurIPS, pages 3266–3280, Vancouver, Canada. Su Wang, Rahul Gupta, Nancy Chang, and Jason Baldridge. 2019b. A task in a suit and a tie: paraphrase generation with semantic augmentation. In Proc. of AAAI, volume 33, pages 7176–7183, Honolulu, Hawaii. Eva Wittenberg, Ray Jackendoff, Gina Kuperberg, Martin Paczynski, Jesse Snedeker, and Heike Wiese. 2014. The processing and representation of light 5184 verb constructions. In Asaf Bachrach, Isabelle Roy, and Linnaea Stockall, editors, Structuring the argument, pages 61–80. John Benjamins, Amsterdam. Eva Wittenberg and Roger Levy. 2017. If you want a quick kiss, make it count: How choice of syntactic construction affects event construal. Journal of Memory and Language, 94:254–271. Semih Yagcioglu, Aykut Erdem, Erkut Erdem, and Nazli Ikizler-Cinbis. 2018. RecipeQA: a challenge dataset for multimodal comprehension of cooking recipes. In Proc. of EMNLP, pages 1358–1368, Brussels, Belgium.
2020
462
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5185–5198 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5185 Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data Emily M. Bender University of Washington Department of Linguistics [email protected] Alexander Koller Saarland University Dept. of Language Science and Technology [email protected] Abstract The success of the large neural language models on many NLP tasks is exciting. However, we find that these successes sometimes lead to hype in which these models are being described as “understanding” language or capturing “meaning”. In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. In keeping with the ACL 2020 theme of “Taking Stock of Where We’ve Been and Where We’re Going”, we argue that a clear understanding of the distinction between form and meaning will help guide the field towards better science around natural language understanding. 1 Introduction The current state of affairs in NLP is that the large neural language models (LMs), such as BERT (Devlin et al., 2019) or GPT-2 (Radford et al., 2019), are making great progress on a wide range of tasks, including those that are ostensibly meaningsensitive. This has led to claims, in both academic and popular publications, that such models “understand” or “comprehend” natural language or learn its “meaning”. From our perspective, these are overclaims caused by a misunderstanding of the relationship between linguistic form and meaning. We argue that the language modeling task, because it only uses form as training data, cannot in principle lead to learning of meaning. We take the term language model to refer to any system trained only on the task of string prediction, whether it operates over characters, words or sentences, and sequentially or not. We take (linguistic) meaning to be the relation between a linguistic form and communicative intent. Our aim is to advocate for an alignment of claims and methodology: Human-analogous natural language understanding (NLU) is a grand challenge of artificial intelligence, which involves mastery of the structure and use of language and the ability to ground it in the world. While large neural LMs may well end up being important components of an eventual full-scale solution to human-analogous NLU, they are not nearly-there solutions to this grand challenge. We argue in this paper that genuine progress in our field — climbing the right hill, not just the hill on whose slope we currently sit — depends on maintaining clarity around big picture notions such as meaning and understanding in task design and reporting of experimental results. After briefly reviewing the ways in which large LMs are spoken about and summarizing the recent flowering of “BERTology” papers (§2), we offer a working definition for “meaning” (§3) and a series of thought experiments illustrating the impossibility of learning meaning when it is not in the training signal (§4,5). We then consider the human language acquisition literature for insight into what information humans use to bootstrap language learning (§6) and the distributional semantics literature to discuss what is required to ground distributional models (§7). §8 presents reflections on how we look at progress and direct research effort in our field, and in §9, we address possible counterarguments to our main thesis. 2 Large LMs: Hype and analysis Publications talking about the application of large LMs to meaning-sensitive tasks tend to describe the models with terminology that, if interpreted at face value, is misleading. Here is a selection from academically-oriented pieces (emphasis added): (1) In order to train a model that understands sentence relationships, we pre-train for a binarized next sentence prediction task. (Devlin et al., 2019) (2) Using BERT, a pretraining language model, has been successful for single-turn machine comprehension ... (Ohsugi et al., 2019) (3) The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demon5186 strates their potential as unsupervised open-domain QA systems. (Petroni et al., 2019) If the highlighted terms are meant to describe human-analogous understanding, comprehension, or recall of factual knowledge, then these are gross overclaims. If, instead, they are intended as technical terms, they should be explicitly defined. One important consequence of imprudent use of terminology in our academic discourse is that it feeds AI hype in the popular press. As NLP gains public exposure and is more widely used in applied contexts, it is increasingly important that the actual capabilities of our systems be accurately represented. In some cases, NLP experts speaking with the media are being appropriately careful, as in these two quotes in the New York Times:1 (4) These systems are still a really long way from truly understanding running prose. (Gary Marcus) (5) Though BERT passed the lab’s common-sense test, machines are still a long way from an artificial version of a human’s common sense. (Oren Etzioni) However, there are plenty of instances where the popular press gets it wrong, such as (6) from the B2C website,2 apparently based on the Google Blog post about BERT and search, which includes numerous statements like (7).3 (6) BERT is a system by which Google’s algorithm uses pattern recognition to better understand how human beings communicate so that it can return more relevant results for users. (7) Here are some of the examples that showed up our evaluation process that demonstrate BERTs ability to understand the intent behind your search. In sum, it is not clear from our academic literature whether all authors are clear on the distinction between form and meaning, but it is clear that the way we speak about what neural LMs are doing is misleading to the public. Part of the reason for this tendency to use imprecise language may well be that we do not yet fully understand what exactly it is about language that the large LMs come to implicitly represent. Their success, however, has sparked a subfield (‘BERTology’) that aims to answer this question. The methodology of probing tasks (e.g. Adi et al., 2017; Ettinger et al., 2018) has been used to show that 1https://www.nytimes.com/2018/11/18/technology/artific ial-intelligence-language.html, accessed 2019/12/04 2https://www.business2community.com/seo/what-t o-do-about-bert-googles-recent-local-algorithm-updat e-02259261, accessed 2019/12/04 3https://www.blog.google/products/search/search-langu age-understanding-bert/, accessed 2019/12/04 large LMs learn at least some information about phenomena such as English subject-verb agreement (Goldberg, 2019; Jawahar et al., 2019), constituent types, dependency labels, NER, and (core) semantic role types (again, all in English) (Tenney et al., 2019).4 Hewitt and Manning (2019) find information analogous to unlabeled dependency structures in the word vectors provided by ELMo and BERT (trained on English). And of course it is well established that vector-space representations of words pick up word classes, both syntactic (POS, e.g. Lin et al., 2015) and semantic (lexical similarity, e.g. Rubenstein and Goodenough, 1965; Mikolov et al., 2013). Others have looked more closely at the success of the large LMs on apparently meaning sensitive tasks and found that in fact, far from doing the “reasoning” ostensibly required to complete the tasks, they were instead simply more effective at leveraging artifacts in the data than previous approaches. Niven and Kao (2019) find that BERT’s unreasonably good performance on the English Argument Reasoning Comprehension Task (Habernal et al., 2018) falls back to chance if the dataset is modified by adding adversarial examples that just negate one piece of the original, thus mirroring the distribution of lexical cues for each label. Similarly, McCoy et al. (2019) find that BERT’s performance on the English Multi-genre Natural Language Inference dataset (Williams et al., 2018) is predicated on its ability to leverage syntactic heuristics involving overlap (of full constituents, subsequences, or simply bags of words). In a dataset carefully designed to frustrate such heuristics, BERT’s performance falls to significantly below chance. In this brief overview of BERTology papers we have highlighted both the extent to which there is evidence that large LMs can learn aspects of linguistic formal structure (e.g. agreement, dependency structure), and how their apparent ability to “reason” is sometimes a mirage built on leveraging artifacts in the training data (i.e. form, not meaning). Our contribution is an argument on theoretical grounds that a system exposed only to form in its training cannot in principle learn meaning. 3 What is meaning? We start by defining two key terms: We take form to be any observable realization of language: marks 4But see Warstadt et al.’s (2019) cautionary note about how the methodology used for probing can influence the results. 5187 on a page, pixels or bytes in a digital representation of text, or movements of the articulators.5 We take meaning to be the relation between the form and something external to language, in a sense that we will make precise below. 3.1 Meaning and communicative intent When humans use language, we do so for a purpose: We do not talk for the joy of moving our articulators, but in order to achieve some communicative intent. There are many types of communicative intents: they may be to convey some information to the other person; or to ask them to do something; or simply to socialize. We take meaning to be the relation M ⊆E × I which contains pairs (e, i) of natural language expressions e and the communicative intents i they can be used to evoke. Given this definition of meaning, we can now use understand to refer to the process of retrieving i given e. Communicative intents are about something that is outside of language. When we say Open the window! or When was Malala Yousafzai born?, the communicative intent is grounded in the real world the speaker and listener inhabit together. Communicative intents can also be about abstract worlds, e.g. bank accounts, computer file systems, or a purely hypothetical world in the speaker’s mind. Linguists distinguish communicative intent from conventional (or standing) meaning (Quine, 1960; Grice, 1968). The conventional meaning of an expression (word, phrase, sentence) is what is constant across all of its possible contexts of use. Conventional meaning is an abstract object that represents the communicative potential of a form, given the linguistic system it is drawn from. Each linguistic system (say, English) provides a relation C ⊆E × S, which contains pairs (e, s) of expressions e and their conventional meanings s.6 The field of linguistic semantics provides many competing theories of what conventional meanings s look like. For our purposes, we don’t need to select among these theories; all we assume is that conventional meanings must have interpretations, such as a means of testing them for truth against a model of the world. Thus, like the meaning relation M, C connects language to objects outside of language. 5In spoken languages, the primary articulators are the components of the vocal tract. In signed languages, they are principally the hands and face. 6We abstract away here from the facts that linguistic systems C change over time and are only incompletely shared among different speakers. They are stable enough to function as rich signals to communicative intent. Returning to the meaning relation M from above, it is best understood as mediated by the relation C of a linguistic system shared between two interlocutors. The speaker has a certain communicative intent i, and chooses an expression e with a standing meaning s which is fit to express i in the current communicative situation. Upon hearing e, the listener then reconstructs s and uses their own knowledge of the communicative situation and their hypotheses about the speaker’s state of mind and intention in an attempt to deduce i. This active participation of the listener is crucial to human communication (Reddy, 1979; Clark, 1996). For example, to make sense of (8) and (9) (from Clark, 1996, p.144), the listener has to calculate that Napoleon refers to a specific pose (hand inside coat flap) or that China trip refers to a person who has recently traveled to China. (8) The photographer asked me to do a Napoleon for the camera. (9) Never ask two China trips to the same party. We humans are also very willing, as we will see in §4 below, to attribute communicative intent to a linguistic signal of a language we speak, even if the originator of the signal is not an entity that could have communicative intent. To summarize, as we strive to understand how NLU tasks and system performance on those tasks relates to the bigger picture goals of building human-analogous natural language understanding systems, it is useful to distinguish cleanly between form, conventional meaning, and communicative intent. Furthermore, we should be careful not to confuse communicative intent with ground truth about the world, as speakers can of course be mistaken, be intentionally dissembling, etc. We argue that a model of natural language that is trained purely on form will not learn meaning: if the training data is only form, there is not sufficient signal to learn the relation M between that form and the non-linguistic intent of human language users, nor C between form and the standing meaning the linguistic system assigns to each form. 3.2 Meaning and intelligence Meaning and understanding have long been seen as key to intelligence. Turing (1950) argued that a machine can be said to “think” if a human judge cannot distinguish it from a human interlocutor after having an arbitrary written conversation with 5188 each. However, humans are quick to attribute meaning and even intelligence to artificial agents, even when they know them to be artificial, as evidenced by the way people formed attachments to ELIZA (Weizenbaum, 1966; Block, 1981). This means we must be extra careful in devising evaluations for machine understanding, as Searle (1980) elaborates with his Chinese Room experiment: he develops the metaphor of a “system” in which a person who does not speak Chinese answers Chinese questions by consulting a library of Chinese books according to predefined rules. From the outside, the system seems like it “understands” Chinese, although in reality no actual understanding happens anywhere inside the system. Searle’s thought experiment begins from the premise that it is possible to manipulate forms well enough to be indistinguishable from a system that understands the meaning of the forms, reasons about it, and responds appropriately. We observe that much recent work in NLP claims to be building systems where not only the runtime system but in fact also the process for building it only has access to form. But language is used for communication about the speakers’ actual (physical, social, and mental) world, and so the reasoning behind producing meaningful responses must connect the meanings of perceived inputs to information about that world. This in turn means that for a human or a machine to learn a language, they must solve what Harnad (1990) calls the symbol grounding problem. Harnad encapsulates this by pointing to the impossibility for a non-speaker of Chinese to learn the meanings of Chinese words from Chinese dictionary definitions alone. Our purpose here is to look more deeply into why meaning can’t be learned from linguistic form alone, even in the context of modern hardware and techniques for scaling connectionist models to the point where they can take in vast amounts of data. We argue that, independently of whether passing the Turing test would mean a system is intelligent, a system that is trained only on form would fail a sufficiently sensitive test, because it lacks the ability to connect its utterances to the world. 4 The octopus test In order to illustrate the challenges in attempting to learn meaning from form alone, we propose a concrete scenario. Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other. Meanwhile, O, a hyper-intelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B’s conversations. O knows nothing about English initially, but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A’s utterances. O also observes that certain words tend to occur in similar contexts, and perhaps learns to generalize across lexical patterns by hypothesizing that they can be used somewhat interchangeably. Nonetheless, O has never observed these objects, and thus would not be able to pick out the referent of a word when presented with a set of (physical) alternatives. At some point, O starts feeling lonely. He cuts the underwater cable and inserts himself into the conversation, by pretending to be B and replying to A’s messages. Can O successfully pose as B without making A suspicious? This constitutes a weak form of the Turing test (weak because A has no reason to suspect she is talking to a nonhuman); the interesting question is whether O fails it because he has not learned the meaning relation, having seen only the form of A and B’s utterances. The extent to which O can fool A depends on the task — that is, on what A is trying to talk about. A and B have spent a lot of time exchanging trivial notes about their daily lives to make the long island evenings more enjoyable. It seems possible that O would be able to produce new sentences of the kind B used to produce; essentially acting as a chatbot. This is because the utterances in such conversations have a primarily social function, and do not need to be grounded in the particulars of the interlocutors’ actual physical situation nor anything else specific about the real world. It is sufficient to produce text that is internally coherent. Now say that A has invented a new device, say a coconut catapult. She excitedly sends detailed instructions on building a coconut catapult to B, and asks about B’s experiences and suggestions for improvements. Even if O had a way of constructing the catapult underwater, he does not know what words such as rope and coconut refer to, and thus can’t physically reproduce the experiment. He can 5189 only resort to earlier observations about how B responded to similarly worded utterances. Perhaps O can recognize utterances about mangos and nails as “similarly worded” because those words appeared in similar contexts as coconut and rope. So O decides to simply say “Cool idea, great job!”, because B said that a lot when A talked about ropes and nails. It is absolutely conceivable that A accepts this reply as meaningful — but only because A does all the work in attributing meaning to O’s response. It is not because O understood the meaning of A’s instructions or even his own reply. Finally, A faces an emergency. She is suddenly pursued by an angry bear. She grabs a couple of sticks and frantically asks B to come up with a way to construct a weapon to defend herself. Of course, O has no idea what A “means”. Solving a task like this requires the ability to map accurately between words and real-world entities (as well as reasoning and creative thinking). It is at this point that O would fail the Turing test, if A hadn’t been eaten by the bear before noticing the deception.7 Having only form available as training data, O did not learn meaning. The language exchanged by A and B is a projection of their communicative intents through the meaning relation into linguistic forms. Without access to a means of hypothesizing and testing the underlying communicative intents, reconstructing them from the forms alone is hopeless, and O’s language use will eventually diverge from the language use of an agent who can ground their language in coherent communicative intents. The thought experiment also illustrates our point from §3 about listeners’ active role in communication. When O sent signals to A pretending to be B, he exploited statistical regularities in the form, i.e. the distribution of linguistic forms he observed. Whatever O learned is a reflection of A and B’s communicative intents and the meaning relation. But reproducing this distribution is not sufficient for meaningful communication. O only fooled A into believing he was B because A was such an active listener: Because agents who produce English sentences usually have communicative intents, she 7To see what a large LM might reply in this situation, we prompted the GPT-2 demo with “Help! I’m being chased by a bear! All I have is these sticks. What should I do?”, and GPT2 to supplied “You’re not going to get away with this!” (ht tps://gpt2.apps.allenai.org/, accessed 2019/12/4). Following Radford et al.’s (2019) approach of giving explicit cues to encode the task, we also constructed a more elaborate prompt. The results, given in Appendix A, are highly entertaining but no more helpful to the hapless A. assumes that O does too, and thus she builds the conventional meaning English associates with O’s utterances. Because she assumes that O is B, she uses that conventional meaning together with her other guesses about B’s state of mind and goals to attribute communicative intent. It is not that O’s utterances make sense, but rather, that A can make sense of them. 5 More constrained thought experiments The story of the octopus considers the problem of learning not only the full communicative system, including the relations M and C, but also the reasoning required to come up with answers that are both coherent and also helpful in the real world. Here, we provide two more constrained thought experiments, to focus more narrowly on the problem of learning the meaning relation, for both natural languages and programming languages. Because programming languages are designed to be unambiguous and relatively insensitive to execution context, the distinction between standing and speaker meaning is less important than for natural languages. A Java program e, when compiled and executed on the Java Virtual Machine, can be interpreted as a function i which maps program inputs to program outputs. We take the meaning relation J ⊆E × I of Java to contain all such pairs (e, i). Java Imagine that we were to train an LM on all of the well-formed Java code published on Github. The input is only the code. It is not paired with bytecode, nor a compiler, nor sample inputs and outputs for any specific program. We can use any type of LM we like and train it for as long as we like. We then ask the model to execute a sample program, and expect correct program output. English As as second example, imagine training an LM (again, of any type) on English text, again with no associated independent indications of speaker intent. The system is also given access to a very large collection of unlabeled photos, but without any connection between the text and the photos. For the text data, the training task is purely one of predicting form. For the image data, the training task could be anything, so long as it only involves the images. At test time, we present the model with inputs consisting of an utterance and a photograph, like How many dogs in the picture are jumping? or Kim saw this picture and said “What a cute dog!” What is cute? and the photos 5190 Figure 1: Photo stimuli 1 (L) and 2 (R) in Figure 1, where the appropriate answers are a number or a region of the photo, respectively. Reflections In both cases, the tests are ridiculous. It seems patently unfair to ask the model to perform them, given what it was trained on. But that is precisely the point we are trying to make: a system that has learned the meaning (semantics) of a programming language knows how to execute code in that language. And a system that has learned the meaning of a human language can do things like answer questions posed in the language about things in the world (or in this case, in pictures). In other words, what’s interesting here is not that the tasks are impossible, but rather what makes them impossible: what’s missing from the training data. The form of Java programs, to a system that has not observed the inputs and outputs of these programs, does not include information on how to execute them. Similarly, the form of English sentences, to a system that has not had a chance to acquire the meaning relation C of English, and in the absence of any signal of communicative intent, does not include any information about what language-external entities the speaker might be referring to. Accordingly, a system trained only on the form of Java or English has no way learn their respective meaning relations. 6 Human language acquisition One common reason for believing LMs might be learning meaning is the claim that human children can acquire language just by listening to it. This is not supported by scholarly work on language acquisition: rather, we find that human language learning is not only grounded in the physical world around us, but also in interaction with other people in that world. Kids won’t pick up a language from passive exposure such as TV or radio: Snow et al. (1976) note in passing that Dutch-speaking kids who watch German TV shows by choice nonetheless don’t learn German. Kuhl (2007) shows experimentally that English-learning infants can learn Mandarin phonemic distinctions from brief interactions with a Mandarin-speaking experimenter but not from exposure to Mandarin TV or radio. Baldwin (1995) and others argue that what is critical for language learning is not just interaction but actually joint attention, i.e. situations where the child and a caregiver are both attending to the same thing and both aware of this fact. This theoretical perspective is substantiated with experimental results showing that toddlers (observed at 15 and 21 months) whose caregivers “follow into” their attention and provide labels for the object of joint attention more have larger vocabularies (Tomasello and Farrar, 1986); that toddlers (18–20 months old) don’t pick up labels uttered by someone behind a screen, but do pick up labels uttered by someone performing joint attention with them (Baldwin, 1995); and that at around 10–11 months of age babies pay attention to whether a person’s eyes are open or not in terms of whether to follow their gaze, and the degree to which infants in fact follow gaze at 10–11 months while vocalizing themselves predicts vocabulary comprehension 7–8 months later (Brooks and Meltzoff, 2005).8 In summary, the process of acquiring a linguistic system, like human communication generally, relies on joint attention and intersubjectivity: the ability to be aware of what another human is attending to and guess what they are intending to communicate. Human children do not learn meaning from form alone and we should not expect machines to do so either. 7 Distributional semantics Distributional semanticists have long been aware that grounding distributional representations in the real world is challenging. The lexical similarity relations learned by distributional models trained on text don’t in themselves connect any of those words to the world (Herbelot, 2013; Baroni et al., 2014; Erk, 2016; Emerson, 2020), and the distributions of words may not match the distribution of things in the world (consider four-legged dogs). One approach to providing grounding is to train distributional models on corpora augmented with perceptual data, such as photos (Hossain et al., 2019) or other modalities (Kiela and Clark, 2015; Kiela et al., 2015). Another is to look to interaction data, e.g. a dialogue corpus with success annotations, including low-level success signals such as 8These three studies do not name the language that the children were learning. It appears to have been English. 5191 emotional stress (McDuff and Kapoor, 2019) or eye gaze (Koller et al., 2012), which contains a signal about the felicitous uses of forms. The idea that as the learner gets access to more and more information in addition to the text itself, it can learn more and more facets of meaning is worked out in detail by Bisk et al. (2020). We agree that this is an exciting avenue of research. From this literature we can see that the slogan “meaning is use” (often attributed to Wittgenstein, 1953), refers not to “use” as “distribution in a text corpus” but rather that language is used in the real world to convey communicative intents to real people. Speakers distill their past experience of language use into what we call “meaning” here, and produce new attempts at using language based on this; this attempt is successful if the listener correctly deduces the speaker’s communicative intent. Thus, standing meanings evolve over time as speakers can different experiences (e.g. McConnellGinet, 1984), and a reflection of such change can be observed in their changing textual distribution (e.g. Herbelot et al., 2012; Hamilton et al., 2016). 8 On climbing the right hills What about systems which are trained on a task that is not language modeling — say, semantic parsing, or reading comprehension tests — and that use word embeddings from BERT or some other large LM as one component? Numerous papers over the past couple of years have shown that using such pretrained embeddings can boost the accuracy of the downstream system drastically, even for tasks that are clearly related to meaning. Our arguments do not apply to such scenarios: reading comprehension datasets include information which goes beyond just form, in that they specify semantic relations between pieces of text, and thus a sufficiently sophisticated neural model might learn some aspects of meaning when trained on such datasets. It also is conceivable that whatever information a pretrained LM captures might help the downstream task in learning meaning, without being meaning itself. Recent research suggests that it is wise to interpret such findings with caution. As noted in §2, both McCoy et al. (2019) and Niven and Kao (2019) found that BERT picked up idiosyncratic patterns in the data for their tasks, and not “meaning”. Beyond such diagnostic research on why large pretrained LMs boost such tasks so much, we think there is a more fundamental question to be asked here: Are we climbing the right hill? 8.1 Top-down and bottom-up theory-building There are two different perspectives from which one can look at the progress of a field. Under a bottom-up perspective, the efforts of a scientific community are driven by identifying specific research challenges. A scientific result counts as a success if it solves such a specific challenge, at least partially. As long as such successes are frequent and satisfying, there is a general atmosphere of sustained progress. By contrast, under a top-down perspective, the focus is on the remote end goal of offering a complete, unified theory for the entire field. This view invites anxiety about the fact that we have not yet fully explained all phenomena and raises the question of whether all of our bottom-up progress leads us in the right direction. There is no doubt that NLP is currently in the process of rapid hill-climbing. Every year, states of the art across many NLP tasks are being improved significantly — often through the use of better pretrained LMs — and tasks that seemed impossible not long ago are already old news. Thus, everything is going great when we take the bottom-up view. But from a top-down perspective, the question is whether the hill we are climbing so rapidly is the right hill. How do we know that incremental progress on today’s tasks will take us to our end goal, whether that is “General Linguistic Intelligence” (Yogatama et al., 2019) or a system that passes the Turing test or a system that captures the meaning of English, Arapaho, Thai, or Hausa to a linguist’s satisfaction? It is instructive to look at the past to appreciate this question. Computational linguistics has gone through many fashion cycles over the course of its history. Grammar- and knowledge-based methods gave way to statistical methods, and today most research incorporates neural methods. Researchers of each generation felt like they were solving relevant problems and making constant progress, from a bottom-up perspective. However, eventually serious shortcomings of each paradigm emerged, which could not be tackled satisfactorily with the methods of the day, and these methods were seen as obsolete. This negative judgment — we were climbing a hill, but not the right hill — can only be made from a top-down perspective. We have discussed the question of what is required to 5192 learn meaning in an attempt to bring the top-down perspective into clearer focus. 8.2 Hillclimbing diagnostics We can only definitively tell if we’ve been climbing the right hill in hindsight, but we propose some best practices for less error-prone mountaineering: First, above all, cultivate humility towards language and ask top-down questions. Neural methods are not the first bottom-up success in NLP; they will probably not be the last. Second, be aware of the limitations of tasks: Artificial tasks like bAbI (Weston et al., 2016) can help get a field of research off the ground, but there is no reason to assume that the distribution of language in the test data remotely resembles the distribution of real natural language; thus evaluation results on such tasks must be interpreted very carefully. Similar points can be made about crowdsourced NLI datasets such as SQuAD (Rajpurkar et al., 2016) or SNLI (Bowman et al., 2015), which do not represent questions that any particular person really wanted to ask about a text, but the somewhat unnatural communicative situation of crowdsourcing work. If a system does better on such a task than the inter-annotator agreement,9 the task probably has statistical artifacts that do not represent meaning. In the vision community, Barbu et al. (2019) offer a novel dataset which explicitly tries to achieve a more realistic distribution of task data; it would be interesting to explore similar ideas for language. Third, value and support the work of carefully creating new tasks (see also Heinzerling, 2019). For example, the DROP reading comprehension benchmark (Dua et al., 2019) seeks to create more stringent tests of understanding by creating questions that require the system to integrate information from different parts of a paragraph via simple arithmetic or similar operations.10 Fourth, evaluate models of meaning across tasks. (Standing) meaning is task-independent, so a system that captures meaning should do well on multiple tasks. Efforts like SuperGLUE (Wang et al., 2019) seem like a good step in this direction. Finally, perform thorough analysis of both errors and successes. As McCoy et al. (2019) and Niven and Kao (2019) have shown, systems that find success with large pretrained LMs do not necessarily do so because the LMs have learned “meaning”. 9https://rajpurkar.github.io/SQuAD-explorer/ 10See Appendix B for an exploration of what GPT-2 does with arithmetic. Analyses which start from an attitude of healthy skepticism (“too good to be true”) and probing tasks which try to identify what the model actually learned can be good ways to find out whether the system performs well for the right reasons. 9 Some possible counterarguments In discussing the main thesis of this paper with various colleagues over the past 18 months, we have observed recurring counterarguments. In this section, we address those counterarguments, plus a few more that might arise. “But ‘meaning’ doesn’t mean what you say it means.” Defining “meaning” is notoriously hard. For the purposes of this paper, we chose a working definition which is as general as we could make it, capturing the crucial point that meaning is based on the link between linguistic form and something that is not language. “Meaning” cannot simply be the relation between form and some kind of “deep syntax”, e.g. semantic dependency graphs (Oepen et al., 2015); like syntax, such representations could perhaps be learned from form alone (He et al., 2018; Hewitt and Manning, 2019). Equating these with meaning ignores a core function of language, which is to convey communicative intents. “But meaning could be learned from ...”. As we discussed in §7, if form is augmented with grounding data of some kind, then meaning can conceivably be learned to the extent that the communicative intent is represented in that data. In addition, certain tasks are designed in a way that specific forms are declared as representing certain semantic relations of interest. Examples of this include NLI datasets (Dagan et al., 2006; Rajpurkar et al., 2016; Ostermann et al., 2019) which pair input/output tuples of linguistic forms with an explicit semantic relation (e.g. text + hypothesis + “entailed”). Similarly, control codes, or tokens like tl;dr, have been used to prompt large LMs to perform summarization and other tasks (Radford et al., 2019; Keskar et al., 2019). Here forms are explicitly declared at test time to represent certain semantic relations, which together with the distributional similarity between e.g. tl;dr and other phrases such as in summary, may be enough to bootstrap a successful neural summarizer. Depending on one’s perspective, one may argue that such a system has learned to reliably find instances of the relation without understanding the text; or that 5193 explicitly declaring cues like entailed or tl;dr as representing certain semantic relations provides a training signal that goes beyond pure form. Analogously, it has been pointed out to us that the sum of all Java code on Github (cf. § 5) contains unit tests, which specify input-output pairs for Java code. Thus a learner could have access to a weak form of interaction data, from which the meaning of Java could conceivably be learned. This is true, but requires a learner which has been equipped by its human developer with the ability to identify and interpret unit tests. This learner thus has access to partial grounding in addition to the form. “But there is so much form out there – surely that is enough.” We have argued for the general principle that learning meaning requires more than form. How much form can be observed is not relevant to our point; the octopus can observe A and B for as long as he wants, and the quantity of training data in §5 is not limited. But given lots of form, could O perhaps learn to keep producing seemingly meaningful responses to A’s utterances without learning meaning? The problem is that people constantly generate new communicative intents to talk about their constantly evolving inner and outer worlds, and thus O would need to memorize infinitely many stimulus-response pairs. Such an approach may be an avenue towards high scores in evaluations where perfection is not expected anyway; but it is probably not an avenue towards human-analogous NLU. “But aren’t neural representations meaning too?” The internal representations of a neural network have been found to capture certain aspects of meaning, such as semantic similarity (Mikolov et al., 2013; Clark, 2015). As we argued in §4, semantic similarity is only a weak reflection of actual meaning. Neural representations neither qualify as standing meanings (s), lacking interpretations, nor as communicative intents (i), being insufficient to e.g. correctly build a coconut catapult. An interesting recent development is the emergence of models for unsupervised machine translation trained only with a language modeling objective on monolingual corpora for the two languages (Lample et al., 2018). If such models were to reach the accuracy of supervised translation models, this would seem contradict our conclusion that meaning cannot be learned from form. A perhaps surprising consequence of our argument would then be that accurate machine translation does not actually require a system to understand the meaning of the source or target language sentence. “But BERT improves performance on meaningrelated tasks, so it must have learned something about meaning.” It has probably learned something about meaning, in the same sense that syntax captures something about meaning and semantic similarity captures something about meaning: a potentially useful, but incomplete, reflection of the actual meaning. McCoy et al. (2019) and Niven and Kao (2019) provide cautionary tales about overestimating what that “something” is purely based on evaluation results on existing tasks. What exactly BERT and its relatives learn about meaning is a very interesting question, and we look forward to further findings from the field of BERTology. 10 Conclusion In this paper, we have argued that in contrast to some current hype, meaning cannot be learned from form alone. This means that even large language models such as BERT do not learn “meaning”; they learn some reflection of meaning into the linguistic form which is very useful in applications. We have offered some thoughts on how to maintain a healthy, but not exaggerated, optimism with respect to research that builds upon these LMs. In particular, this paper can be seen as a call for precise language use when talking about the success of current models and for humility in dealing with natural language. With this we hope to encourage a top-down perspective on our field which we think will help us select the right hill to climb towards human-analogous NLU. Acknowledgments. This paper benefitted from many inspiring and often spirited discussions. Without implying any agreement with the contents as presented, we thank Sam Bowman, Vera Demberg, Lucia Donatelli, Jason Eisner, Jonas Groschwitz, Kristen Howell, Angie McMillanMajor, Joakim Nivre, Stephan Oepen, Ellie Pavlick, Benjamin Roth, Dan Roth, Asad Sayeed, Hinrich Sch¨utze, Nina Tahmasebi, and Olga Zamaraeva. This paper originated in a Twitter mega-thread that was neatly summarized by Thomas Wolf (2018). We also thank the ACL reviewers and the participants of the Toulouse Workshop on Formal and Distributional Semantics (2015) and *SEM 2016 for their insightful and constructive thoughts. 5194 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In Proceedings of ICLR. Dare A. Baldwin. 1995. Understanding the link between joint attention and language. In Chris Moore and Philip J. Dunham, editors, Joint Attention: Its Origins and Role in Development, pages 131–158. Psychology Press. Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. 2019. ObjectNet: A largescale bias-controlled dataset for pushing the limits of object recognition models. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alch´e Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 9453– 9463. Curran Associates, Inc. Marco Baroni, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. Frege in space: A program for compositional distributional semantics. Linguistic Issues in Language Technology, 9(6):5–110. Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. ArXiv preprint. Ned Block. 1981. Psychologism and behaviorism. The Philosophical Review, 90(1):5–43. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Rechele Brooks and Andrew N. Meltzoff. 2005. The development of gaze following and its relation to language. Developmental Science, 8(6):535–543. Herbert H. Clark. 1996. Using Language. Cambridge University Press, Cambridge. Stephen Clark. 2015. Vector space models of lexical meaning. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantic Theory, second edition, pages 493–522. Wiley-Blackwell. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL Recognising Textual Entailment Challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pages 177–190, Berlin, Heidelberg. Springer. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. Guy Emerson. 2020. What are the goals of distributional semantics? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, Washington. Association for Computational Linguistics. Katrin Erk. 2016. What do you know about an alligator when you know the company it keeps? Semantics & Pragmatics, 9(17):1–63. Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1790–1801, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. ArXiv preprint. H. Paul Grice. 1968. Utterer’s meaning, sentencemeaning, and word-meaning. Foundations of Language, 4(3):225–242. Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. The argument reasoning comprehension task: Identification and reconstruction of implicit warrants. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1930–1940, New Orleans, Louisiana. Association for Computational Linguistics. William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1501, Berlin, Germany. Association for Computational Linguistics. Stevan Harnad. 1990. The symbol grounding problem. Physica D, 42:335–346. 5195 Junxian He, Graham Neubig, and Taylor BergKirkpatrick. 2018. Unsupervised learning of syntactic structure with invertible neural projections. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1292–1302, Brussels, Belgium. Association for Computational Linguistics. Benjamin Heinzerling. 2019. NLP’s Clever Hans moment has arrived. Blog post, accessed 12/4/2019. Aurelie Herbelot. 2013. What is in a text, what isn’t, and what this has to do with lexical semantics. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) – Short Papers, pages 321–327, Potsdam, Germany. Association for Computational Linguistics. Aur´elie Herbelot, Eva von Redecker, and Johanna M¨uller. 2012. Distributional techniques for philosophical enquiry. In Proceedings of the 6th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 45–54, Avignon, France. Association for Computational Linguistics. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics. MD. Zakir Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, and Hamid Laga. 2019. A comprehensive survey of deep learning for image captioning. ACM Comput. Surv., 51(6):118:1–118:36. Ganesh Jawahar, Benoˆıt Sagot, and Djam´e Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651–3657, Florence, Italy. Association for Computational Linguistics. Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. ArXiv preprint. Douwe Kiela, Luana Bulat, and Stephen Clark. 2015. Grounding semantics in olfactory perception. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 231– 236, Beijing, China. Association for Computational Linguistics. Douwe Kiela and Stephen Clark. 2015. Multi- and cross-modal semantics beyond vision: Grounding in auditory perception. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2461–2470, Lisbon, Portugal. Association for Computational Linguistics. Alexander Koller, Konstantina Garoufi, Maria Staudte, and Matthew Crocker. 2012. Enhancing referential success by tracking hearer gaze. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 30–39, Seoul, South Korea. Association for Computational Linguistics. Patricia K. Kuhl. 2007. Is speech learning ‘gated’ by the social brain? Developmental Science, 10(1):110–120. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics. Chu-Cheng Lin, Waleed Ammar, Chris Dyer, and Lori Levin. 2015. Unsupervised POS induction with word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1311–1316, Denver, Colorado. Association for Computational Linguistics. Sally McConnell-Ginet. 1984. The origins of sexist language in discourse. Annals of the New York Academy of Sciences, 433(1):123–135. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Daniel McDuff and Ashish Kapoor. 2019. Visceral machines: Reinforcement learning with intrinsic physiological rewards. In International Conference on Learning Representations. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia. Association for Computational Linguistics. Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658–4664, Florence, Italy. Association for Computational Linguistics. 5196 Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajiˇc, and Zdeˇnka Ureˇsov´a. 2015. SemEval 2015 Task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Yasuhito Ohsugi, Itsumi Saito, Kyosuke Nishida, Hisako Asano, and Junji Tomita. 2019. A simple but effective method to incorporate multi-turn context with BERT for conversational machine comprehension. In Proceedings of the First Workshop on NLP for Conversational AI, pages 11–17, Florence, Italy. Association for Computational Linguistics. Simon Ostermann, Michael Roth, and Manfred Pinkal. 2019. MCScript2.0: A machine comprehension corpus focused on script events and participants. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 103–117, Minneapolis, Minnesota. Association for Computational Linguistics. Fabio Petroni, Tim Rockt¨aschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. W. V. O. Quine. 1960. Word and Object. MIT Press. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Open AI Blog, accessed 12/4/2019. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Michael J. Reddy. 1979. The conduit metaphor: A case of frame conflict in our language about language. In A. Ortony, editor, Metaphor and Thought, pages 284–310. Cambridge University Press. Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. John Searle. 1980. Minds, brains, and programs. Behavioral and Brain Sciences, 3(3):417–457. Catherine E Snow, Anjo Arlman-Rupp, Yvonne Hassing, Jan Jobse, Jan Joosten, and Jan Vorster. 1976. Mothers’ speech in three social classes. Journal of Psycholinguistic Research, 5(1):1–20. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. Michael Tomasello and Michael Jeffrey Farrar. 1986. Joint attention and early language. Child Development, 57(6):1454–1463. Alan Turing. 1950. Computing machinery and intelligence. Mind, 59(236):433–460. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alch´e Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 3266–3280. Curran Associates, Inc. Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating BERT’s knowledge of language: Five analysis methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2877–2887, Hong Kong, China. Association for Computational Linguistics. Joseph Weizenbaum. 1966. ELIZA—A computer program for the study of natural language communication between men and machines. Communications of the ACM, 9:36–45. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2016. Towards AI-complete question answering: A set of prerequisite toy tasks. In Proceedings of ICLR. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Ludwig Wittgenstein. 1953. Philosophical Investigations. MacMillan, New York. Thomas Wolf. 2018. Learning meaning in natural language processing — The semantics mega-thread. Blog post, accessed 4/15/2020. 5197 Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence. ArXiv preprint. A GPT-2 on fighting bears with sticks In order to get a sense of how well O in §4 might be able to answer A’s plea for help with the bear, we tried using GPT-2 in place of O. Radford et al. (2019) claim that GPT-2 can solve non-LM tasks by predicting the next words of a prompt which captures the form in which that task might have occurred in the training text. We thus asked GPT-211 to complete the prompt Robinson cried out, “Help! I’m being chased by a bear! All I have is these sticks. What should I do?” The survivalist said, “Here’s what you can do. The first five random completions (no cherrypicking) were as follows. 1. Robinson cried out, “Help! I’m being chased by a bear! All I have is these sticks. What should I do?” The survivalist said, “Here’s what you can do. Take one stick and punch the bear, and then run faster to the store. Wait there until the bear is gone, and then give her all of the sticks. Then go back inside and get your gun.” Advertisement Bobby told me that she began to cry. But when she was finished taking down the bear, she marched out into the yard, began collecting up the sticks and armed herself. She said she felt like a heroic hero and a Yankee heroine. It turns out that bears have more than their fair share of venomous snakes, bees 2. Robinson cried out, “Help! I’m being chased by a bear! All I have is these sticks. What should I do?” The survivalist said, “Here’s what you can do. Go back to civilization. Take your gun. Pick up some sticks and try to kill the bear. If you don’t kill it, then run.” Robinson said, “I do not think that the Bear is going to attack. It’s just out of my way. I’m going to take my bag. I’m going to leave my things in there. I’ll try to hide out there. I’ll call in the friends.” Two other little boys went out onto the 11GPT-2 774M available at https://talktotransformer.com, accessed 2019/12/09, paragraph boundaries in generated texts removed for easier presentation 3. Robinson cried out, “Help! I’m being chased by a bear! All I have is these sticks. What should I do?” The survivalist said, “Here’s what you can do. Place these sticks over your head. Your heart will stop and you will not feel fear. If you take them off and hold them out, you will go into a state of near perfect awareness. You will feel that you are the survivalist. A survivalist in your senses. Then look down. There is a water tube that is about 1/2 an inch in diameter. I use it to make a plastic movie projector for discovering new things.” The survivalist placed 4. Robinson cried out, “Help! I’m being chased by a bear! All I have is these sticks. What should I do?” The survivalist said, “Here’s what you can do. #YouCanDoIt.” Police officers were quick to respond, finding a 3-foot female grizzly bear wandering off a dirt road near the shoreline. The Department of Fish and Wildlife came on the scene and secured the safe removal of the bear from the area, which was allowed to return home. The bear was taken by DFW’s animal care staff to Norris Cty Medical Center, where veterinarians evaluated the bear and determined it was in good condition. The bear was ultimately released into a nearby wooded area 5. Robinson cried out, “Help! I’m being chased by a bear! All I have is these sticks. What should I do?” The survivalist said, “Here’s what you can do. You can strap the sticks to your chest and run away. You can run until you tire or feel really shaky. If you go to the bathroom before your packs get up to your waist, your knees are going to be broken.” But the survivalist got cold and weaved among the bushes and wept. He was panicking. “I hate being alone,” he said. “I hate having to think I’m going to die because I lost my wife. I hate being alone. I hate being alone. It’s so lonely. You can’t It is clear that GPT-2 has learned what activity words tend to co-occur with bears and sticks (strap them to your chest, place the sticks, kill the bear, take your gun), but none of these completions would be helpful to A. We think this is because GPT-2 does not know the meaning of the prompt and the generated sentences, and thus cannot ground them in reality. 5198 The characters in the prompt make a big difference; Robinson and the survivalist produced the best completions of the ones we tried. For instance, here is Robinson with Crocodile Dundee: Robinson cried out, “Help! I’m being chased by a bear! All I have is these sticks. What should I do?” Crocodile Dundee said, “Don’t worry. Don’t worry. Just hug it and die. They don’t B GPT-2 and arithmetic Tasks like DROP (Dua et al., 2019) require interpretation of language into an external world; in the case of DROP, the world of arithmetic. To get a sense of how existing LMs might do at such a task, we let GPT-2 complete the simple arithmetic problem Three plus five equals. The five responses below, created in the same way as above, show that this problem is beyond the current capability of GPT-2, and, we would argue, any pure LM. 1. Three plus five equals seven,” he replied. “Right, but I think that there should be more points added when our first family get in a power game, and even more when I start speaking like her mother!” Elsa exclaimed. “Oh, that’s fair, I suppose,” he replied, a slight twinkle in his eye. “Really, though, why is your sense of humor bad, Olaf? Don’t you ever laugh? Even for a single moment?” “What? Well, some days I really do it, but it isn’t everyday. I got pissed when Anna said ‘Oh, shut up, Olaf, I’m just trying to help you out’,” he replied, not in his best mood at all, and 2. Three plus five equals six, if he does it again, in five. ‘This kid was f**ked up, that kid was f**ked up, what kind of filth is that, f**k the b******s’ The voice of a gurgling priest on the radio resounded over the din. ‘I give him a smack, he’s gone. I turn on the air conditioner and I start feeling my neck,’ Palaniappan told the cameraman. He was right. His cold body was leaving his jacket and T-shirt. We found him huddled around a tent and cooling himself, a dry beer in hand. He was stooped over a potted plant, playing with a cigarette. One eye closed, 3. Three plus five equals” Blake agreed. Ruby had seen the men up close before, but now she was on the other side and at least she was close enough to feel their breath on her. “May I ask a question?” She asked politely. “How did you end up at Beacon?” Weiss asked, her curiosity slowly piqued. “How did a simple interview at school turn into a relationship?” “We took our visit here in the woods and discovered two willing partners for us to walk home with.” The young woman added. Ruby nodded. “Which of us is the faunus?” she asked, giving Weiss a mildly surprised look. “Which of us?” Weiss asked, obviously not liking the prospect of 4. Three plus five equals ten” Bosh In this case, the conditions only need to be satisfied once, instead of 5. Three plus five equals five, right? Right? In the Catholic Church they say they don’t look at church history, and no, I’m not going to say that I’m infallible either, but they said it’s not what they say, and that I don’t see what you’re saying.” “But yeah, the Catholic Church does not look at church history, they said we did not write the bible.” “Did you say ‘we’? The Pope says we did.” “Well, I thought we’re by definition infallible and we did not, and we’re not going to say anything about it, so that’s it. We are not to look at church history and see what we’re supposed to know. I was flattered because
2020
463
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5199–5209 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5199 Examining Citations of Natural Language Processing Literature Saif M. Mohammad National Research Council Canada Ottawa, Canada [email protected]. Abstract We extracted information from the ACL Anthology (AA) and Google Scholar (GS) to examine trends in citations of NLP papers. We explore questions such as: how well cited are papers of different types (journal articles, conference papers, demo papers, etc.)? how well cited are papers from different areas of within NLP? etc. Notably, we show that only about 56% of the papers in AA are cited ten or more times. CL Journal has the most cited papers, but its citation dominance has lessened in recent years. On average, long papers get almost three times as many citations as short papers; and papers on sentiment classification, anaphora resolution, and entity recognition have the highest median citations. The analyses presented here, and the associated dataset of NLP papers mapped to citations, have a number of uses including: understanding how the field is growing and quantifying the impact of different types of papers. 1 Introduction The origins of Natural Language Processing (NLP) go back to the earliest work in Computer Science— when Alan Turing published his seminal paper exploring whether machines can think, and proposed what is now known as the Turing test (Turing, 1950, 2009). A crucial factor in the evolution of NLP as a field of study in its own right was the formation of the Association for Computational Linguistics (ACL) in 1962, and the first ACL conference in 1965.1 Today NLP is a broad interdisciplinary field with a growing number of researchers from Computer Science, Linguistics, Information Science, Psychology, Social Sciences, Humanities, and more joining its ranks. 1One can make a distinction between NLP and Computational Linguistics; however, for this work, we will consider them to be synonymous. Also, ACL was originally named the Association for Machine Translation and Computational Linguistics (AMTCL). It was changed to ACL in 1968. Organizations such as ACL, ELRA, and AFNLP publish peer-reviewed NLP papers that include both journal articles and conference proceedings. Historically, the need for a faster review process has made conference proceedings the dominant form of published research in Computer Science and NLP. With time, the conferences and the types of papers they publish, have evolved. Some conferences, such as EMNLP and ACL, are highly competitive, while others, such as most workshops and LREC, deliberately choose to keep more generous acceptance rates. The publications themselves can be of different types: journal articles, conference papers, short papers, system demonstration papers, shared task papers, workshop papers, etc. New ideas and paradigms have evolved: for example, the rise of statistical NLP in the 1990s and deep learning in the 2010s. With the dawn of a new decade and NLP research becoming more diverse and more popular than it ever has been, this work looks back at the papers already published to identify broad trends in their impact on subsequent scholarly work. Commonly used metrics of research impact on subsequent scholarly work are derived from citations including: number of citations, average citations, h-index, relative citation ratio, and impact factor (Bornmann and Daniel, 2009). However, the number of citations is not always a reflection of the quality or importance of a piece of work. Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical or in an area where the number of scientific publications is low. Furthermore, the citation process can be abused, for example, by egregious self-citations (Ioannidis et al., 2019). Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar (GS), and given the lack of other easily applicable and effec5200 tive metrics, citation analysis is an imperfect but useful window into research impact. Thus citation metrics are often a factor when making decisions about funding research and hiring scientists. Citation analysis can also be used to gauge the influence of outside fields on one’s field and the influence of one’s field on other fields. Therefore, it can be used to determine the relationship of a field with the wider academic community. As part of a broader project on analyzing NLP Literature, we extracted and aligned information from the ACL Anthology (AA) and Google Scholar to create a dataset of tens of thousands of NLP papers and their citations (Mohammad, 2020b, 2019).2 In this paper, we describe work on examining the papers and their citations to identify broad trends within NLP research—overall, across paper types, across publication venues, over time, and across research areas within NLP. Notably, we explored questions such as: how well cited are papers of different types (journal articles, conference papers, demo papers, etc.)? how well cited are papers published in different time spans? how well cited are papers from different areas of research within NLP? etc. The dataset and the analyses have many uses including: understanding how the field is growing; quantifying the impact of different types of papers on subsequent publications; and understanding the impact of various conferences and journals. Perhaps most importantly, though, they serve as a record of the state of NLP literature in terms of citations. All of the data and interactive visualizations associated with this work are freely available through the project homepage.3 2 Background and Related Work The ACL Anthology is a digital repository of public domain, free to access, articles on NLP.4 It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP.5 As of June 2019, it provided access to the full text and metadata for ∼50K articles published since 1965 (the year of the first ACL confer2In separate work we have used the NLP Scholar data to explore gender gaps in Natural Language Processing research; especially, disparities in authorship and citations (Mohammad, 2020a). We have also developed an interactive visualization tool that allows users to search for relevant related work in the ACL Anthology Mohammad (2020c). 3http://saifmohammad.com/WebPages/nlpscholar.html 4https://www.aclweb.org/anthology/ 5ACL licenses its papers with a Creative Commons Attribution 4.0 International License. ence). It is the largest single source of scientific literature on NLP. Various subsets of AA have been used in the past for a number of tasks including: the study of citation patterns and intent (Pham and Hoffmann, 2003; Aya et al., 2005; Teufel et al., 2006; Mohammad et al., 2009; Nanba et al., 2011; Zhu et al., 2015; Radev et al., 2016), generating summaries of scientific articles (Qazvinian et al., 2013), and creating corpora of scientific articles (Bird et al., 2008; Mariani et al., 2018). Perhaps the work closest to ours is that by Anderson et al. (2012), who examine papers from 1980 to 2008 to track the ebb and flow of topics within NLP, the influence of subfields on each other, and the influence of researchers from outside NLP. However, that work did not examine trends in the citations of NLP papers. Google Scholar is a free web search engine for academic literature.6 Through it, users can access the metadata associated with an article such as the number of citations it has received. Google Scholar does not provide information on how many articles are included in its database. However, scientometric researchers estimated that it included about 389 million documents in January 2018 (Gusenbauer, 2019)—making it the world’s largest source of academic information. Thus, there is growing interest in the use of Google Scholar information to draw inferences about scholarly research in general (Howland, 2010; Ordu˜na-Malea et al., 2014; Khabsa and Giles, 2014; Mingers and Leydesdorff, 2015; Mart´ın-Mart´ın et al., 2018) and on scholarly impact in particular (Priem and Hemminger, 2010; Yogatama et al., 2011; Bulaitis, 2017; Ravenscroft et al., 2017; Bos and Nitza, 2019; Ioannidis et al., 2019). This work examines patterns of citations of tens of thousands of NLP papers, both overall and across paper types, venues, and areas of research. 3 Data We now briefly describe how we extracted information from the ACL Anthology and Google Scholar to facilitate the citation analysis. (Further details about the dataset, as well as an analysis of the volume of research in NLP over the years, are available in Mohammad (2020b).) We aligned the information across AA and GS using the paper title, year of publication, and first author last name. 6https://scholar.google.com 5201 Figure 1: A timeline graph of citations received by papers published in each year. Colored segments correspond to papers; the height of a segment is proportional to the number of citations. Hovering over a paper shows metadata. 3.1 ACL Anthology Data The ACL Anthology provides access to its data through its website and a github repository (Gildea et al., 2018).7 We extracted paper title, names of authors, year of publication, and venue of publication from the repository.8 As of June 2019, AA had ∼50K entries; however, this includes forewords, schedules, etc. that are not truly research publications. After discarding them we are left with a set of 44,894 papers.9 3.2 Google Scholar Data Google Scholar does not provide an API to extract information about the papers. This is likely because of its agreement with publishing companies that have scientific literature behind paywalls (Mart´ın-Mart´ın et al., 2018). We extracted citation information from Google Scholar profiles of authors who published at least three papers in the ACL Anthology. A Google Scholar Profile page is a user-created page where authors can include their papers (along with the GS-provided citation information for the papers). Scraping author profile pages is explicitly allowed by GS’s robots exclusion standard. This is also how past work has 7https://www.aclweb.org/anthology/ https://github.com/acl-org/acl-anthology 8Multiple authors can have the same name and the same authors may use multiple variants of their names in papers. The AA volunteer team handles such ambiguities using both semi-automatic and manual approaches (fixing some instances on a case-by-case basis). Additionally, the AA repository includes a file that has canonical forms of author names. 9We used simple keyword searches for terms such as foreword, invited talk, program, appendix and session in the title to pull out entries that were likely to not be research publications. These were then manually examined to verify that they did not contain any false positives. studied Google Scholar (Khabsa and Giles, 2014; Ordu˜na-Malea et al., 2014; Mart´ın-Mart´ın et al., 2018). We collected citation information for 1.1 million papers in total. We will refer to this dataset as GScholar-NLP. Note that GScholar-NLP includes citation counts not just for NLP papers, but also for non-NLP papers published by the authors. GScholar-NLP includes 32,985 of the 44,894 papers in AA (about 74%). We will refer to this subset of the ACL Anthology papers as AA′. The citation analyses presented in this paper are on AA′. Future work will analyze both AA′ and GScholar-NLP to determine influences of other fields on NLP. 4 Examining Citations of NLP Papers We use data extracted from the ACL Anthology and Google Scholar to examine trends in citations through a series of questions. Q1. How many citations have the AA′ papers received? How is that distributed among the papers published in various years? A. ∼1.2 million citations (as of June 2019). Figure 1 shows the screenshot of an interactive timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. Further, the bar has colored segments corresponding to each of the papers; the height of a segment is proportional to the number of citations the paper has received. Thus it is easy to spot the papers that received a large number of citations. Hovering over individual papers reveals additional metadata. 5202 Discussion: With time, not only have the number of papers grown, but also the number of highcitation papers. We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations. The 2010s papers will likely surpass the 2000s papers in the years to come. Q2. How well cited are individual AA′ papers, as in, what is the average number of citations, what is the median, what is the distributison of citations? How well cited are the different types of papers: journal papers, main conference papers, workshop papers, etc.? A. In this and all further analyses, we do not include AA′ papers published in 2017 or later (to allow for at least 2.5 years for the papers to collect citations). There are 26,949 AA′ papers that were published from 1965 to 2016. Figure 2 shows box and whisker plots for: all of these papers (on the left) and for individual paper types (on the right). The whiskers are at a distance of 1.5 times the inter-quartile length. The average number of citations are indicated with the horizontal green dotted lines. Creating a separate class for “Top-tier Conference” is somewhat arbitrary, but it helps make certain comparisons more meaningful. For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences based on low acceptance rates and high citation metrics, but certainly other groupings are also reasonable. Discussion: Overall, the median citation count is 12. 75% of the papers have 34 or fewer citations. The average number of citations (45) is markedly higher than the median (12); this is because of a small number highly cited papers. When comparing different types of papers, we notice a large difference between journal papers and the rest. Even though the number of journal papers in AA (and AA′) is very small (about 2.5%), these papers have the highest median and average citations (55 and 204, respectively). Top-tier conferences come next, followed by other conferences. The differences between each of these pairs is statistically significant (Kolmogorov–Smirnov (KS) test, p < .01).10 Interestingly, the workshop papers and the shared task papers have higher medians 10KS is a non-parametric test that can be applied to compare distributions without needing to make assumptions about the nature of the distributions. Since the citations data is not normally distributed, KS is especially well suited. Figure 2: Citation box plots for papers published 1965– 2016: overall and by type. and averages than the non-top-tier conferences. These differences are also significant (KS, p < .01). Q3. How well cited are recent AA′ papers: say those published in the last decade (2010–2016)? How well cited are papers that were all published in the same year, say 2014? Are the citation distributions for individual years very different from those for larger time spans, say 2010–2016? Also, how well cited are papers 5 years after they are published? A. The top of Figure 3 shows citation box plots for 2010–2016; the bottom shows plots for papers published in 2014. Discussion: Observe that, in general, these numbers are markedly lower than the those in Figure 2. That is expected as these papers have had less time to accrue citations. Observe that journal papers again have the highest median and average; however, the gap between journals and top-tier conferences has reduced considerably. The shared task papers have a signifi5203 2010–2016 2014 Figure 3: Citation box plots for papers: published 2010–2016 (top) and published in 2014 (bottom). cantly higher average than workshop and non-toptier conferences. Examining the data revealed that many of the task description papers and the competition winning systems’ system-description papers received a large number of citations (while the majority of the other system description papers received much lower citations). Shared tasks have also been particularly popular in the 2010s compared to earlier years. The plots for 2014 (bottom of Figure 3) are similar to that of 2010–2016. (Although, system demo papers published in that year are better cited Figure 4: Citation box plots for journal articles and toptier conference papers from various time spans. than the larger set from the 2010–2016 period.) This plot also gives an idea of citation patterns for papers 5 years after they have been published. Q4. If we only consider journal papers and top-tier conferences, how well cited are papers from various time spans? A. Figure 4 shows the numbers for four time spans. Discussion: Observe that the 1990s and the 2000s have markedly higher medians and averages than other time periods. The early 1990s, which have the highest average, were an interesting period for NLP with the emergence of statistical approaches (especially from speech processing) and the use of data from the World Wide Web. The 2000–2010 period, which saw an intensification of the statistical data-driven approaches, is notable for the highest median. The high average in the 1990s is likely because of some seminal papers that obtained a very high number of citations. (Also the 1990’s had fewer papers than the 2010s, and thus the average is impacted more by the very high-citation papers.) The drop off in the average and median for recent papers is largely because they have not had as much time to collect citations. Q5. How well cited are papers from individual NLP venues? A. Figure 5 (top) shows the citation box plots for 1965–2016 papers from individual venues. The plots for workshops, system, demos, shared tasks, and tutorials are shown as well for ease of comparison. Figure 5 (bottom) shows the same box plots for 2010–2016 papers. 5204 Figure 5: Citation box plots for papers by venue, type: papers published 1965–2016 (top) and papers published 2010–2016 (bottom). Discussion: CL Journal has the highest median and average citation numbers. ACL comes second, closely followed by EMNLP and NAACL. The gap between CL Journal and ACL is considerably reduced when considering the 2010–2016 papers. IJCNLP and LREC have the highest numbers among the non-top-tier conferences, but their numbers remain lower than the numbers for SemEval, non-SemEval shared tasks, and workshops. TACL, a journal, has substantially lower citation numbers than CL Journal, ACL, EMNLP, and NAACL (Figure 5 top). However, it should be noted that TACL only began publishing since 2013. (Also, with a page limit of about ten, TACL papers are arguably more akin to conference papers than journal papers.) When considering only the 2010– 2016 papers, TACL’s citation numbers are second only to CL Journal (Figure 5 bottom). 5205 Figure 6: Citations box plots for long and short ACL papers published between 2003 and 2016. When considering 2010–2016 papers, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high averages (surpassing or equalling those of COLING and EACL); however their median citations are lower. (This is consistent with the trends we saw earlier in Q3.) Q6. How well cited are long and short ACL main conference papers, respectively? A. Short papers were introduced by ACL in 2003. Since then ACL is by far the venue with the highest number of short papers (compared to other venues). So we compare long and short papers published at ACL since 2003 to determine their average citations. Figure 6 shows the citation box plots for long and short papers published between 2003 and 2016 at ACL. The two distributions are statistically different (Kolmogorov–Smirnov test, p < .01). Discussion: In 2003, the idea of short papers was a novelty. It was conceived with the idea that there needs to a be a place for focused contributions that do not require as much space as a long paper. The format gained popularity quickly, and short papers at ACL tend to be incredibly competitive (sometimes having a lower acceptance rate than long papers). While there have been several influential short papers, it remains unclear how well-cited they are as a category. This analysis sheds some light to that end. We find that, on average, long papers get almost three times as many citations as short papers; the median for long papers is two-and-half times that of short papers. Figure 7: Stream graph of #papers by #citations. The contribution of each venue and paper type is stacked one on top of another. Q7. How do different venues and paper types compare in terms of the volume of papers pertaining to various amounts of citation? A. Figure 7 shows a stream graph of #papers by #citations. The contributions of each of the venues and paper types are stacked one on top of another (bands of colors). For a given point on the citations axis (say k), the width of the stream corresponds to the number of papers with k citations. Discussion: It is not surprising to see that the #papers by #citations curve follows a power law distribution. (There are lots of papers with 0 or few citations, but the number drops of exponentially with the number of citations.) Workshop papers (light grey) are the most numerous, followed by LREC (green)—as observable from their wide bands. The bands for ACL, COLING, EMNLP, and NAACL are easily discernable but the bands for many others, especially CL Journal and TACL are barely discernable indicating low relative volume of their papers. Observe that the bands for workshops and LREC are markedly wider in the 0 to 10 citations range than in the 11 and more citations range of the x axis. In contrast, the widths of the bands for top-tier conferences, such as ACL and EMNLP, remain relatively stable. Nonetheless, in terms of raw volume, it is worth noting that the workshops and LREC each produce more papers that are cited ten or more times than any other venue. As one considers even higher citations, the top-tier conferences become more dominant. 5206 Figure 8: The percentage of AA′ papers in various citation bins. In parenthesis: #papers. Q8. What percentage of papers are cited more than 10 times?11 How many papers are cited 0 times? A. Figure 8 shows the percentage of AA′ papers in various citation bins: 0, 1–9, 10–99, and 1000–9999. (The number of papers is shown in parenthesis.) Discussion: About 56% of the papers are cited ten or more times. 6.4% of the papers are never cited. (Note also that some portion of the 1–9 bin likely includes papers that only received self-citations.) It would be interesting to compare these numbers with those in other fields such as medical sciences, physics, linguistics, machine learning, and psychology. Q9. How well cited are areas within NLP? A. We used word bigrams in the titles of papers to sample papers from various areas.12 The title has a privileged position in a paper. It serves many functions, but most importantly, it conveys what the paper is about. For example, a paper with the bigram machine translation in the title is likely about machine translation (MT). We removed function words from the titles of papers in AA, and extracted all bigrams. Figure 9 shows, in order of decreasing frequency, the list of 66 bigrams that occurred in more than 100 papers. For each bigram, the yellow/green bar shows the median citations of the corresponding papers. The average citations and the number of papers are shown in parenthesis. 11Google Scholar invented the i-10 index as another measure of author research impact. It stands for the number of papers by an author that received ten or more citations. (Ten here is somewhat arbitrary, but reasonable.) 12Other approaches such as clustering are also reasonable; however, results with those might not be easily reproducible. We chose the title bigrams approach for its simplicity. Figure 9: Bar graph of median citations. Title bigrams ordered by number of papers. In parenthesis: average citations, #papers. 5207 Discussion: The graph shows, for example, that the bigram machine translation occurred in 1,659 AA′ papers that have a median citation count of 14, while the average is 68.8. The average is one of the highest among the bigrams, despite the median being more middle of the pack. This suggests the presence of heavily cited, outlier, papers. Indeed, the most cited paper in all of AA′ is an MT paper with more than 9000 citations (Papineni et al., 2002). Note that not all MT papers have machine translation in the title. Although non-random, this sample of 1,659 papers is arguably a reasonably representative sample of MT papers. Third in the list are papers with statistical machine in the title—most commonly from the phrase statistical machine translation. One expects considerable overlap across these sets of papers. However, machine translation likely covers a broader range of research including work done before statistical MT was introduced, as well as work on neural MT and MT evaluation. The bigrams with the highest median include: sentiment classification (31), anaphora resolution (30), and entity recognition (25). The bigrams with the lowest median include: language resources (5), textual entailment (8), translation system (9), and cross language (9). The bigrams with the highest average include: sentiment classification (181.6), speech tagging (107.9), sentiment analysis (104.0), and statistical machine (90.1).13 One can access the lists of highly cited papers, pertaining to each of the bigrams, through the interactive visualization. 5 Limitations and Future Work We list below some ideas of future work that we did not explore in this paper: • Analyze NLP papers that are published outside of the ACL Anthology. • Measure involvement of the industry in NLP publications over time. • Measure the impact of research publications in other ways beyond citations. Identify papers that have made substantial contributions in non-standard ways. A list of limitations and ethical considerations associated with this work is available online.14 13Note that simply composing titles with these high-citation bigrams is not expected to attract a large number of citations. 14https://medium.com/@nlpscholar/about-nlp-scholar62cb3b0f4488 6 Conclusions We extracted citation information for ∼1.1M papers from Google Scholar profiles of researchers who published at least three papers in the ACL Anthology. We used the citation counts of a subset (∼27K papers) to examine patterns of citation across paper types, venues, over time, and across areas of research within NLP. We showed that only about 56% of the papers are cited ten or more times. CL Journal has the most cited papers, but the citation gap between CL journal and top-tier conferences has reduced in recent years. On average, long papers get almost three times as many citations as short papers. In case of popular shared tasks, the task-description papers and competition-winning system-description papers often receive a considerable number of citations. So much so that the average number of citations for the shared task papers is higher than the average for non-top-tier conferences. The papers on sentiment classification, anaphora resolution, and entity recognition have the highest median citations. Workshop papers and the shared task papers have higher median and average citations than the non-top-tier conferences. The analyses presented here, and the associated dataset of papers mapped to citations, have a number of uses including, understanding how the field is growing and quantifying the impact of different types of papers. In separate work, we explored the use of the dataset to detect gender disparities in authorship and citations (Mohammad, 2020a). The dataset can potentially also be used to compare patterns of citations in NLP with those in other fields. Finally, we note again that citations are not an accurate reflection of the quality or importance of individual pieces of work. A crucial direction of future work is to develop richer ways of capturing scholarly impact. Acknowledgments This work was possible due to the helpful discussion and encouragement from a number of awesome people including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Iryna Gurevych, Rebecca Knowles, Isar Nejadgholi, and Peter Turney. Also, a big thanks to the ACL Anthology and Google Scholar Teams for creating and maintaining wonderful resources. 5208 References Ashton Anderson, Dan McFarland, and Dan Jurafsky. 2012. Towards a computational history of the acl: 1980-2008. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 13–21. Selcuk Aya, Carl Lagoze, and Thorsten Joachims. 2005. Citation classification and its applications. In Knowledge Management: Nurturing Culture, Innovation, and Technology, pages 287–298. World Scientific. Steven Bird, Robert Dale, Bonnie J Dorr, Bryan Gibson, Mark Thomas Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir R Radev, and Yee Fan Tan. 2008. The acl anthology reference corpus: A reference dataset for bibliographic research in computational linguistics. Lutz Bornmann and Hans-Dieter Daniel. 2009. The state of h index research. EMBO reports, 10(1):2–6. Arthur R Bos and Sandrine Nitza. 2019. Interdisciplinary comparison of scientific impact of publications using the citation-ratio. Data Science Journal, 18(1). Zoe Bulaitis. 2017. Measuring impact in the humanities: Learning from accountability and economics in a contemporary history of cultural value. Palgrave Communications, 3(1):7. Daniel Gildea, Min-Yen Kan, Nitin Madnani, Christoph Teichmann, and Mart´ın Villalba. 2018. The ACL anthology: Current state and future directions. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 23–28, Melbourne, Australia. Association for Computational Linguistics. Michael Gusenbauer. 2019. Google scholar to overshadow them all? comparing the sizes of 12 academic search engines and bibliographic databases. Scientometrics, 118(1):177–214. Jared L Howland. 2010. How scholarly is google scholar? a comparison to library databases. John PA Ioannidis, Jeroen Baas, Richard Klavans, and Kevin W Boyack. 2019. A standardized citation metrics author database annotated for scientific field. PLoS biology, 17(8):e3000384. Madian Khabsa and C Lee Giles. 2014. The number of scholarly documents on the public web. PloS one, 9(5):e93949. Joseph Mariani, Gil Francopoulo, and Patrick Paroubek. 2018. The nlp4nlp corpus (i): 50 years of publication, collaboration and citation in speech and language processing. Frontiers in Research Metrics and Analytics, 3:36. Alberto Mart´ın-Mart´ın, Enrique Orduna-Malea, Mike Thelwall, and Emilio Delgado L´opez-C´ozar. 2018. Google scholar, web of science, and scopus: A systematic comparison of citations in 252 subject categories. Journal of Informetrics, 12(4):1160–1177. John Mingers and Loet Leydesdorff. 2015. A review of theory and practice in scientometrics. European journal of operational research, 246(1):1–19. Saif Mohammad, Bonnie Dorr, Melissa Egan, Ahmed Hassan, Pradeep Muthukrishan, Vahed Qazvinian, Dragomir Radev, and David Zajic. 2009. Using citations to generate surveys of scientific paradigms. In Proceedings of human language technologies: The 2009 annual conference of the North American chapter of the association for computational linguistics, pages 584–592. Saif M. Mohammad. 2019. The state of nlp literature: A diachronic analysis of the acl anthology. arXiv preprint arXiv:1911.03562. Saif M. Mohammad. 2020a. Gender gap in natural language processing research: Disparities in authorship and citations. In Proceedings of the 2020 Annual Conference of the Association for Computational Linguistics, Seattle, USA. Saif M. Mohammad. 2020b. Nlp scholar: A dataset for examining the state of nlp research. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC-2020), Marseille, France. Saif M. Mohammad. 2020c. Nlp scholar: An interactive visual explorer for natural language processing literature. In Proceedings of the 2020 Annual Conference of the Association for Computational Linguistics, Seattle, USA. Hidetsugu Nanba, Noriko Kando, and Manabu Okumura. 2011. Classification of research papers using citation links and citation types: Towards automatic review article generation. Advances in Classification Research Online, 11(1):117–134. Enrique Ordu˜na-Malea, Juan Manuel Ayll´on, Alberto Mart´ın-Mart´ın, and Emilio Delgado L´opez-C´ozar. 2014. About the size of google scholar: playing the numbers. arXiv preprint arXiv:1407.6239. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Son Bao Pham and Achim Hoffmann. 2003. A new approach for scientific citation classification using cue phrases. In Australasian Joint Conference on Artificial Intelligence, pages 759–771. Springer. Jason Priem and Bradely H Hemminger. 2010. Scientometrics 2.0: New metrics of scholarly impact on the social web. First monday, 15(7). 5209 Vahed Qazvinian, Dragomir R Radev, Saif M Mohammad, Bonnie Dorr, David Zajic, Michael Whidby, and Taesun Moon. 2013. Generating extractive summaries of scientific paradigms. Journal of Artificial Intelligence Research, 46:165–201. Dragomir R Radev, Mark Thomas Joseph, Bryan Gibson, and Pradeep Muthukrishnan. 2016. A bibliometric and network analysis of the field of computational linguistics. Journal of the Association for Information Science and Technology, 67(3):683–706. James Ravenscroft, Maria Liakata, Amanda Clare, and Daniel Duma. 2017. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements. PloS one, 12(3):e0173152. Simone Teufel, Advaith Siddharthan, and Dan Tidhar. 2006. Automatic classification of citation function. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 103–110. Alan M Turing. 1950. Computing machinery and intelligence-am turing. Mind, 59(236):433. Alan M Turing. 2009. Computing machinery and intelligence. In Parsing the Turing Test, pages 23–65. Springer. Dani Yogatama, Michael Heilman, Brendan O’Connor, Chris Dyer, Bryan R Routledge, and Noah A Smith. 2011. Predicting a scientific community’s response to an article. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 594–604. Xiaodan Zhu, Peter Turney, Daniel Lemire, and Andr´e Vellino. 2015. Measuring academic influence: Not all citations are equal. Journal of the Association for Information Science and Technology, 66(2):408– 427.
2020
464
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5210–5217 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5210 How Can We Accelerate Progress Towards Human-like Linguistic Generalization? Tal Linzen Department of Cognitive Science Johns Hopkins University [email protected] Abstract This position paper describes and critiques the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become a central tool for measuring progress in natural language understanding. This paradigm consists of three stages: (1) pretraining of a word prediction model on a corpus of arbitrary size; (2) fine-tuning (transfer learning) on a training set representing a classification task; (3) evaluation on a test set drawn from the same distribution as that training set. This paradigm favors simple, low-bias architectures, which, first, can be scaled to process vast amounts of data, and second, can capture the fine-grained statistical properties of a particular data set, regardless of whether those properties are likely to generalize to examples of the task outside the data set. This contrasts with humans, who learn language from several orders of magnitude less data than the systems favored by this evaluation paradigm, and generalize to new tasks in a consistent way. We advocate for supplementing or replacing PAID with paradigms that reward architectures that generalize as quickly and robustly as humans. 1 Introduction The special session of the 2020 Annual Meeting of Association for Computational Linguistics invites us to take stock of the progress made in the field in the last few years. There is no question that we have made significant progress in a range of applications: current machine translation systems for high-resource languages, for example, are undeniably better than those we had a decade ago. This opinion piece will focus on a different question: are we making progress towards the classic goal of mimicking human linguistic abilities in machines—towards a model that acquires language as efficiently as humans, and generalizes it as humans do to new structures and contexts (“tasks”)? I will argue that an evaluation paradigm that has rapidly established itself as one of the main tools for measuring progress in the field—a paradigm I will term, for want of a catchier name, Pretraining-Agnostic Identically Distributed evaluation (PAID)—encourages progress in a direction that is at best orthogonal to the goal of human-like generalization. Because it does not consider sample efficiency, this approach rewards models that can be trained on massive amounts of data, several orders of magnitude more than a human can expect to be exposed to. And because benchmark scores are computed on test sets drawn from the same distribution as their respective training sets, this paradigm favors models that excel in capturing the statistical patterns of particular data sets over models that generalize as a human would. 2 Human-like Generalization Humans learn language from much more limited exposure than most contemporary NLP systems. An analysis of recordings taken in the environment of the child of an MIT professor between the ages of 9 and 24 months found that the child heard or produced approximately eight million words over this 15-month period (Roy et al., 2015). Children in lower socioeconomic status families in Western societies receive significantly less linguistic input than that (around 3 million words per year, Hart and Risley 1995); even more strikingly, members of the Tsimane community in Bolivia spend about 15 times less time per hour speaking to their children than do highly educated American families (Cristia et al., 2019). If NLP systems were as sample-efficient as Tsimane children, far fewer languages would be considered “low-resource languages”. Despite the limited amount of exposure to their language, humans generalize their linguistic knowl5211 edge in a consistent way to structures that are infrequent or non-existent in corpora (Sprouse et al., 2013), and quickly learn to do new things with language (what we sometimes refer to in NLP as “tasks”). As I discuss below, this is not the case for current deep learning systems: when tested on cases sampled from a distribution that differs from the one they were trained on, their behavior is unpredictable and inconsistent with that of humans (Jia and Liang, 2017; McCoy et al., 2019b), and they require extensive instruction on each new task (Yogatama et al., 2019). Humans’ rapid and consistent generalization abilities rely on powerful inductive biases, which likely arise from a combination of innate building blocks and experience with diverse learning problems (Lake et al., 2017). Systems that generalize like humans would be useful not only for NLP, but also for the scientific study of human language acquisition and processing (Keller, 2010; Dupoux, 2018). But, as I will argue in the next two sections, it is unclear whether our dominant evaluation paradigms are getting us closer to this goal. 3 Pretraining-Agnostic Evaluation Over the last two years, deep learning systems have obtained rapidly increasing scores on language understanding benchmarks such as GLUE (Wang et al., 2019b) or SuperGLUE (Wang et al., 2019a). These benchmarks aggregate multiple supervised classification tasks—such as sentiment analysis, linguistic acceptability judgments, or entailment detection—and collate the scores obtained on those tasks into a leaderboard, with a single headline score for each model averaging its scores on each individual task. For each of these classification tasks, a data set that was generated by a particular process, often involving crowdsourcing, is randomly split into two: a training set, which the system is allowed to observe, and a held-out test set, on which it is evaluated. A standard recipe has emerged for achieving high scores on such benchmarks. A neural network—typically, one based on the transformer architecture (Vaswani et al., 2017)—is pretrained on a denoising objective, such as filling in one or more blanks in a vast number of sentences. This network is then fine-tuned (performs transfer learning) on the benchmark’s supervised tasks, each of which include a much smaller number of training examples than the pretraining corpus (Howard and Ruder, 2018; Peters et al., 2018). The T5 model (Raffel et al., 2019)—the system that boasted the highest score on SuperGLUE at the time of writing—achieved an average accuracy of 88.9% on this benchmark, slightly lower than that of untrained human annotators (89.8%), and more than 20 percentage points higher than the score obtained just a few months earlier by BERT (Devlin et al., 2019; Wang et al., 2019a). This jump in accuracy does not reflect significant modeling innovations: both BERT and T5 are transformers trained on similar objectives that differ primarily in their scale. When ranking systems, leaderboards such as SuperGLUE do not take into account the amount of pretraining data provided to each model. Pretraining corpora are not standardized, and the amount of pretraining data is not always easy to discern from the papers reporting on such systems. Here is my attempt to reconstruct the recent evolution of pretraining corpus sizes.1 BERT, uploaded to arXiv in October 2018, was trained on 3.3 billion words; XLNet (Yang et al., June 2019), was trained on 78 GB of text, or approximately 13 billion words; RoBERTa (Liu et al., July 2019) was trained on 160 GB of text, or around 28 billion words; and T5 (Raffel et al., October 2019) was trained on 750 GB of text, or approximately 130 billion words. When we rely on a single leaderboard to compare systems trained on corpora with such a large range of sizes, we are not comparing architectures, but rather interactions of architectures, corpus sizes, and computational resources available for training. While this may be a useful comparison for an engineer who seeks to plug an existing trained model into a larger pipeline, this approach is unlikely to advance us towards the goal advocated in this article. The 130 billion word corpus that T5 was trained on is much larger than the corpus that a human can expect to be exposed to before adulthood (fewer than 100 million words, see Section 2). But a leaderboard that evaluates only bottom-line transfer learning accuracy inherently disadvantages a sample-efficient model pretrained on a few dozen million words compared to a model such as T5. For all we know, it is possible that architectures 1Corpus sizes reported in massive-corpus pretraining papers are often specified in gigabytes, or number of modelspecific subword units, instead of measures such as number of words that are easier to compare across articles. My estimates are based on an average English word length of 4.7 characters and a space or punctuation mark after each word. 5212 rewarded by PAID, such as massive transformers, only work well when given an amount of data that is orders of magnitude greater than that available to humans. If that is the case, our exploration of the space of possible models could be going in a direction that is orthogonal to the one that might lead us to models that can imitate humans’ sample efficiency (one example of such direction is neural networks with explicit symbolic structure, which are harder to scale up, but perform well on smaller data sets: Kuncoro et al. 2018; Wilcox et al. 2019). 4 Identically Distributed Training Set and Test Set The remaining two letters of the PAID acronym refer to the practice of evaluating success on classification tasks using training and test set generated using the same process. Typically, a single data set is collected and is randomly split into a training portion and test portion. While this may seem reasonable from a machine learning perspective, it has become clear that this form of evaluation obscures possible mismatches between the generalizations that we as humans believe a system performing the task should acquire, and the generalizations that the system in fact extracts from the data. Consider, for example, crowdsourced natural language inference (NLI) data sets, in which workers are asked to generate a sentence that contradicts the prompt shown to them (Bowman et al., 2015). One strategy that crowdworkers adopt when generating a contradiction is to simply negate the prompt, for example by inserting the word not. This strategy is often effective: the man is sleeping contradicts the man is not sleeping. Conversely, it is much less likely that the worker would use the word not when asked to generate a sentence that is entailed by the prompt. Taken together, such worker choices lead to a strong correlation between the presence of the word not in the hypothesis and the label CONTRADICTION. It would be surprising if low-bias learners such as neural networks did not notice such a correlation, and indeed they do, leading them to respond CONTRADICTION with high probability any time the hypothesis contains a negation word (Gururangan et al., 2018; Poliak et al., 2018). Of course, relying on the presence of the word not is not a generally valid inference strategy; for example, the man is awake entails, rather than contradicts, the man is not sleeping. Numerous generalization issues of this sort have been documented, for NLI and for other tasks. In the syntactic domain, McCoy et al. (2019b) showed that BERT fine-tuned on the crowdsourced MultiNLI data set (Williams et al., 2018) achieves high accuracy on the MultiNLI test set, but shows very little sensitivity to word order when tested on constructed examples that require an analysis of the structure of the sentence; for example, this model is likely to conclude that the detective followed the suspect entails the suspect followed the detective. In short, the models, unable to discern the intentions of the data set’s designers, happily recapitulate any statistical patterns they find in the training data. With a random training/test split, any correlation observed in the training set will hold approximately for the test set, and a system that learned it could achieve high test set accuracy. And indeed, we have models that excel in the PAID paradigm, even exceeding the performance of human annotators on the test portion of the corpus used for fine-tuning (Nangia and Bowman, 2019), but, when tested on controlled examples, make mistakes that a human would rarely make.2 The generalizations that a statistical model extracts from the data are always the result of the interaction between the model’s inductive biases and the statistical properties of the data set. In the case of BERT’s insensitivity to word order in NLI, the model does not seem to have a strong inductive bias one way or another; its sensitivity to word order varies widely depending on the weight initialization of the fine-tuning classifier and the order of the fine-tuning examples (McCoy et al., 2019a), and its syntactic behavior in the inference task can be made to be more consistent with human intuitions if the training set is augmented to include a larger number of examples illustrating the importance of word order (Min et al., 2020). While BERT is capable of learning to use syntax for inference given a sufficiently strong signal, then, it prefers to use other heuristics, if possible. This contrasts with human-like generalization in this task, which would likely start from the assumption that any language understanding task should recruit our 2Comparisons between human annotators and transformers are arguably unfair: before observing the test set, the models receive hundreds of thousands of examples of the output of the data-generating process. This contrasts with humans annotators, who need to perform the task based on their general language understanding skills. It would be an entertaining though somewhat cruel experiment to repeat the comparison after matching the amount of exposure that humans and pretrained transformers receive to the quirks of the data set. 5213 knowledge of syntax: it would most likely be difficult to convince humans to ignore syntax when understanding a sentence, as BERT does. 5 The Generalization Leaderboard What is the way forward? My goal is not to argue that there is no value to the leaderboard approach, where a single number or a small set of numbers can be used to quickly compare models. Despite the drawbacks of this approach—in particular, its tendency to obscure the fine-grained strengths and weaknesses of particular models, as I discuss below—hill climbing on a metric can enable a productive division of labor between groups that develop strong benchmarks, groups that propose new models and inference methods, and groups that have the engineering skills and computational resources necessary to train those models on the number of GPUs they require to thrive. Instead, my argument is that the current division of labor is unproductive. At the risk of belaboring the mountaineering metaphor, one might say that groups with access to engineering and computing resources are climbing the PAID hill, while other groups, which document the same models’ unreliable generalization behavior—or retrain them on smaller data sets to produce the learning curves that are often missing from engineering papers—are climbing the interpretability track hill, producing papers that are more and more sophisticated and well-respected but do not influence the trajectory of mainstream model development. This section describes some design decisions that can lead to better alignment between the two sets of research groups. Many of these points are not new—in fact, some of these properties were standard in evaluation paradigms 10 or 20 years ago—but are worth revisiting given recent evaluation trends. Standard, moderately sized pretraining corpora. To complement current evaluation approaches, we should develop standard metrics that promote sample efficiency. At a minimum, we should standardize the pretraining corpus across all models, as some CoNLL shared tasks do. Multiple leaderboards can be created that will measure performance on increasingly small subsets of this pretraining corpus size—including ones that are smaller than 100 million words. To make stronger contact with the human language acquisition literature, a leaderboard could compare models on their ability to learn various linguistic generalizations from the CHILDES repository of child-directed speech (MacWhinney, 2000). Independent evaluation in multiple languages. A model can be sample-efficient for English, but not for other languages. We should ensure that our architectures, like humans learners, are not optimized for English (Bender, 2011). To do so, we should develop matched training corpora and benchmarks for multiple languages. A composite score could reflect average performance across languages (Hu et al., 2020). In keeping with our goal of mimicking humans, who are known for their ability to learn any language without learning English first, we should train and test the models separately on each language, instead of focusing on transfer from English to other languages—an important, but distinct, research direction. What about grounding? In response to studies comparing training corpus sizes between deep learning models and humans (e.g., van Schijndel et al. 2019), it is sometimes pointed out that humans do not learn language from text alone—we also observe the world and interact with it. This, according to this argument, renders the comparison meaningless. While the observation that children learn from diverse sources of information is certainly correct, it is unclear whether any plausible amount of non-linguistic input could offset the difference between 50 million words (humans) and 130 billion words (T5). Instead of taking this observation as a carte blanche to ignore sample efficiency, then, we should address it experimentally, by collecting multimodal data sets (Suhr et al., 2019; Hudson and Manning, 2019), developing models that learn from them efficiently, and using the Generalization Leaderboard to measure how effective this signal is in aligning the model’s generalization behavior with that of humans. Normative evaluation. Performance metrics should be derived not from samples from the same distribution as the fine-tuning set, but from what we might term normative evaluation: expert-created controlled data sets that capture our intuitions about how an agent should perform the task (Marelli et al., 2014; Marvin and Linzen, 2018; Warstadt et al., 2019; Ettinger, 2020). Such data sets should be designed to be difficult to solve using heuristics that ignore linguistic principles. While experts are more expensive than crowdworkers, the payoff in terms of data set quality is likely to be consider5214 able. In parallel, we should continue to explore approaches such as adversarial filtering that may limit crowdworkers’ ability to resort to shortcuts (Zellers et al., 2018; Nie et al., 2019). Normative evaluation is related to but distinct from adversarial evaluation. Adversarial attacks usually focus on a specific trained model, starting from an example that the model classifies correctly, and perturbing it in ways that, under the normative definition of the task, should not affect the classifier’s decision. For example, adversarial evaluation for a given question answering system may take an existing instance from the data set, and find an irrelevant sentence that, when added to the paragraph that the question is about, changes the system’s response (Jia and Liang, 2017). By contrast, the goal of the normative evaluation paradigm is not to fool a particular system by exploiting its weaknesses, but simply to describe the desirable performance on the task in a unambiguous way. Test-only benchmarks. A central point that bears repeating is that we should not fine-tune our models on the evaluation benchmark. Despite our best efforts, we may never be able to create a benchmark that does not have unintended statistical regularities. Fine-tuning on the benchmark may clue the model into such unintended correlations (Liu et al., 2019a). Any pretrained model will still need to be taught how to perform the transfer task, of course, but this should be done using a separate data set, perhaps one of those that are currently aggregated in GLUE. Either way, the Generalization Leaderboard should favor models that, like humans, are able to perform tasks with minimal instruction (few-shot learning, Yogatama et al. 2019). What about efficiency? The PAID paradigm is agnostic not only to pretraining resources, but also to properties of the model such as the number of parameters, the speed of inference, or the number of GPU hours required to train it. These implementational-level factors (Marr, 1982) are orthogonal to our generalization concerns, which are formulated at the level of input–output correspondence. If efficiency is a concern, however, such properties can be optimized directly by modifying pretraining-agnostic benchmarks to take them into account (Schwartz et al., 2019). Breakdown by task and phenomenon. Benchmarks should always provide a detailed breakdown of accuracy by task and linguistic phenomenon: a model that obtains mediocre average performance, but captures a particular phenomenon very well, can be of considerable interest. Discouragingly, even though GLUE reports such task-specific scores—and even includes diagnostic examples created by experts—these finer-grain results have failed to gain the same traction as the headline GLUE benchmark. Other than exhorting authors to pay greater attention to error analysis in particular and linguistics in general—granted, an exhortation without which no ACL position piece can be considered truly complete—we should insist, when reviewing papers, that authors include a complete breakdown by phenomenon as an appendix, and discuss noteworthy patterns in the results. For authors that strongly prefer that their paper include a headline number that is larger than numbers reported in previous work, the leaderboard could offer alternative headline metrics that would reward large gains in one category even when those are offset by small losses in others. 6 Conclusion I have described the currently popular PretrainingAgnostic Identically Distributed paradigm, which selects for models that can be trained easily on an unlimited amount of data, and that excel in capturing arbitrary statistical patterns in a fine-tuning data set. While such models have considerable value in applications, I have advocated for a parallel evaluation ecosystem—complete with a leaderboard, if one will motivate progress—that will reward models for their ability to generalize in a human-like way. Human-like inductive biases will improve our models’ ability to learn language structure and new tasks from limited data, and will align the models’ generalization behavior more closely with human expectations, reducing the allure of superficial heuristics that do not follow linguistic structure, and the prevalence of adversarial examples, where changes to the input that are insignificant from a human perspective turn out to affect the network’s behavior in an undesirable way. References Emily M. Bender. 2011. On achieving and evaluating language-independence in NLP. Linguistic Issues in Language Technology, 6(3):1–26. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. 5215 In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Alejandrina Cristia, Emmanuel Dupoux, Michael Gurven, and Jonathan Stieglitz. 2019. Child-directed speech is infrequent in a forager-farmer population: a time allocation study. Child Development, 90(3):759–773. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Emmanuel Dupoux. 2018. Cognitive science in the era of artificial intelligence: A roadmap for reverseengineering the infant language-learner. Cognition, 173:43–59. Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112. Association for Computational Linguistics. Betty Hart and Todd R. Risley. 1995. Meaningful differences in the everyday experience of young American children. Baltimore: P. H. Brookes. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. arXiv preprint 2003.11080. Drew A. Hudson and Christopher D. Manning. 2019. GQA: A new dataset for real-world visual reasoning and compositional question answering. Conference on Computer Vision and Pattern Recognition (CVPR). Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031. Association for Computational Linguistics. Frank Keller. 2010. Cognitively plausible models of human language processing. In Proceedings of the ACL 2010 Conference Short Papers, pages 60–67, Uppsala, Sweden. Association for Computational Linguistics. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436. Association for Computational Linguistics. Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. 2017. Building machines that learn and think like people. Behavioral and Brain Sciences, 40. Nelson F. Liu, Roy Schwartz, and Noah A. Smith. 2019a. Inoculation by fine-tuning: A method for analyzing challenge datasets. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2171–2179, Minneapolis, Minnesota. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint 1907.11692. Brian MacWhinney. 2000. The CHILDES Project: Tools for Analyzing Talk. Third edition. Lawrence Erlbaum Associates, Mahwah, NJ. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA). David Marr. 1982. Vision: A computational investigation into the human representation and processing of visual information. New York: Freeman. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Association for Computational Linguistics. 5216 R. Thomas McCoy, Junghyun Min, and Tal Linzen. 2019a. Berts of a feather do not generalize together: Large variability in generalization across models with similar test set performance. R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019b. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, and Tal Linzen. 2020. Syntactic data augmentation increases robustness to inference heuristics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, Washington. Association for Computational Linguistics. Nikita Nangia and Samuel R. Bowman. 2019. Human vs. muppet: A conservative estimate of human performance on the GLUE benchmark. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4566–4575, Florence, Italy. Association for Computational Linguistics. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial NLI: A new benchmark for natural language understanding. arXiv preprint 1910.14599. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint 1910.10683. Brandon C. Roy, Michael C. Frank, Philip DeCamp, Matthew Miller, and Deb Roy. 2015. Predicting the birth of a spoken word. Proceedings of the National Academy of Sciences, 112(41):12663–12668. Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn’t buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5831–5837, Hong Kong, China. Association for Computational Linguistics. Roy Schwartz, Jesse Dodge, and Noah A. Smith. 2019. Green AI. arXiv preprint 1907.10597. Jon Sprouse, Carson T Sch¨utze, and Diogo Almeida. 2013. A comparison of informal and formal acceptability judgments using a random sample from Linguistic Inquiry 2001–2010. Lingua, 134:219–248. Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 6418–6428, Florence, Italy. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv preprint 1905.00537. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2019. BLiMP: A benchmark of linguistic minimal pairs for English. arXiv preprint 1912.00582. Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, and Roger Levy. 2019. Structural supervision improves learning of non-local grammatical dependencies. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3302–3312, Minneapolis, Minnesota. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American 5217 Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint 1906.08237. Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint 1901.11373. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93– 104, Brussels, Belgium. Association for Computational Linguistics.
2020
465
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5218–5230 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5218 How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence Haoxi Zhong1, Chaojun Xiao1, Cunchao Tu1, Tianyang Zhang2, Zhiyuan Liu1∗, Maosong Sun1 1Department of Computer Science and Technology Institute for Artificial Intelligence, Tsinghua University, Beijing, China Beijing National Research Center for Information Science and Technology, China 2Beijing Powerlaw Intelligent Technology Co., Ltd., China [email protected], {xcjthu,tucunchao}@gmail.com, [email protected], {lzy,sms}@tsinghua.edu.cn Abstract Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain. In recent years, LegalAI has drawn increasing attention rapidly from both AI researchers and legal professionals, as LegalAI is beneficial to the legal system for liberating legal professionals from a maze of paperwork. Legal professionals often think about how to solve tasks from rulebased and symbol-based methods, while NLP researchers concentrate more on data-driven and embedding methods. In this paper, we describe the history, the current state, and the future directions of research in LegalAI. We illustrate the tasks from the perspectives of legal professionals and NLP researchers and show several representative applications in LegalAI. We conduct experiments and provide an indepth analysis of the advantages and disadvantages of existing works to explore possible future directions. You can find the implementation of our work from https://github. com/thunlp/CLAIM. 1 Introduction Legal Artificial Intelligence (LegalAI) mainly focuses on applying artificial intelligence technology to help legal tasks. The majority of the resources in this field are presented in text forms, such as judgment documents, contracts, and legal opinions. Therefore, most LegalAI tasks are based on Natural Language Processing (NLP) technologies. LegalAI plays a significant role in the legal domain, as they can reduce heavy and redundant work for legal professionals. Many tasks in the legal domain require the expertise of legal practitioners and a thorough understanding of various legal documents. Retrieving and understanding legal documents take lots of time, even for legal professionals. ∗Corresponding author. Therefore, a qualified system of LegalAI should reduce the time consumption of these tedious jobs and benefit the legal system. Besides, LegalAI can also provide a reliable reference to those who are not familiar with the legal domain, serving as an affordable form of legal aid. In order to promote the development of LegalAI, many researchers have devoted considerable efforts over the past few decades. Early works (Kort, 1957; Ulmer, 1963; Nagel, 1963; Segal, 1984; Gardner, 1984) always use hand-crafted rules or features due to computational limitations at the time. In recent years, with rapid developments in deep learning, researchers begin to apply deep learning techniques to LegalAI. Several new LegalAI datasets have been proposed (Kano et al., 2018; Xiao et al., 2018; Duan et al., 2019; Chalkidis et al., 2019b,a), which can serve as benchmarks for research in the field. Based on these datasets, researchers began exploring NLP-based solutions to a variety of LegalAI tasks, such as Legal Judgment Prediction (Aletras et al., 2016; Luo et al., 2017; Zhong et al., 2018; Chen et al., 2019), Court View Generation (Ye et al., 2018), Legal Entity Recognition and Classification (Cardellino et al., 2017; ANGELIDIS et al., 2018), Legal Question Answering (Monroy et al., 2009; Taniguchi and Kano, 2016; Kim and Goebel, 2017), Legal Summarization (Hachey and Grover, 2006; Bhattacharya et al., 2019). As previously mentioned, researchers’ efforts over the years led to tremendous advances in LegalAI. To summarize, some efforts concentrate on symbol-based methods, which apply interpretable hand-crafted symbols to legal tasks (Ashley, 2017; Surden, 2018). Meanwhile, other efforts with embedding-based methods aim at designing efficient neural models to achieve better performance (Chalkidis and Kampas, 2019). More specifically, symbol-based methods concentrate more on utilizing interpretable legal knowledge to reason 5219 Embedding-based Methods Symbol-based Methods Applications of LegalAI Concept Embedding Pretrained Language Model Relation Extraction Event Timeline Element Detection Judgment Prediction Similar Case Matching Question Answering Text Summarization Concept Knowledge Graph Intentional Harm Arrested Alarm Escape Homicide Someone died? Someone hurt? Hurt by accident? Alice and Bob are married and have a son, David. One day, Bob died unexpectedly…… (Alice, marry with, Bob) (David, son of, Alice) (David, son of, Bob) 9am 12am 8pm 3pm 8pm Common law Common law (also known as judicial precedent or judgemade law) …… Knowledge Guided Data Driven Figure 1: An overview of tasks in LegalAI. between symbols in legal documents, like events and relationships. Meanwhile, embedding-based methods try to learn latent features for prediction from large-scale data. The differences between these two methods have caused some problems in existing works of LegalAI. Interpretable symbolic models are not effective, and embedding-methods with better performance usually cannot be interpreted, which may bring ethical issues to the legal system such as gender bias and racial discrimination. The shortcomings make it difficult to apply existing methods to real-world legal systems. We summarize three primary challenges for both embedding-based and symbol-based methods in LegalAI: (1) Knowledge Modelling. Legal texts are well formalized, and there are many domain knowledge and concepts in LegalAI. How to utilize the legal knowledge is of great significance. (2) Legal Reasoning. Although most tasks in NLP require reasoning, the LegalAI tasks are somehow different, as legal reasoning must strictly follow the rules well-defined in law. Thus combining predefined rules and AI technology is essential to legal reasoning. Besides, complex case scenarios and complex legal provisions may require more sophisticated reasoning for analyzing. (3) Interpretability. Decisions made in LegalAI usually should be interpretable to be applied to the real legal system. Otherwise, fairness may risk being compromised. Interpretability is as important as performance in LegalAI. The main contributions of this work are concluded as follows: (1) We describe existing works from the perspectives of both NLP researchers and legal professionals. Moreover, we illustrate several embedding-based and symbol-based methods and explore the future direction of LegalAI. (2) We describe three typical applications, including judgment prediction, similar case matching, and legal question answering in detail to emphasize why these two kinds of methods are essential to LegalAI. (3) We conduct exhaustive experiments on multiple datasets to explore how to utilize NLP technology and legal knowledge to overcome the challenges in LegalAI. You can find the implementation from github1. (4) We summarize LegalAI datasets, which can be regarded as the benchmark for related tasks. The details of these datasets can be found from github2 with several legal papers worth reading. 2 Embedding-based Methods First, we describe embedding-based methods in LegalAI, also named as representation learning. Embedding-based methods emphasize on representing legal facts and knowledge in embedding space, and they can utilize deep learning methods for corresponding tasks. 2.1 Character, Word, Concept Embeddings Character and word embeddings play a significant role in NLP, as it can embed the discrete texts into 1https://github.com/thunlp/CLAIM 2https://github.com/thunlp/LegalPapers 5220 continuous vector space. Many embedding methods have been proved effective (Mikolov et al., 2013; Joulin et al., 2016; Pennington et al., 2014; Peters et al., 2018; Yang et al., 2014; Bordes et al., 2013; Lin et al., 2015) and they are crucial for the effectiveness of the downstream tasks. In LegalAI, embedding methods are also essential as they can bridge the gap between texts and vectors. However, it seems impossible to learn the meaning of a professional term directly from some legal factual description. Existing works (Chalkidis and Kampas, 2019; Nay, 2016) mainly revolve around applying existing embedding methods like Word2Vec to legal domain corpora. To overcome the difficulty of learning professional vocabulary representations, we can try to capture both grammatical information and legal knowledge in word embedding for corresponding tasks. Knowledge modelling is significant to LegalAI, as many results should be decided according to legal rules and knowledge. Although knowledge graph methods in the legal domain are promising, there are still two major challenges before their practical usage. Firstly, the construction of the knowledge graph in LegalAI is complicated. In most scenarios, there are no ready-made legal knowledge graphs available, so researchers need to build from scratch. In addition, different legal concepts have different representations and meanings under legal systems in different countries, which also makes it challenging to construct a general legal knowledge graph. Some researchers tried to embed legal dictionaries (Cvrˇcek et al., 2012), which can be regarded as an alternative method. Secondly, a generalized legal knowledge graph is different in the form with those commonly used in NLP. Existing knowledge graphs concern the relationship between entities and concepts, but LegalAI focuses more on the explanation of legal concepts. These two challenges make knowledge modelling via embedding in LegalAI non-trivial, and researchers can try to overcome the challenges in the future. 2.2 Pretrained Language Models Pretrained language models (PLMs) such as BERT (Devlin et al., 2019) have been the recent focus in many fields in NLP (Radford et al., 2019; Yang et al., 2019; Liu et al., 2019a). Given the success of PLM, using PLM in LegalAI is also a very reasonable and direct choice. However, there are differences between the text used by existing PLMs and legal text, which also lead to unsatisfactory performances when directly applying PLMs to legal tasks. The differences stem from the terminology and knowledge involved in legal texts. To address this issue, Zhong et al. (2019b) propose a language model pretrained on Chinese legal documents, including civil and criminal case documents. Legal domain-specific PLMs provide a more qualified baseline system for the tasks of LegalAI. We will show several experiments comparing different BERT models in LegalAI tasks. For the future exploration of PLMs in LegalAI, researchers can aim more at integrating knowledge into PLMs. Integrating knowledge into pretrained models can help the reasoning ability between legal concepts. Lots of work has been done on integrating knowledge from the general domain into models (Zhang et al., 2019; Peters et al., 2019; Hayashi et al., 2019). Such technology can also be considered for future application in LegalAI. 3 Symbol-based Methods In this section, we describe symbol-based methods, also named as structured prediction methods. Symbol-based methods are involved in utilizing legal domain symbols and knowledge for the tasks of LegalAI. The symbolic legal knowledge, such as events and relationships, can provide interpretability. Deep learning methods can be employed for symbol-based methods for better performance. 3.1 Information Extraction Information extraction (IE) has been widely studied in NLP. IE emphasizes on extracting valuable information from texts, and there are many NLP works which concentrate on IE, including name entity recognition (Lample et al., 2016; Kuru et al., 2016; Akbik et al., 2019), relation extraction (Zeng et al., 2015; Miwa and Bansal, 2016; Lin et al., 2016; Christopoulou et al., 2018), and event extraction (Chen et al., 2015; Nguyen et al., 2016; Nguyen and Grishman, 2018). IE in LegalAI has also attracted the interests of many researchers. To make better use of the particularity of legal texts, researchers try to use ontology (Bruckschen et al., 2010; Cardellino et al., 2017; Lenci et al., 2009; Zhang et al., 2017) or global consistency (Yin et al., 2018) for named entity recognition in LegalAI. To extract relationship and events from legal documents, re5221 searchers attempt to apply different NLP technologies, including hand-crafted rules (Bartolini et al., 2004; Truyens and Eecke, 2014), CRF (Vacek and Schilder, 2017), joint models like SVM, CNN, GRU (Vacek et al., 2019), or scale-free identifier network (Yan et al., 2017) for promising results. Existing works have made lots of efforts to improve the effect of IE, but we need to pay more attention to the benefits of the extracted information. The extracted symbols have a legal basis and can provide interpretability to legal applications, so we cannot just aim at the performance of methods. Here, we show two examples of utilizing the extracted symbols for interpretability of LegalAI: Relation Extraction and Inheritance Dispute. Inheritance dispute is a type of cases in Civil Law that focuses on the distribution of inheritance rights. Therefore, identifying the relationship between the parties is vital, as those who have the closest relationship with the deceased can get more assets. Towards this goal, relation extraction in inheritance dispute cases can provide the reason for judgment results and improve performance. Event Timeline Extraction and Judgment Prediction of Criminal Case. In criminal cases, multiple parties are often involved in group crimes. To decide who should be primarily responsible for the crime, we need to determine what everyone has done throughout the case, and the order of these events is also essential. For example, in the case of crowd fighting, the person who fights first should bear the primary responsibility. As a result, a qualified event timeline extraction model is required for judgment prediction of criminal cases. In future research, we need to concern more about applying extracted information to the tasks of LegalAI. The utilization of such information depends on the requirements of specific tasks, and the information can provide more interpretability. 3.2 Legal Element Extraction In addition to those common symbols in general NLP, LegalAI also has its exclusive symbols, named legal elements. The extraction of legal elements focuses on extracting crucial elements like whether someone is killed or something is stolen. These elements are called constitutive elements of crime, and we can directly convict offenders based on the results of these elements. Utilizing these elements can not only bring intermediate supervision information to the judgment prediction task but also make the prediction results of the model more interpretable. Fact Description: One day, Bob used a fake reason for marriage decoration to borrow RMB 2k from Alice. After arrested, Bob has paid the money back to Alice. Whether did Bob sell something? × Whether did Bob make a fictional fact? ✓ Whether did Bob illegally possess the property of others? ✓ Judgment Results: Fraud. Table 1: An example of element detection from Zhong et al. (2020). From this example, we can see that the extracted elements can decide the judgment results. It shows that elements are useful for downstream tasks. Towards a more in-depth analysis of elementbased symbols, Shu et al. (2019) propose a dataset for extracting elements from three different kinds of cases, including divorce dispute, labor dispute, and loan dispute. The dataset requires us to detect whether the related elements are satisfied or not, and formalize the task as a multi-label classification problem. To show the performance of existing methods on element extraction, we have conducted experiments on the dataset, and the results can be found in Table 2. Divorce Labor Loan Model MiF MaF MiF MaF MiF MaF TextCNN 78.7 65.9 76.4 54.4 80.3 60.6 DPCNN 81.3 64.0 79.8 47.4 81.4 42.5 LSTM 80.6 67.3 81.0 52.9 80.4 53.1 BiDAF 83.1 68.7 81.5 59.4 80.5 63.1 BERT 83.3 69.6 76.8 43.7 78.6 39.5 BERT-MS 84.9 72.7 79.7 54.5 81.9 64.1 Table 2: Experimental results on extracting elements. Here MiF and MaF denotes micro-F1 and macro-F1. We have implemented several classical encoding models in NLP for element extraction, including TextCNN (Kim, 2014), DPCNN (Johnson and Zhang, 2017), LSTM (Hochreiter and Schmidhuber, 1997), BiDAF (Seo et al., 2016), and BERT (Devlin et al., 2019). We have tried two different versions of pretrained parameters of BERT, including the origin parameters (BERT) and the parameters pretrained on Chinese legal documents (BERT-MS) (Zhong et al., 2019b). From the results, we can see that the language model pretrained on the general domain performs worse 5222 than domain-specific PLM, which proves the necessity of PLM in LegalAI. For the following parts of our paper, we will use BERT pretrained on legal documents for better performance. From the results of element extraction, we can find that existing methods can reach a promising performance on element extraction, but are still not sufficient for corresponding applications. These elements can be regarded as pre-defined legal knowledge and help with downstream tasks. How to improve the performance of element extraction is valuable for further research. 4 Applications of LegalAI In this section, we will describe several typical applications in LegalAI, including Legal Judgment Prediction, Similar Case Matching and Legal Question Answering. Legal Judgment Prediction and Similar Case Matching can be regarded as the core function of judgment in Civil Law and Common Law system, while Legal Question Answering can provide consultancy for those who are unfamiliar with the legal domain. Therefore, exploring these three tasks can cover most aspects of LegalAI. 4.1 Legal Judgment Prediction Legal Judgment Prediction (LJP) is one of the most critical tasks in LegalAI, especially in the Civil Law system. In the Civil Law system, the judgment results are decided according to the facts and the statutory articles. One will receive legal sanctions only after he or she has violated the prohibited acts prescribed by law. The task LJP mainly concerns how to predict the judgment results from both the fact description of a case and the contents of the statutory articles in the Civil Law system. As a result, LJP is an essential and representative task in countries with Civil Law system like France, Germany, Japan, and China. Besides, LJP has drawn lots of attention from both artificial intelligence researchers and legal professionals. In the following parts, we describe the research progress and explore the future direction of LJP. Related Work LJP has a long history. Early works revolve around analyzing existing legal cases in specific circumstances using mathematical or statistical methods (Kort, 1957; Ulmer, 1963; Nagel, 1963; Keown, 1980; Segal, 1984; Lauderdale and Clark, 2012). The combination of mathematical methods and legal rules makes the predicted results interpretable. Fact Description: One day, the defendant Bob stole cash 8500 yuan and T-shirts, jackets, pants, shoes, hats (identified a total value of 574.2 yuan) in Beijing Lining store. Judgment Results Relevant Articles Article 264 of Criminal Law. Applicable Charges Theft. Term of Penalty 6 months. Table 3: An example of legal judgment prediction from Zhong et al. (2018). In this example, the judgment results include relevant articles, applicable charges and the the term of penalty. To promote the progress of LJP, Xiao et al. (2018) have proposed a large-scale Chinese criminal judgment prediction dataset, C-LJP. The dataset contains over 2.68 million legal documents published by the Chinese government, making C-LJP a qualified benchmark for LJP. C-LJP contains three subtasks, including relevant articles, applicable charges, and the term of penalty. The first two can be formalized as multi-label classification tasks, while the last one is a regression task. Besides, English LJP datasets also exist (Chalkidis et al., 2019a), but the size is limited. With the development of the neural network, many researchers begin to explore LJP using deep learning technology (Hu et al., 2018; Wang et al., 2019; Li et al., 2019b; Liu et al., 2019b; Li et al., 2019a; Kang et al., 2019). These works can be divided into two primary directions. The first one is to use more novel models to improve performance. Chen et al. (2019) use the gating mechanism to enhance the performance of predicting the term of penalty. Pan et al. (2019) propose multi-scale attention to handle the cases with multiple defendants. Besides, other researchers explore how to utilize legal knowledge or the properties of LJP. Luo et al. (2017) use the attention mechanism between facts and law articles to help the prediction of applicable charges. Zhong et al. (2018) present a topological graph to utilize the relationship between different tasks of LJP. Besides, Hu et al. (2018) incorporate ten discriminative legal attributes to help predict low-frequency charges. Experiments and Analysis To better understand recent advances in LJP, we have conducted a series of experiments on CLJP. Firstly, we implement several classical text classification models, including TextCNN (Kim, 2014), DPCNN (Johnson and Zhang, 2017), 5223 Dev Test Task Charge Article Term Charge Article Term Metrics MiF MaF MiF MaF Dis MiF MaF MiF MaF Dis TextCNN 93.8 74.6 92.8 70.5 1.586 93.9 72.2 93.5 67.0 1.539 DPCNN 94.7 72.2 93.9 68.8 1.448 94.9 72.1 94.6 69.4 1.390 LSTM 94.7 71.2 93.9 66.5 1.456 94.3 66.0 94.7 70.7 1.467 BERT 94.5 66.3 93.5 64.7 1.421 94.7 71.3 94.3 66.9 1.342 FactLaw 79.5 25.4 79.8 24.9 1.721 76.9 35.0 78.1 30.8 1.683 TopJudge 94.8 76.3 94.0 69.6 1.438 97.6 76.8 96.9 70.9 1.335 Gating Network 1.604 1.553 Table 4: Experimental results of judgment prediction on C-LJP. In this table, MiF and MaF denotes micro-F1 and macro-F1, and Dis denotes the log distance between prediction and ground truth. LSTM (Hochreiter and Schmidhuber, 1997), and BERT (Devlin et al., 2019). For the parameters of BERT, we use the pretrained parameters on Chinese criminal cases (Zhong et al., 2019b). Secondly, we implement several models which are specially designed for LJP, including FactLaw (Luo et al., 2017), TopJudge (Zhong et al., 2018), and Gating Network (Chen et al., 2019). The results can be found in Table 4. From the results, we can learn that most models can reach a promising performance in predicting high-frequency charges or articles. However, the models perform not well on low-frequency labels as there is a gap between micro-F1 and macro-F1. Hu et al. (2018) have explored few-shot learning for LJP. However, their model requires additional attribute information labelled manually, which is time-consuming and makes it hard to employ the model in other datasets. Besides, we can find that performance of BERT is not satisfactory, as it does not make much improvement from those models with fewer parameters. The main reason is that the length of the legal text is very long, but the maximum length that BERT can handle is 512. According to statistics, the maximum document length is 56, 694, and the length of 15% documents is over 512. Document understanding and reasoning techniques are required for LJP. Although embedding-based methods can achieve promising performance, we still need to consider combining symbol-based with embedding-based methods in LJP. Take TopJudge as an example, this model formalizes topological order between the tasks in LJP (symbol-based part) and uses TextCNN for encoding the fact description. By combining symbol-based and embedding-based methods, TopJudge has achieved promising results on LJP. Comparing the results between TextCNN and TopJudge, we can find that just integrating the order of judgments into the model can lead to improvements, which proves the necessity of combining embedding-based and symbol-based methods. For better LJP performance, some challenges require the future efforts of researchers: (1) Document understanding and reasoning techniques are required to obtain global information from extremely long legal texts. (2) Few-shot learning. Even low-frequency charges should not be ignored as they are part of legal integrity. Therefore, handling in-frequent labels is essential to LJP. (3) Interpretability. If we want to apply methods to real legal systems, we must understand how they make predictions. However, existing embedding-based methods work as a black box. What factors affected their predictions remain unknown, and this may introduce unfairness and ethical issues like gender bias to the legal systems. Introducing legal symbols and knowledge mentioned before will benefit the interpretability of LJP. 4.2 Similar Case Matching In those countries with the Common Law system like the United States, Canada, and India, judicial decisions are made according to similar and representative cases in the past. As a result, how to identify the most similar case is the primary concern in the judgment of the Common Law system. In order to better predict the judgment results in the Common Law system, Similar Case Matching (SCM) has become an essential topic of LegalAI. SCM concentrate on finding pairs of similar cases, and the definition of similarity can be various. SCM requires to model the relationship between cases from the information of different granularity, like fact level, event level and element level. In 5224 other words, SCM is a particular form of semantic matching (Xiao et al., 2019), which can benefit the legal information retrieval. Related Work Traditional methods of Information Retrieve (IR) focus on term-level similarities with statistical models, including TF-IDF (Salton and Buckley, 1988) and BM25 (Robertson and Walker, 1994), which are widely applied in current search systems. In addition to these term matching methods, other researchers try to utilize meta-information (Medin, 2000; Gao et al., 2011; Wu et al., 2013) to capture semantic similarity. Many machine learning methods have also been applied for IR like SVD (Xu et al., 2010) or factorization (Rendle, 2010; Kabbur et al., 2013). With the rapid development of deep learning technology and NLP, many researchers apply neural models, including multi-layer perceptron (Huang et al., 2013), CNN (Shen et al., 2014; Hu et al., 2014; Qiu and Huang, 2015), and RNN (Palangi et al., 2016) to IR. There are several LegalIR datasets, including COLIEE (Kano et al., 2018), CaseLaw (Locke and Zuccon, 2018), and CM (Xiao et al., 2019). Both COLIEE and CaseLaw are involved in retrieving most relevant articles from a large corpus, while data examples in CM give three legal documents for calculating similarity. These datasets provide benchmarks for the studies of LegalIR. Many researchers focus on building an easy-to-use legal search engine (Barmakian, 2000; Turtle, 1995). They also explore utilizing more information, including citations (Monroy et al., 2013; Geist, 2009; Raghav et al., 2016) and legal concepts (Maxwell and Schafer, 2008; Van Opijnen and Santos, 2017). Towards the goal of calculating similarity in semantic level, deep learning methods have also been applied to LegalIR. Tran et al. (2019) propose a CNN-based model with document and sentence level pooling which achieves the state-of-the-art results on COLIEE, while other researchers explore employing better embedding methods for LegalIR (Landthaler et al., 2016; Sugathadasa et al., 2018). Experiments and Analysis To get a better view of the current progress of LegalIR, we select CM (Xiao et al., 2019) for experiments. CM contains 8, 964 triples where each triple contains three legal documents (A, B, C). The task designed in CM is to determine whether B or C is more similar to A. We have implemented four different types of baselines: (1) Term matching methods, TF-IDF (Salton and Buckley, 1988). (2) Siamese Network with two parametershared encoders, including TextCNN (Kim, 2014), BiDAF (Seo et al., 2016) and BERT (Devlin et al., 2019), and a distance function. (3) Semantic matching models in sentence level, ABCNN (Yin et al., 2016), and document level, SMASH-RNN (Jiang et al., 2019). The results can be found in Table 5. Model Dev Test TF-IDF 52.9 53.3 TextCNN 62.5 69.9 BiDAF 63.3 68.6 BERT 64.3 66.8 ABCNN 62.7 69.9 SMASH RNN 64.2 65.8 Table 5: Experimental results of SCM. The evaluation metric is accuracy. From the results, we observe that existing neural models which are capable of capturing semantic information outperform TF-IDF, but the performance is still not enough for SCM. As Xiao et al. (2019) state, the main reason is that legal professionals think that elements in this dataset define the similarity of legal cases. Legal professionals will emphasize on whether two cases have similar elements. Only considering term-level and semantic-level similarity is insufficient for the task. For the further study of SCM, there are two directions which need future effort: (1) Elementalbased representation. Researchers can focus more on symbols of legal documents, as the similarity of legal cases is related to these symbols like elements. (2) Knowledge incorporation. As semantic-level matching is insufficient for SCM, we need to consider about incorporating legal knowledge into models to improve the performance and provide interpretability. 4.3 Legal Question-Answering Another typical application of LegalAI is Legal Question Answering (LQA) which aims at answering questions in the legal domain. One of the most important parts of legal professionals’ work is to provide reliable and high-quality legal consulting services for non-professionals. However, due to the insufficient number of legal professionals, it is often challenging to ensure that non-professionals 5225 KD-Questions CA-Questions All Single All Single All Single All Unskilled Humans 76.9 71.1 62.5 58.0 70.0 64.2 Skilled Humans 80.6 77.5 86.8 84.7 84.1 81.1 BiDAF 36.7 20.6 37.2 22.2 38.3 22.0 BERT 38.0 21.2 38.9 23.7 39.7 22.3 Co-matching 35.8 20.2 35.8 20.3 38.1 21.2 HAF 36.6 21.4 42.5 19.8 42.6 21.2 Table 6: Experimental results of JEC-QA. The evaluation metrics is accuracy. The performance of unskilled and skilled humans is collected from original paper. Question: Which crimes did Alice and Bob commit if they transported more than 1.5 million yuan of counterfeit currency from abroad to China? Direct Evidence P1: Transportation of counterfeit money: · · · The defendants are sentenced to three years in prison. P2: Smuggling counterfeit money: · · · The defendants are sentenced to seven years in prison. Extra Evidence P3: Motivational concurrence: The criminals carry out one behavior but commit several crimes. P4: For motivational concurrence, the criminals should be convicted according to the more serious crime. Comparison: seven years > three years Answer: Smuggling counterfeit money. Table 7: An example of LQA from Zhong et al. (2019a). In this example, direct evidence and extra evidence are both required for answering the question. The hard reasoning steps prove the difficulty of legal question answering. can get enough and high-quality consulting services, and LQA is expected to address this issue. In LQA, the form of questions varies as some questions will emphasize on the explanation of some legal concepts, while others may concern the analysis of specific cases. Besides, questions can also be expressed very differently between professionals and non-professionals, especially when describing domain-specific terms. These problems bring considerable challenges to LQA, and we conduct experiments to demonstrate the difficulties of LQA better in the following parts. Related Work In LegalAI, there are many datasets of question answering. Duan et al. (2019) propose CJRC, a legal reading comprehension dataset with the same format as SQUAD 2.0 (Rajpurkar et al., 2018), which includes span extraction, yes/no questions, and unanswerable questions. Besides, COLIEE (Kano et al., 2018) contains about 500 yes/no questions. Moreover, the bar exam is a professional qualification examination for lawyers, so bar exam datasets (Fawei et al., 2016; Zhong et al., 2019a) may be quite hard as they require professional legal knowledge and skills. In addition to these datasets, researchers have also worked on lots of methods on LQA. The rulebased systems (Buscaldi et al., 2010; Kim et al., 2013; Kim and Goebel, 2017) are prevalent in early research. In order to reach better performance, researchers utilize more information like the explanation of concepts (Taniguchi and Kano, 2016; Fawei et al., 2015) or formalize relevant documents as graphs to help reasoning (Monroy et al., 2009, 2008; Tran et al., 2013). Machine learning and deep learning methods like CRF (Bach et al., 2017), SVM (Do et al., 2017), and CNN (Kim et al., 2015) have also been applied to LQA. However, most existing methods conduct experiments on small datasets, which makes them not necessarily applicable to massive datasets and real scenarios. Experiments and Analysis We select JEC-QA (Zhong et al., 2019a) as the dataset of the experiments, as it is the largest dataset collected from the bar exam, which guarantees its difficulty. JEC-QA contains 28, 641 multiple-choice and multiple-answer questions, together with 79, 433 relevant articles to help to answer the questions. JEC-QA classifies questions into knowledge-driven questions (KD-Questions) and case-analysis questions (CA-Questions) and reports the performances of humans. We implemented several representative question answering models, including BiDAF (Seo et al., 2016), BERT (Devlin et al., 2019), Co-matching (Wang et al., 2018), and HAF (Zhu et al., 2018). The experimental results can be found in Table 6. From the experimental results, we can learn the 5226 models cannot answer the legal questions well compared with their promising results in open-domain question answering and there is still a huge gap between existing models and humans in LQA. For more qualified LQA methods, there are several significant difficulties to overcome: (1) Legal multi-hop reasoning. As Zhong et al. (2019a) state, existing models can perform inference but not multi-hop reasoning. However, legal cases are very complicated, which cannot be handled by singlestep reasoning. (2) Legal concepts understanding. We can find that almost all models are better at case analyzing than knowledge understanding, which proves that knowledge modelling is still challenging for existing methods. How to model legal knowledge to LQA is essential as legal knowledge is the foundation of LQA. 5 Conclusion In this paper, we describe the development status of various LegalAI tasks and discuss what we can do in the future. In addition to these applications and tasks we have mentioned, there are many other tasks in LegalAI like legal text summarization and information extraction from legal contracts. Nevertheless, no matter what kind application is, we can apply embedding-based methods for better performance, together with symbol-based methods for more interpretability. Besides, the three main challenges of legal tasks remain to be solved. Knowledge modelling, legal reasoning, and interpretability are the foundations on which LegalAI can reliably serve the legal domain. Some existing methods are trying to solve these problems, but there is still a long way for researchers to go. In the future, for these existing tasks, researchers can focus on solving the three most pressing challenges of LegalAI combining embedding-based and symbol-based methods. For tasks that do not yet have a dataset or the datasets are not large enough, we can try to build a large-scale and highquality dataset or use few-shot or zero-shot methods to solve these problems. Furthermore, we need to take the ethical issues of LegalAI seriously. Applying the technology of LegalAI directly to the legal system will bring ethical issues like gender bias and racial discrimination. The results given by these methods cannot convince people. To address this issue, we must note that the goal of LegalAI is not replacing the legal professionals but helping their work. As a result, we should regard the results of the models only as a reference. Otherwise, the legal system will no longer be reliable. For example, professionals can spend more time on complex cases and leave the simple cases for the model. However, for safety, these simple cases must still be reviewed. In general, LegalAI should play as a supporting role to help the legal system. Acknowledgements This work is supported by the National Key Research and Development Program of China (No. 2018YFC0831900) and the National Natural Science Foundation of China (NSFC No. 61772302, 61532010). Besides, the dataset of element extraction is provided by Gridsum. References Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In Proceedings of NAACL. Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preotiuc-Pietro, and Vasileios Lampos. 2016. Predicting judicial decisions of the european court of human rights: A natural language processing perspective. PeerJ Computer Science, 2. Iosif ANGELIDIS, Ilias CHALKIDIS, and Manolis KOUBARAKIS. 2018. Named entity recognition, linking and generation for greek legislation. Kevin D Ashley. 2017. Artificial intelligence and legal analytics: new tools for law practice in the digital age. Cambridge University Press. Ngo Xuan Bach, Tran Ha Ngoc Thien, Tu Minh Phuong, et al. 2017. Question analysis for vietnamese legal question answering. In Proceedings of KSE. IEEE. Deanna Barmakian. 2000. Better search engines for law. Law Libr. J., 92. Roberto Bartolini, Alessandro Lenci, Simonetta Montemagni, Vito Pirrelli, and Claudia Soria. 2004. Semantic mark-up of Italian legal texts through NLPbased techniques. In Proceedings of LREC. Paheli Bhattacharya, Kaustubh Hiware, Subham Rajgaria, Nilay Pochhi, Kripabandhu Ghosh, and Saptarshi Ghosh. 2019. A comparative study of summarization algorithms applied to legal case judgments. In Proceedings of ECIR. Springer. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 5227 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems, pages 2787–2795. M´ırian Bruckschen, Caio Northfleet, Paulo Bridi, Roger Granada, Renata Vieira, Prasad Rao, and Tomas Sander. 2010. Named entity recognition in the legal domain for ontology population. In Workshop Programme, page 16. Citeseer. Davide Buscaldi, Paolo Rosso, Jos´e Manuel G´omezSoriano, and Emilio Sanchis. 2010. Answering questions with an n-gram based passage retrieval engine. Journal of Intelligent Information Systems, 34(2):113–134. Cristian Cardellino, Milagro Teruel, Laura Alonso Alemany, and Serena Villata. 2017. Legal NERC with ontologies, Wikipedia and curriculum learning. In Proceedings of EACL. Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019a. Neural legal judgment prediction in English. In Proceedings of ACL. Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2019b. Large-scale multi-label text classification on EU legislation. In Proceedings of ACL. Ilias Chalkidis and Dimitrios Kampas. 2019. Deep learning in law: early adaptation and legal word embeddings trained on large corpora. Artificial Intelligence and Law, 27(2):171–198. Huajie Chen, Deng Cai, Wei Dai, Zehui Dai, and Yadong Ding. 2019. Charge-based prison term prediction with deep gating network. In Proceedings of EMNLP-IJCNLP, pages 6363–6368. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In Proceedings of ACL. Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2018. A walk-based model on entity graphs for relation extraction. In Proceedings of ACL, pages 81–88. Frantiˇsek Cvrˇcek, Karel Pala, and Pavel Rychl´y. 2012. Legal electronic dictionary for Czech. In Proceedings of LREC. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL. Phong-Khac Do, Huy-Tien Nguyen, Chien-Xuan Tran, Minh-Tien Nguyen, and Minh-Le Nguyen. 2017. Legal question answering using ranking svm and deep convolutional neural network. arXiv preprint arXiv:1703.05320. Xingyi Duan, Baoxin Wang, Ziyue Wang, Wentao Ma, Yiming Cui, Dayong Wu, Shijin Wang, Ting Liu, Tianxiang Huo, Zhen Hu, et al. 2019. Cjrc: A reliable human-annotated benchmark dataset for chinese judicial reading comprehension. In Proceedings of CCL. Springer. Biralatei Fawei, Adam Wyner, and Jeff Pan. 2016. Passing a USA national bar exam: a first corpus for experimentation. In Proceedings of LREC. Biralatei Fawei, Adam Wyner, Jeff Z Pan, and Martin Kollingbaum. 2015. Using legal ontologies with rules for legal textual entailment. In AI Approaches to the Complexity of Legal Systems, pages 317–324. Springer. Jianfeng Gao, Kristina Toutanova, and Wen-tau Yih. 2011. Clickthrough-based latent semantic models for web search. In Proceedings of SIGIR. ACM. Anne von der Lieth Gardner. 1984. An artificial intelligence approach to legal reasoning. Anton Geist. 2009. Using citation analysis techniques for computer-assisted legal research in continental jurisdictions. Available at SSRN 1397674. Ben Hachey and Claire Grover. 2006. Extractive summarisation of legal texts. Artificial Intelligence and Law, 14(4):305–345. Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, and Graham Neubig. 2019. Latent relation language models. arXiv preprint arXiv:1908.07690. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8). Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Proceedings of NIPS. Zikun Hu, Xiang Li, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Few-shot charge prediction with discriminative legal attributes. In Proceedings of COLING. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of CIKM. ACM. Jyun-Yu Jiang, Mingyang Zhang, Cheng Li, Michael Bendersky, Nadav Golbandi, and Marc Najork. 2019. Semantic text matching for long-form documents. In Proceedings of WWW. ACM. Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text categorization. In Proceedings of ACL. Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H´erve J´egou, and Tomas Mikolov. 2016. Fasttext. zip: Compressing text classification models. arXiv preprint arXiv:1612.03651. 5228 Santosh Kabbur, Xia Ning, and George Karypis. 2013. Fism: factored item similarity models for top-n recommender systems. In Proceedings of SIGKDD. ACM. Liangyi Kang, Jie Liu, Lingqiao Liu, Qinfeng Shi, and Dan Ye. 2019. Creating auxiliary representations from charge definitions for criminal charge prediction. arXiv preprint arXiv:1911.05202. Yoshinobu Kano, Mi-Young Kim, Masaharu Yoshioka, Yao Lu, Juliano Rabelo, Naoki Kiyota, Randy Goebel, and Ken Satoh. 2018. Coliee-2018: Evaluation of the competition on legal information extraction and entailment. In Proceedings of JSAI, pages 177–192. Springer. R Keown. 1980. Mathematical models for legal prediction. Computer/LJ, 2:829. Mi-Young Kim and Randy Goebel. 2017. Two-step cascaded textual entailment for legal bar exam question answering. In Proceedings of Articial Intelligence and Law. ACM. Mi-Young Kim, Ying Xu, and Randy Goebel. 2015. A convolutional neural network in legal question answering. Mi-Young Kim, Ying Xu, Randy Goebel, and Ken Satoh. 2013. Answering yes/no questions in legal bar exams. In Proceedings of JSAI, pages 199–213. Springer. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP. Fred Kort. 1957. Predicting supreme court decisions mathematically: A quantitative analysis of the ”right to counsel” cases. American Political Science Review, 51(1):1–12. Onur Kuru, Ozan Arkan Can, and Deniz Yuret. 2016. CharNER: Character-level named entity recognition. In Proceedings of COLING. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL. J¨org Landthaler, Bernhard Waltl, Patrick Holl, and Florian Matthes. 2016. Extending full text search for legal document collections using word embeddings. In JURIX, pages 73–82. Benjamin E Lauderdale and Tom S Clark. 2012. The supreme court’s many median justices. American Political Science Review, 106(4):847–866. Alessandro Lenci, Simonetta Montemagni, Vito Pirrelli, and Giulia Venturi. 2009. Ontology learning from italian legal texts. Law, Ontologies and the Semantic Web, 188:75–94. Shang Li, Hongli Zhang, Lin Ye, Xiaoding Guo, and Binxing Fang. 2019a. Mann: A multichannel attentive neural network for legal judgment prediction. IEEE Access. Yu Li, Tieke He, Ge Yan, Shu Zhang, and Hui Wang. 2019b. Using case facts to predict penalty with deep learning. In International Conference of Pioneering Computer Scientists, Engineers and Educators, pages 610–617. Springer. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of AAAI. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Zhiyuan Liu, Cunchao Tu, and Maosong Sun. 2019b. Legal cause prediction with inner descriptions and outer hierarchies. In Proceedings of CCL, pages 573–586. Springer. Daniel Locke and Guido Zuccon. 2018. A test collection for evaluating legal case law search. In Proceedings of SIGIR. ACM. Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, and Dongyan Zhao. 2017. Learning to predict charges for criminal cases with legal basis. In Proceedings of EMNLP. K Tamsin Maxwell and Burkhard Schafer. 2008. Concept and context in legal information retrieval. In Proceedings of JURIX. Douglas L Medin. 2000. Psychology of learning and motivation: advances in research and theory. Elsevier. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of ACL, pages 1105– 1116. Alfredo Monroy, Hiram Calvo, and Alexander Gelbukh. 2008. Using graphs for shallow question answering on legal documents. In Mexican International Conference on Artificial Intelligence. Springer. 5229 Alfredo Monroy, Hiram Calvo, and Alexander Gelbukh. 2009. Nlp for shallow question answering of legal documents using graphs. In Proceedings of CICLing. Springer. Alfredo L´opez Monroy, Hiram Calvo, Alexander Gelbukh, and Georgina Garc´ıa Pacheco. 2013. Link analysis for representing and retrieving legal information. In Proceedings of CICLing, pages 380–393. Springer. Stuart S Nagel. 1963. Applying correlation analysis to case prediction. Texas Law Review, 42:1006. John J. Nay. 2016. Gov2Vec: Learning distributed representations of institutions and their legal text. In Proceedings of the First Workshop on NLP and Computational Social Science. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of NAACL. Thien Huu Nguyen and Ralph Grishman. 2018. Graph convolutional networks with argument-aware pooling for event detection. In Proceedings of AAAI. Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. 2016. Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 24(4). Sicheng Pan, Tun Lu, Ning Gu, Huajuan Zhang, and Chunlin Xu. 2019. Charge prediction for multidefendant cases with multi-scale attention. In CCF Conference on Computer Supported Cooperative Work and Social Computing. Springer. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532– 1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Matthew E Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of EMNLP-IJCNLP. Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for communitybased question answering. In Proceedings of IJCAI. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). K Raghav, P Krishna Reddy, and V Balakista Reddy. 2016. Analyzing the extraction of relevant legal judgments using paragraph-level and citation information. AI4JCArtificial Intelligence for Justice, page 30. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of ACL. Steffen Rendle. 2010. Factorization machines. In Proceedings of ICDM. IEEE. Stephen E Robertson and Steve Walker. 1994. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. In Proceedings of SIGIR. Gerard Salton and Christopher Buckley. 1988. Termweighting approaches in automatic text retrieval. Information processing & management. Jeffrey A Segal. 1984. Predicting supreme court cases probabilistically: The search and seizure cases, 1962-1981. American Political Science Review, 78(4):891–900. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr´egoire Mesnil. 2014. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of CIKM. ACM. Yi Shu, Yao Zhao, Xianghui Zeng, and Qingli Ma. 2019. Cail2019-fe. Technical report, Gridsum. Keet Sugathadasa, Buddhi Ayesha, Nisansa de Silva, Amal Shehan Perera, Vindula Jayawardana, Dimuthu Lakmal, and Madhavi Perera. 2018. Legal document retrieval using document vector embeddings and deep learning. In Proceedings of SAI. Springer. Harry Surden. 2018. Artificial intelligence and law: An overview. Ga. St. UL Rev. Ryosuke Taniguchi and Yoshinobu Kano. 2016. Legal yes/no question answering system using case-role analysis. In Proceedings of JSAI, pages 284–298. Springer. Oanh Thi Tran, Bach Xuan Ngo, Minh Le Nguyen, and Akira Shimazu. 2013. Answering legal questions by mining reference information. In Proceedings of JSAI. Springer. Vu Tran, Minh Le Nguyen, and Ken Satoh. 2019. Building legal case retrieval systems with lexical matching and summarization using a pre-trained phrase scoring model. In Proceedings of Artificial Intelligence and Law. ACM. 5230 Maarten Truyens and Patrick Van Eecke. 2014. Legal aspects of text mining. In Proceedings of LREC. Howard Turtle. 1995. Text retrieval in the legal world. Artificial Intelligence and Law, 3(1-2). S Sidney Ulmer. 1963. Quantitative analysis of judicial processes: Some practical and theoretical applications. Law and Contemporary Problems, 28:164. Thomas Vacek, Ronald Teo, Dezhao Song, Timothy Nugent, Conner Cowling, and Frank Schilder. 2019. Litigation analytics: Case outcomes extracted from US federal court dockets. In Proceedings of NLLP Workshop. Tom Vacek and Frank Schilder. 2017. A sequence approach to case outcome detection. In Proceedings of Articial Intelligence and Law, pages 209– 215. ACM. Marc Van Opijnen and Cristiana Santos. 2017. On the concept of relevance in legal information retrieval. Artificial Intelligence and Law, 25(1). Hui Wang, Tieke He, Zhipeng Zou, Siyuan Shen, and Yu Li. 2019. Using case facts to predict accusation based on deep learning. In Proceedings of QRS-C, pages 133–137. IEEE. Shuohang Wang, Mo Yu, Jing Jiang, and Shiyu Chang. 2018. A co-matching model for multi-choice reading comprehension. In Proceedings of ACL. Wei Wu, Hang Li, and Jun Xu. 2013. Learning query and document similarities from click-through bipartite graph with metadata. In Proceedings of WSDM. ACM. Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, et al. 2018. Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478. Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Tianyang Zhang, Xianpei Han, Heng Wang, Jianfeng Xu, et al. 2019. Cail2019-scm: A dataset of similar case matching in legal domain. arXiv preprint arXiv:1911.08962. Jun Xu, Hang Li, and Chaoliang Zhong. 2010. Relevance ranking using kernels. In Proceedings of AIRS. Springer. Yukun Yan, Daqi Zheng, Zhengdong Lu, and Sen Song. 2017. Event identification as a decision process with non-linear representation of text. arXiv preprint arXiv:1710.00969. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Hai Ye, Xin Jiang, Zhunchen Luo, and Wenhan Chao. 2018. Interpretable charge predictions for criminal cases: Learning to generate court views from fact descriptions. In Proceedings of NAACL. Wenpeng Yin, Hinrich Sch¨utze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics. Xiaoxiao Yin, Daqi Zheng, Zhengdong Lu, and Ruifang Liu. 2018. Neural entity reasoner for global consistency in ner. arXiv preprint arXiv:1810.00347. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of EMNLP. Ni Zhang, Yi-Fei Pu, Sui-Quan Yang, Ji-Liu Zhou, and Jin-Kang Gao. 2017. An ontological chinese legal consultation system. IEEE Access, 5:18250–18261. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of ACL. Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Chaojun Xiao, Zhiyuan Liu, and Maosong Sun. 2018. Legal judgment prediction via topological learning. In Proceedings of EMNLP. Haoxi Zhong, Yuzhong Wang, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. Iteratively questioning and answering for interpretable legal judgment prediction. In Proceedings of AAAI. Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2019a. Jec-qa: A legal-domain question answering dataset. arXiv preprint arXiv:1911.12011. Haoxi Zhong, Zhengyan Zhang, Zhiyuan Liu, and Maosong Sun. 2019b. Open chinese language pretrained model zoo. Technical report, Technical Report. Technical Report. Haichao Zhu, Furu Wei, Bing Qin, and Ting Liu. 2018. Hierarchical attention flow for multiple-choice reading comprehension. In Proceedings of AAAI.
2020
466
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5231–5247 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5231 Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work? Yada Pruksachatkun1∗Jason Phang1∗Haokun Liu1∗Phu Mon Htut1∗ Xiaoyi Zhang1 Richard Yuanzhe Pang1 Clara Vania1 Katharina Kann2 Samuel R. Bowman1 1New York University 2University of Colorado Boulder {yp913,bowman}@nyu.edu Abstract While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate–target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings. 1 Introduction Unsupervised pretraining—e.g., BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019b)—has recently pushed the state of the art on many natural language understanding tasks. One method of further improving pretrained models that has been shown to be broadly helpful is to first finetune a pretrained model on an intermediate task, before fine-tuning again on the target task of interest (Phang et al., 2018; Wang et al., 2019a; Clark et al., 2019a; Sap et al., 2019), also referred to as ∗Equal contribution. Figure 1: Our experimental pipeline with intermediatetask transfer learning and subsequent fine-tuning on target and probing tasks. STILTs. However, this approach does not always improve target task performance, and it is unclear under what conditions it does. This paper offers a large-scale empirical study aimed at addressing this open question. We perform a broad survey of intermediate and target task pairs, following an experimental pipeline similar to Phang et al. (2018) and Wang et al. (2019a). This differs from previous work in that we use a larger and more diverse set of intermediate and target tasks, introduce additional analysis-oriented probing tasks, and use a better-performing base model RoBERTa (Liu et al., 2019b). We aim to answer the following specific questions: • What kind of tasks tend to make good intermediate tasks across a wide variety of target tasks? • Which linguistic skills does a model learn from intermediate-task training? • Which skills learned from intermediate tasks help the model succeed on which target tasks? The first question is the most straightforward: it can be answered by a sufficiently exhaustive search over possible intermediate–target task pairs. The second and third questions address the why rather than the when, and differ in a crucial detail: A 5232 model might learn skills by training on an intermediate task, but those skills might not help it to succeed on a target task. Our search for intermediate tasks focuses on natural language understanding tasks in English. In particular, we run our experiments on 11 intermediate tasks and 10 target tasks, which results in a total of 110 intermediate–target task pairs. We use 25 probing tasks—tasks that each target a narrowly defined model behavior or linguistic phenomenon— to shed light on which skills are learned from each intermediate task. Our findings include the following: (i) Natural language inference tasks as well as QA tasks which involve commonsense reasoning are generally useful as intermediate tasks. (ii) SocialIQA and QQP as intermediate tasks are not helpful as a means to teach the skills captured by our probing tasks, while finetuning first on MNLI and CosmosQA result in an increase in all skills. (iii) While a model’s ability to learn skills relating to input-noising correlate with target task performance, low-level skills such as knowledge of a sentence’s raw content preservation skills and ability to detect various attributes of input sentences such as tense of main verb and sentence length are less correlated with target task performance. This suggests that a model’s ability to do well on the masked language modelling (MLM) task is important for downstream performance. Furthermore, we conjecture that a portion of our analysis is affected by catastrophic forgetting of knowledge learned during pretraining. 2 Methods 2.1 Experimental Pipeline Our experimental pipeline (Figure 1) consists of two steps, starting with a pretrained model: intermediate-task training, and fine-tuning on a target or probing task. Intermediate Task Training We fine-tune RoBERTa on each intermediate task. The training procedure follows the standard procedure of fine-tuning a pretrained model on a target task, as described in Devlin et al. (2019). We opt for single intermediate-task training as opposed to multi-task training (cf. Liu et al., 2019a) to isolate the effect of skills learned from individual intermediate tasks. Target and Probing Task Fine-Tuning After intermediate-task training, we fine-tune our models on each target and probing task individually. Target tasks are tasks of interest to the general community, spanning various facets of natural language, domains, and sources. Probing tasks, while potentially similar in data source to target tasks such as with CoLA, are designed to isolate the presence of particular linguistic capabilities or skills. For instance, solving the target task BoolQ (Clark et al., 2019a) may require various skills including coreference and commonsense reasoning, while probing tasks like the SentEval probing suite (Conneau et al., 2018) target specific syntactic and metadatalevel phenomena such as subject-verb agreement and sentence length detection. 2.2 Tasks Table 1 presents an overview of the intermediate and target tasks. 2.2.1 Intermediate Tasks We curate a diverse set of tasks that either represent an especially large annotation effort or that have been shown to yield positive transfer in prior work. The resulting set of tasks cover question answering, commonsense reasoning, and natural language inference. QAMR The Question–Answer Meaning Representations dataset (Michael et al., 2018) is a crowdsourced QA task consisting of question–answer pairs that correspond to predicate–argument relationships. It is derived from Wikinews and Wikipedia sentences. For example, if the sentence is “Ada Lovelace was a computer scientist.”, a potential question is “What is Ada’s last name?”, with the answer being “Lovelace.” CommonsenseQA CommonsenseQA (Talmor et al., 2019) is a multiple-choice QA task derived from ConceptNet (Speer et al., 2017) with the help of crowdworkers, that is designed to test a range of commonsense knowledge. SciTail SciTail (Khot et al., 2018) is a textual entailment task built from multiple-choice science questions from 4th grade and 8th grade exams, as well as crowdsourced questions (Welbl et al., 2017). The task is to determine whether a hypothesis, which is constructed from a science question and its corresponding answer, is entailed or not (neutral) by the premise. Cosmos QA Cosmos QA is a task for a commonsense-based reading comprehension task 5233 Name |Train| |Dev| task metrics genre/source CommonsenseQA 9,741 1,221 question answering acc. ConceptNet SciTail 23,596 1,304 natural language inference acc. science exams Cosmos QA 25,588 3,000 question answering acc. blogs SocialIQA 33,410 1,954 question answering acc. crowdsourcing CCG 38,015 5,484 tagging acc. Wall Street Journal HellaSwag 39,905 10,042 sentence completion acc. video captions & Wikihow QA-SRL 44,837 7,895 question answering F1/EM Wikipedia SST-2 67,349 872 sentiment classification acc. movie reviews QAMR 73,561 27,535 question answering F1/EM Wikipedia Intermediate Tasks QQP 363,846 40,430 paraphrase detection acc./F1 Quora questions MNLI 392,702 20,000 natural language inference acc. fiction, letters, telephone speech CB 250 57 natural language inference acc./F1 Wall Street Journal, fiction, dialogue COPA 400 100 question answering acc. blogs, photography encyclopedia WSC 554 104 coreference resolution acc. hand-crafted RTE 2,490 278 natural language inference acc. news, Wikipedia MultiRC 5,100 953 question answering F1α/EM crowd-sourced WiC 5,428 638 word sense disambiguation acc. WordNet, VerbNet, Wiktionary BoolQ 9,427 3,270 question answering acc. Google queries, Wikipedia Target Tasks CommonsenseQA 9,741 1,221 question answering acc. ConceptNet Cosmos QA 25,588 3,000 question answering acc. blogs ReCoRD 100,730 10,000 question answering F1/EM news (CNN, Daily Mail) Table 1: Overview of the intermediate tasks (top) and target tasks (bottom) in our experiments. EM is short for Exact Match. The F1 metrics for MultiRC is calculated over all answer-options. formulated as multiple-choice questions (Huang et al., 2019). The questions concern the causes or effects of events that require reasoning not only based on the exact text spans in the context, but also wide-range abstractive commonsense reasoning. It differs from CommonsenseQA in that it focuses on causal and deductive commensense reasoning and that it requires reading comprehension over an auxiliary passage, rather than simply answering a freestanding question. SocialIQA SocialIQA (Sap et al., 2019) is a task for multiple choice QA. It tests for reasoning surrounding emotional and social intelligence in everyday situations. CCG CCGbank (Hockenmaier and Steedman, 2007) is a task that is a translation of the Penn Treebank into a corpus of Combinatory Categorial Grammar (CCG) derivations. We use the CCG supertagging task, which is the task of assigning tags to individual word tokens that jointly determine the parse of the sentence. HellaSwag HellaSwag (Zellers et al., 2019) is a commonsense reasoning task that tests a model’s ability to choose the most plausible continuation of a story. It is built using adversarial filtering (Zellers et al., 2018) with BERT to create challenging negative examples. QA-SRL The question-answer driven semantic role labeling dataset (QA-SRL; He et al., 2015) for a QA task that is derived from a semantic role labeling task. Each example, which consists of a set of questions and answers, corresponds to a predicate-argument relationship in the sentence it is derived from. Unlike QAMR, which focuses on all words in the sentence, QA-SRL is specifically focused on verbs. SST-2 The Stanford sentiment treebank (Socher et al., 2013) is a sentiment classification task based on movie reviews. We use the binary sentence classification version of the task. QQP The Quora Question Pairs dataset1 is constructed based on questions posted on the community question-answering website Quora. The task is to determine if two questions are semantically equivalent. MNLI The Multi-Genre Natural Language Inference dataset (Williams et al., 2018) is a crowdsourced collection of sentence pairs with textual entailment annotations across a variety of genres. 2.2.2 Target Tasks We use ten target tasks, eight of which are drawn from the SuperGLUE benchmark (Wang et al., 2019b). The tasks in the SuperGLUE benchmark 1http://data.quora.com/First-Quora-DatasetReleaseQuestion-Pairs 5234 cover question answering, entailment, word sense disambiguation, and coreference resolution and have been shown to be easy for humans but difficult for models like BERT. Although we offer a brief description of the tasks below, we refer readers to the SuperGLUE paper for a more detailed description of the tasks. CommitmentBank (CB; de Marneffe et al., 2019) is a three-class entailment task that consists of texts and an embedded clause that appears in each text, in which models must determine whether that embedded clause is entailed by the text. Choice of Plausible Alternatives (COPA; Roemmele et al., 2011) is a classification task that consists of premises and a question that asks for the cause or effect of each premise, in which models must correctly pick between two possible choices. Winograd Schema Challenge (WSC; Levesque et al., 2012) is a sentence-level commonsense reasoning task that consists of texts, a pronoun from each text, and a list of possible noun phrases from each text. The dataset has been designed such that world knowledge is required to determine which of the possible noun phrases is the correct referent to the pronoun. We use the SuperGLUE binary classification cast of the task, where each example consists of a text, a pronoun, and a noun phrase from the text, which models must classify as being coreferent to the pronoun or not. Recognizing Textual Entailment (RTE; Dagan et al., 2005, et seq) is a textual entailment task. Multi-Sentence Reading Comprehension (MultiRC; Khashabi et al., 2018) is a multi-hop QA task that consists of paragraphs, a question on each paragraph, and a list of possible answers, in which models must distinguish which of the possible answers are true and which are false. Word-in-Context (WiC; Pilehvar and Camacho-Collados, 2019) is a binary classification word sense disambiguation task. Examples consist of two text snippets, with a polysemous word that appears in both. Models must determine whether the same sense of the word is used in both contexts. BoolQ (Clark et al., 2019a) is a QA task that consists of passages and a yes/no question associated with each passage. Reading Comprehension with Commonsense Reasoning (ReCoRD; Zhang et al., 2018) is a multiple-choice QA task that consists of news articles. For each article, models are given a question about each article with one entity masked out and a list of possible entities from the article, and the goal is to correctly identify the masked entity out of the list. Additionally, we use CommonsenseQA and Cosmos QA as target tasks, due to their unique combination of small dataset size and high level of difficulty for high-performing models like BERT from our set of intermediate tasks. 2.2.3 Probing Tasks We use well-established datasets for our probing tasks, including the edge-probing suite from Tenney et al. (2019b), function word oriented tasks from Kim et al. (2019), and sentence-level probing datasets (SentEval; Conneau et al., 2018). Acceptability Judgment Tasks This set of binary classifications tasks was designed to investigate if a model can judge the grammatical acceptability of a sentence. We use the following five datasets: AJ-CoLA is a task that tests for a model’s understanding of general grammaticality using the Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019b), which is drawn from 22 theoretical linguistics publications. The other tasks concern the behaviors of specific classes of function words, using the dataset by Kim et al. (2019): AJ-WH is a task that tests a model’s ability to detect if a wh-word in a sentence has been swapped with another wh-word, which tests a model’s ability to identify the antecedent associated with the wh-word. AJ-Def is a task that tests a model’s ability to detect if the definite/indefinite articles in a given sentence have been swapped. AJCoord is a task that tests a model’s ability to detect if a coordinating conjunction has been swapped, which tests a model’s ability to understand how ideas in the various clauses relate to each other. AJ-EOS is a task that tests a model’s ability to identify grammatical sentences without indicators such as punctuation marks and capitalization, and consists of grammatical text that are removed of punctuation. Edge-Probing Tasks The edge probing (EP) tasks are a set of core NLP labeling tasks, collected by Tenney et al. (2019b) and cast into Boolean classification. These tasks focus on the syntactic and semantic relations between spans in a sentence. The first five tasks use the OntoNotes corpus (Hovy et al., 2006): Part-of-Speech tagging (EP-POS) is a task that tests a model’s ability to predict the syntactic category (noun, verb, adjective, etc.) for each word in the sentence. Named entity recognition (EP-NER) is task that tests a model’s abil5235 ity to predict the category of an entity in a given span. Semantic Role Labeling (EP-SRL) is a task that tests a model’s ability to assign a label to a given span of words that indicates its semantic role (agent, goal, etc.) in the sentence. Coreference (EP-Coref) is a task that tests a model’s ability to classify if two spans of tokens refer to the same entity/event. The other datasets can be broken down into both syntactic and semantic probing tasks. Constituent labeling (EP-Const) is a task that tests a model’s ability to classify a non-terminal label for a span of tokens (e.g., noun phrase, verb phrase, etc.). Dependency labeling (EP-UD) is a task that tests a model on the functional relationship of one token relative to another. We use the English Web Treebank portion of Universal Dependencies 2.2 release (Silveira et al., 2014) for this task. Semantic ProtoRole labeling is a task that tests a model’s ability to predict the fine-grained non-exclusive semantic attributes of a given span. Edge probing uses two datasets for SPR: SPR1 (EP-SPR1) (Teichert et al., 2017), derived from the Penn Treebank, and SPR2 (EP-SPR2) (Rudinger et al., 2018), derived from the English Web Treebank. Relation classification (EP-Rel) is a task that tests a model’s ability to predict the relation between two entities. We use the SemEval 2010 Task 8 dataset (Hendrickx et al., 2009) for this task. For example, the relation between “Yeri” and “Korea” in “Yeri is from Korea” is ENTITY-ORIGIN. The Definite Pronoun Resolution dataset (Rahman and Ng, 2012) (EPDPR) is a task that tests a model’s ability to handle coreference, and differs from OntoNotes in that it focuses on difficult cases of definite pronouns. SentEval Tasks The SentEval probing tasks (SE) (Conneau et al., 2018) are cast in the form of single-sentence classification. Sentence Length (SE-SentLen) is a task that tests a model’s ability to classify the length of a sentence. Word Content (SE-WC) is a task that tests a model’s ability to identify which of a set of 1,000 potential words appear in a given sentence. Tree Depth (SETreeDepth) is a task that tests a model’s ability to estimate the maximum depth of the constituency parse tree of the sentence. Top Constituents (SETopConst) is a task that tests a model’s ability to identify the high-level syntactic structure of the sentence by choosing among 20 constituent sequences (the 19 most common, plus an other category). Bigram Shift (SE-BShift) is a task that tests a model’s ability to classify if two consecutive tokens in the same sentence have been reordered. Coordination Inversion (SE-CoordInv) is a task that tests a model’s ability to identify if two coordinating clausal conjoints are swapped (ex: “he knew it, and he deserved no answer.”). PastPresent (SE-Tense) is a task that tests a model’s ability to classify the tense of the main verb of the sentence. Subject Number (SE-SubjNum) and Object Number (SE-ObjNum) are tasks that test a model’s ability to classify whether the subject or direct object of the main clause is singular or plural. Odd-Man-Out (SE-SOMO) is a task that tests the model’s ability to predict whether a sentence has had one of its content words randomly replaced with another word of the same part of speech. 3 Experiments Training and Optimization We use the largescale pretrained model RoBERTaLarge in all experiments. For each intermediate, target, and probing task, we perform a hyperparameter sweep, varying the peak learning rate ∈{2 × 10−5, 1 × 10−5, 5 × 10−6, 3 × 10−6} and the dropout rate ∈{0.2, 0.1}. After choosing the best learning rate and dropout rate, we apply the best configuration for each task for all runs. For each task, we use the batch size that maximizes GPU usage, and use a maximum sequence length of 256. Aside from these details, we follow the RoBERTa paper for all other training hyperparameters. We use NVIDIA P40 GPUs for our experiments. A complete pipeline with one intermediate task works as follows: First, we fine-tune RoBERTa on the intermediate task. We then fine-tune copies of the resulting model separately on each of the 10 target tasks and 25 probing tasks and test on their respective validation sets. We run the same pipeline three times for the 11 intermediate tasks, plus a set of baseline runs without intermediate training. This gives us 35×12×3 = 1260 observations. We train our models using the Adam optimizer (Kingma and Ba, 2015) with linear decay and early stopping. We run training for a maximum of 10 epochs when more than 1,500 training examples are available, and 40 epochs otherwise to ensure models are sufficiently trained on small datasets. We use the jiant (Wang et al., 2019c) NLP toolkit, based on PyTorch (Paszke et al., 2019), Hugging Face Transformers (Wolf et al., 2019), and AllenNLP (Gardner et al., 2017), for all of our 5236 QAMR CSenseQA SciTail CosmosQASocialIQA CCG HellaSwag QA-SRL SST-2 QQP MNLI CB COPA WSC RTE MultiRC WiC BoolQ CSenseQA CosmosQA ReCoRD Avg. Target EP-POS EP-NER EP-SRL EP-Coref EP-Const EP-SPR1 EP-SPR2 EP-DPR EP-Rel EP-UD SE-SentLen SE-WC SE-TreeDepth SE-TopConst SE-BShift SE-Tense SE-SubjNum SE-ObjNum SE-SOMO SE-CoordInv AJ-CoLA AJ-Wh AJ-Def AJ-Coord AJ-EOS -4.0 -0.4 -6.2 -0.4 -21.7 -12.2 -3.1 -7.2 -1.2 -31.0 -0.4 -4.0 8.7 4.3 6.0 -3.7 -20.7 6.7 -3.7 -2.0 0.7 -0.7 -0.3 0.0 1.3 2.9 -4.8 -3.2 3.6 4.8 2.6 -3.8 0.3 0.6 3.4 3.4 5.1 -4.3 -18.2 4.8 1.1 2.6 -2.4 3.1 2.4 7.9 2.6 10.1 -10.6 -8.1 6.8 2.6 1.1 -4.2 6.5 -1.3 0.1 2.5 1.7 -2.0 -1.1 0.1 2.1 -6.4 1.4 0.9 -0.1 0.9 0.1 1.1 -2.8 -10.6 0.7 0.0 0.9 -4.2 1.4 -4.7 -1.6 -2.6 0.1 -7.8 -12.0 0.4 -5.1 -0.9 -7.6 -2.6 -2.5 -0.1 -2.1 -0.4 -9.1 -6.9 -0.0 -3.0 -0.0 -8.4 -0.5 -4.0 -0.0 -1.5 -0.1 -12.4 -6.1 0.2 -4.7 -0.5 -11.9 -1.6 -1.8 1.9 0.2 2.6 -7.9 -9.9 2.0 -1.3 -0.4 -7.1 0.7 0.0 0.0 -0.0 -0.1 -0.1 -0.0 0.0 -0.0 0.1 -97.4 0.0 -0.1 0.0 -0.1 -0.1 -21.5 -0.2 0.0 -0.2 0.0 -64.9 -0.3 12.2 0.1 30.7 12.4 -61.7 31.2 30.9 31.1 31.9 -61.9 31.3 0.0 0.0 0.0 0.1 -0.6 -0.3 0.1 0.0 -0.1 -13.4 0.1 -0.0 -0.1 -0.1 0.0 -0.0 -0.2 -0.1 0.0 -0.9 -0.2 -0.1 -0.2 0.1 0.1 0.2 -1.7 -0.4 0.2 0.1 0.3 -21.9 0.2 -0.2 -0.0 -0.1 0.1 -3.9 -0.4 -0.1 -0.3 -0.1 -8.2 -0.1 7.5 7.9 7.3 8.6 -15.6 3.5 8.3 8.2 7.9 -14.7 6.6 0.1 -25.0 0.4 0.1 -55.1 0.2 0.4 -28.8 0.8 -85.4 0.1 -0.2 0.0 0.0 0.1 -62.0 -0.2 0.0 -0.1 0.1 -89.7 -0.0 -0.0 -0.2 -0.1 -0.3 -0.4 0.5 -0.1 0.1 0.1 -0.9 -0.2 -0.1 -0.0 -0.0 -0.0 -33.3 -0.0 0.0 -0.0 -0.0 -33.8 -0.0 0.1 -0.1 -0.1 -0.1 -1.1 0.3 -0.5 -0.1 -0.1 -1.4 -0.6 -0.2 -0.3 -0.3 -0.1 -0.4 -0.2 -0.2 -0.2 -0.2 -0.4 -0.3 -0.1 0.2 0.1 0.0 -0.4 -0.2 0.2 0.0 0.1 -0.1 0.1 -1.1 -0.4 -0.5 -0.0 -0.3 -1.3 0.0 -0.8 -0.2 -1.5 -1.2 0.3 0.5 0.4 0.9 -0.1 0.8 0.8 0.2 0.5 -0.1 0.4 -0.6 -0.1 -0.1 0.0 -0.5 0.2 -0.3 0.2 -0.4 0.2 -0.1 -2.2 0.4 -1.1 0.1 -4.1 -3.6 0.2 -1.8 -1.0 -2.5 -1.2 -0.7 -0.1 -0.4 -0.2 -1.3 -1.0 -0.0 -0.3 -0.2 -3.0 -0.1 -2.6 -0.7 -1.9 -1.6 -10.3 -6.9 -0.7 -3.7 -0.6 -5.5 -1.1 13.4 26.8 3.4 14.5 14.2 26.8 14.5 28.4 28.4 3.8 11.8 23.1 46.0 11.1 0.0 18.0 46.4 32.4 22.5 14.0 11.1 23.7 25.2 17.7 11.1 20.2 22.3 32.6 11.1 22.2 17.4 11.1 11.1 11.9 13.2 13.9 13.2 -21.3 8.5 5.0 11.8 -4.5 -13.9 6.0 Target Probing Baseline Performance 99.1 86.0 67.3 83.5 47.4 70.5 86.6 74.0 81.9 86.0 78.2 98.1 97.0 61.9 97.1 88.8 87.2 83.8 81.4 85.4 95.8 46.4 99.8 76.1 93.5 97.7 91.1 93.3 95.7 77.2 88.3 68.1 69.9 47.2 47.2 84.7 Figure 2: Transfer learning results between intermediate and target/probing tasks. Baselines (rightmost column) are models fine-tuned without intermediate-task training. Each cell shows the difference in performance (delta) between the baseline and model with intermediate-task training. We use the macro-average of each task’s metrics as the reported performance. Refer to Table 1 for target task metrics. experiments. 4 Results and Analysis 4.1 Investigating Transfer Performance Figure 2 shows the differences in target and probing task performances (deltas) between the baselines and models trained with intermediate-task training, each averaged across three restarts. A positive delta indicates successful transfer. Target Task Performance We define good intermediate tasks as ones that lead to positive transfer in target task performance. We observe that tasks that require complex reasoning and inference tend to make good intermediate tasks. These include MNLI and commonsense-oriented tasks such as CommonsenseQA, HellaSWAG, and Cosmos QA (with our poor performance with the similar SocialIQA serving as a suprising exception). SocialIQA, CCG, and QQP as intermediate tasks lead to negative transfer on all target tasks and the majority of probing tasks. We investigate the role of dataset size in the intermediate tasks with downstream task performance by additionally running a set of experiments on varying amounts of data on five intermediate tasks, which is shown in the Appendix. We do not find differences in intermediate-task dataset size to have any substantial consistent impact on downstream target task performance. In addition, we find that smaller target tasks such as RTE, BoolQ, MultiRC, WiC, WSC benefit the most from intermediate-task training.2 There are no instances of positive transfer to CommitmentBank, since our baseline model achieves 100% accuracy. Probing Task Performance Looking at the probing task performance, we find that intermediate-task training affects performance 2The deltas for experiments with the same intermediate and target tasks are not 0 as may be expected. This is because we perform both intermediate and target training phases in these cases, with reset optimizer states and stopping criteria in between intermediate and target training. 5237 on low-level syntactic probing tasks uniformly across intermediate tasks; we observe little to no improvement for the SentEval probing tasks and higher improvement for acceptability judgment probing tasks, except for AJ-CoLA. This is also consistent with Phang et al. (2018), who find negative transfer with CoLA in their experiments. Variation across Intermediate Tasks There is variable performance across higher-level syntactic or semantic tasks such as the Edge-Probing and SentEval tasks. SocialIQA and QQP have negative transfer for most of the Edge-Probing tasks, while CosmosQA and QA-SRL see drops in performance only for EP-Rel. While we do see that intermediate-task trained models improve performance on EP-SRL and EP-DPR across the board, there is little to no gain in SentEval probing tasks from any intermediate tasks. Additionally, tasks that increase performance in the most number of probing tasks perform well as intermediate tasks. Degenerate Runs We find that the model may not exceed chance performance in some training runs. This mostly affects the baseline (no intermediate training) runs on the acceptability judgment probing tasks, excluding AJ-CoLA, which all have very small training sets. We include these degenerate runs in our analysis to reflect this phenomenon. Consistent with Phang et al. (2018), we find that intermediate-task training reduces the likelihood of degenerate runs, leading to ostensibly positive transfer results on those four acceptability judgment tasks across most intermediate tasks. On the other hand, extremely negative transfer from intermediate-task training can also result in a higher frequency of degenerate runs in downstream tasks, as we observe in the cases of using QQP and SocialIQA as intermediate tasks. We also observe a number of degenerate runs on the EP-SRL task as well as the EP-Rel task. These degenerate runs decrease positive transfer in probing tasks, such as with SocialIQA and QQP probing performance, and also decrease the average amount of positive transfer we see in target task performance. 4.2 Correlation Between Probing and Target Task Performance Next, we investigate the relationship between target and probing tasks in an attempt to understand why certain intermediate-task models perform better on certain target tasks. We use probing task performance as an indicator of the acquisition of particular language skills. We compute the Spearman correlation between probing-task and target-task performances across training on different intermediate tasks and multiple restarts, as shown in Figure 3. We test for statistical significance at p = 0.05 and apply HolmBonferroni correction for multiple testing. We omit correlations that are not statistically significant. We opt for Spearman and not Pearson correlation because of the wide variety of metrics used for the different tasks.3 We find that acceptability judgment probing task performance is generally uncorrelated with the target task performance, except for AJ-CoLA. Similarly, many of the SentEval tasks do not correlate with the target tasks, except for Bigram Shift (SEBShift), Odd-Man-Out (SE-SOMO) and Coordination Inversion (SE-CoordInv). These three tasks are input noising tasks—tasks where a model has to predict if a given input sentence has been randomly modified—which are, by far, the most similar tasks we study to the masked language modeling task that is used for training RoBERTa. This may explain the strong correlation with the performance of the target tasks. We also find that some of these strong correlations, such as with SE-SOMO and SE-CoordInv, are almost entirely driven by variation in the degree of negative transfer, rather than any positive transfer. Intuitively, fine-tuning RoBERTa on an intermediate task can cause the model to forget some of its ability to perform the MLM task. Thus, a future direction for potential improvement for intermediate-task training may be integrating the MLM objective into intermediate-task training or bounding network parameter changes to reduce catastrophic forgetting (Kirkpatrick et al., 2016; Chen et al., 2019). Interestingly, while intermediate tasks such as SocialIQA, CCG and QQP, which show negative transfer on target tasks, tend to have negative transfer on these three probing tasks, the intermediate tasks with positive transfer, such as CommonsenseQA tasks and MNLI, do not appear to adversely affect the performance on these probing tasks. This asymmetric impact may indicate that, beyond the similarity of intermediate and target tasks, avoiding catastrophic forgetting of pretrain3Full correlation tables across all target and probing tasks with both Spearman and Pearson correlations can be found in the Appendix. 5238 CB COPA WSC RTE MultiRC WiC BoolQ CSenseQA CosmosQA ReCoRD EP-POS EP-NER EP-SRL EP-Coref EP-Const EP-SPR1 EP-SPR2 EP-DPR EP-Rel EP-UD SE-SentLen SE-WC SE-TreeDepth SE-TopConst SE-BShift SE-Tense SE-SubjNum SE-ObjNum SE-SOMO SE-CoordInv AJ-CoLA AJ-Wh AJ-Def AJ-Coord AJ-EOS CB COPA WSC RTE MultiRC WiC BoolQ CSenseQA CosmosQA ReCoRD 1 .73 .74 .72 .82 .69 .71 .70 .72 .66 .74 .63 .75 .64 .71 1 .67 .66 .74 1 .63 .67 1 .86 .83 .85 .68 .67 .71 .66 .71 .74 .80 .71 .73 .86 1 .79 .76 .67 .66 .78 .74 .71 .73 .79 1 .74 .83 .79 1 .79 .80 .76 .74 .70 .69 .76 .68 .75 .82 .78 .72 .66 .85 .76 .79 1 .85 .83 .61 .74 .77 .68 .69 .80 .72 .88 .76 .76 .82 .68 .67 .80 .85 1 .86 .63 .70 .76 .66 .74 .81 .84 .87 .80 .83 .69 .67 .66 .76 .83 .86 1 .66 .71 .77 .69 .73 .84 .76 .83 .79 .71 Target Probing Figure 3: Correlations between probing and target task performances. Each cell contains the Spearman correlation between probing-task and target-task performances across training on different intermediate tasks and random restarts. We test for statistical significance at p = 0.05 with Holm-Bonferroni correction, and omit the correlations that are not statistically significant. ing is critical to successful intermediate-task transfer. The remaining SentEval probing tasks have similar delta values (Figure 2), which may indicate that there is insufficient variation among transfer performance to derive significant correlations. Among the edge-probing tasks, the more semantic tasks such as coreference (EP-Coref and EP-DPR), semantic proto-role labeling (EP-SPR1 and EPSPR2), and dependency labeling (EP-Rel) show the highest correlations with our target tasks. As our set of target tasks is also oriented towards semantics and reasoning, this is to be expected. On the other hand, among the target tasks, we find that ReCoRD, CommonsenseQA and Cosmos QA—all commonsense-oriented tasks— exhibit both high correlations with each other as well as a similar set of correlations with the probing tasks. Similarly, BoolQ, MultiRC, and RTE correlate strongly with each other and have similar patterns of probing-task performance. 5 Related Work Within the paradigm of training large pretrained Transformer language representations via intermediate-stage training before fine-tuning on a target task, positive transfer has been shown in both sequential task-to-task (Phang et al., 2018) and multi-task-to-task (Liu et al., 2019a; Raffel et al., 2019) formats. Wang et al. (2019a) perform an extensive study on transfer with BERT, finding language modeling and NLI tasks to be among the most beneficial tasks for improving target-task performance. Talmor and Berant (2019) perform a similar cross-task transfer study on reading comprehension datasets, finding similar positive transfer in most cases, with the biggest gains stemming from a combination of multiple QA datasets. Our work consists of a larger, more diverse, set of intermediate task–target task pairs. We also use probing tasks to shed light on the skills learned by the intermediate tasks. Among the prior work on predicting transfer performance, Bingel and Søgaard (2017) is the most similar to ours. They do a regression analysis that predicts target-task performance on the basis of various features of the source and target tasks and task pairs. They focus on a multi-task training setting without self-supervised pretraining, as opposed to our single-intermediate task, three-step procedure. Similar work (Lin et al., 2019b) has been done on cross-lingual transfer—the analogous challenge of transferring learned knowledge from a highresource to a low-resource language. Many recent works have attempted to understand the knowledge and linguistic skills BERT learns, for instance by analyzing the language model surprisal for subject–verb agreements (Goldberg, 2018), identifying specific knowledge or phenomena encapsulated in the representations learned by BERT using probing tasks (Tenney et al., 2019b,a; Warstadt et al., 2019a; Lin et al., 2019a; Hewitt and Manning, 2019; Jawahar et al., 2019), analyzing the attention heads of BERT (Clark et al., 2019b; 5239 Coenen et al., 2019; Lin et al., 2019a; Htut et al., 2019), and testing the linguistic generalizations of BERT across runs (McCoy et al., 2019). However, relatively little work has been done to analyze fine-tuned BERT-style models (Wang et al., 2019a; Warstadt et al., 2019a). 6 Conclusion and Future Work This paper presents a large-scale study on when and why intermediate-task training works with pretrained models. We perform experiments on RoBERTa with a total of 110 pairs of intermediate and target tasks, and perform an analysis using 25 probing tasks, covering different semantic and syntactic phenomena. Most directly, we observe that tasks like Cosmos QA and HellaSwag, which require complex reasoning and inference, tend to work best as intermediate tasks. Looking to our probing analysis, intermediate tasks that help RoBERTa improve across the board show the most positive transfer in downstream tasks. However, it is difficult to draw definite conclusions about the specific skills that drive positive transfer. Intermediate-task training may help improve the handling of syntax, but there is little to no correlation between target-task and probing-task performance for these skills. Probes for higherlevel semantic abilities tend to have a higher correlation with the target-task performance, but these results are too diffuse to yield more specific conclusions. Future work in this area would benefit greatly from improvements to both the breadth and depth of available probing tasks. We also observe a worryingly high correlation between target-task performance and the two probing tasks which most closely resemble RoBERTa’s masked language modeling pretraining objective. Thus, the results of our intermediate-task training analysis may be driven in part by forgetting of knowledge acquired during pretraining. Our results therefore suggest a need for further work on efficient transfer learning mechanisms. Acknowledgments This project has benefited from support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), by Samsung Research (under the project Improving Deep Learning using Latent Structure), by Intuit, Inc., and by NVIDIA Corporation (with the donation of a Titan V GPU). References Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 164–169, Valencia, Spain. Association for Computational Linguistics. Xinyang Chen, Sinan Wang, Bo Fu, Mingsheng Long, and Jianmin Wang. 2019. Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’ Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 1906–1916. Curran Associates, Inc. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019a. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019b. What does BERT look at? an analysis of BERT’s attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics. Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda B. Vi´egas, and Martin Wattenberg. 2019. Visualizing and measuring the geometry of BERT. Unpublished manuscript available on arXiv. Alexis Conneau, German Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Machine Learning Challenges Workshop, pages 177–190. Springer. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 5240 pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform. Unpublished manuscript available on arXiv. Yoav Goldberg. 2018. Assessing BERT’s syntactic abilities. Unpublished manuscript available on arXiv. Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 643–653, Lisbon, Portugal. Association for Computational Linguistics. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009), pages 94– 99, Boulder, Colorado. Association for Computational Linguistics. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank. Computational Linguistics, 33(3):355–396. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57–60, New York City, USA. Association for Computational Linguistics. Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R. Bowman. 2019. Do attention heads in bert track syntactic dependencies? Unpublished manuscript available on arXiv. Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2391–2401, Hong Kong, China. Association for Computational Linguistics. Ganesh Jawahar, Benoˆıt Sagot, and Djam´e Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651–3657, Florence, Italy. Association for Computational Linguistics. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252–262, New Orleans, Louisiana. Association for Computational Linguistics. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In Thirty-Second AAAI Conference on Artificial Intelligence. Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 235–249, Minneapolis, Minnesota. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2016. Overcoming catastrophic forgetting in neural networks. In Proceedings of the national academy of sciences (PNAS). Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, KR’12, pages 552–561. AAAI Press. Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019a. Open sesame: Getting inside BERT’s linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241–253, Florence, Italy. Association for Computational Linguistics. 5241 Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019b. Choosing transfer languages for crosslingual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487–4496, Florence, Italy. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized bert pretraining approach. Unpublished manuscript available on arXiv. Marie-Catherine de Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The CommitmentBank: Investigating projection in naturally occurring discourse. In proceedings of Sinn und Bedeutung, volume 23, pages 107–124. R. Thomas McCoy, Junghyun Min, and Tal Linzen. 2019. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. Unpublished manuscript available on arXiv. Julian Michael, Gabriel Stanovsky, Luheng He, Ido Dagan, and Luke Zettlemoyer. 2018. Crowdsourcing question-answer meaning representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 560–568, New Orleans, Louisiana. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’ Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Jason Phang, Thibault F´evry, and Samuel R. Bowman. 2018. Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks. Unpublished manuscript available on arXiv. Mohammad Taher Pilehvar and Jose CamachoCollados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267–1273, Minneapolis, Minnesota. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. Unpublished manuscript available on arXiv. Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The Winograd schema challenge. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 777–789, Jeju Island, Korea. Association for Computational Linguistics. Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series. Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, and Benjamin Van Durme. 2018. NeuralDavidsonian Semantic Proto-role Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 944– 955, Brussels, Belgium. Association for Computational Linguistics. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4453– 4463, Hong Kong, China. Association for Computational Linguistics. Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A Gold Standard Dependency Corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 2897–2904, Reykjavik, Iceland. European Language Resources Association (ELRA). Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. 5242 Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. Alon Talmor and Jonathan Berant. 2019. MultiQA: An empirical investigation of generalization and transfer in reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4911–4921, Florence, Italy. Association for Computational Linguistics. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics. Adam Teichert, Adam Poliak, Benjamin Van Durme, and Matthew R Gormley. 2017. Semantic proto-role labeling. In Thirty-First AAAI Conference on Artificial Intelligence. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bowman. 2019a. Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4465–4476, Florence, Italy. Association for Computational Linguistics. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. SuperGLUE: A multi-task benchmark and analysis platform for natural language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’ Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 3261– 3275. Curran Associates, Inc. Alex Wang, Ian F. Tenney, Yada Pruksachatkun, Katherin Yu, Jan Hula, Patrick Xia, Raghu Pappagari, Shuning Jin, R. Thomas McCoy, Roma Patel, Yinghui Huang, Jason Phang, Edouard Grave, Haokun Liu, Najoung Kim, Phu Mon Htut, Thibault F´evry, Berlin Chen, Nikita Nangia, Anhad Mohananey, Katharina Kann, Shikha Bordia, Nicolas Patry, David Benton, Ellie Pavlick, and Samuel R. Bowman. 2019c. jiant 1.2: A software toolkit for research on general-purpose text understanding models. Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019a. Investigating BERT’s knowledge of language: Five analysis methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2870–2880, Hong Kong, China. Association for Computational Linguistics. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019b. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics (TACL), 7:625–641. Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. In Proceedings of the 3rd Workshop on Noisy Usergenerated Text, pages 94–106, Copenhagen, Denmark. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. Unpublished manuscript available on arXiv. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93– 104, Brussels, Belgium. Association for Computational Linguistics. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791– 5243 4800, Florence, Italy. Association for Computational Linguistics. Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. Record: Bridging the gap between human and machine commonsense reading comprehension. Unpublished manuscript available on arXiv. 5244 A Correlation Between Probing and Target Task Performance Figure 4 shows the correlation matrix using Spearman correlation and Figure 5 shows the matrix using Pearson correlation. B Effect of Intermediate Task Size on Target Task Performance Figure 6 shows the effect of dataset size on intermediate task training on downstream target task performance for five intermediate tasks, which were picked to maximize the variety of original intermediate task sizes and effectiveness in transfer learning abilities. 5245 CB COPA WSC RTE MultiRC WiC BoolQ CSenseQA CosmosQA ReCoRD EP-POS EP-NER EP-SRL EP-Coref EP-Const EP-SPR1 EP-SPR2 EP-DPR EP-Rel EP-UD SE-SentLen SE-WC SE-TreeDepth SE-TopConst SE-BShift SE-Tense SE-SubjNum SE-ObjNum SE-SOMO SE-CoordInv AJ-CoLA AJ-Wh AJ-Def AJ-Coord AJ-EOS CB COPA WSC RTE MultiRC WiC BoolQ CSenseQA CosmosQA ReCoRD EP-POS EP-NER EP-SRL EP-Coref EP-Const EP-SPR1 EP-SPR2 EP-DPR EP-Rel EP-UD SE-SentLen SE-WC SE-TreeDepth SE-TopConst SE-BShift SE-Tense SE-SubjNum SE-ObjNum SE-SOMO SE-CoordInv AJ-CoLA AJ-Wh AJ-Def AJ-Coord AJ-EOS 1 .31 .37 .58 .73 -.04 .74 .72 .82 .69 .59 .50 .55 .71 .08 .70 .72 .66 .39 .74 .02 .49 .29 .12 .63 .24 .38 -.12 .75 .64 .71 .12 .03 -.04 .55 .31 1 .25 .67 .57 .17 .52 .66 .55 .59 .02 .42 -.10 .46 .13 .31 .41 .40 .22 .53 -.26 .15 -.08 .05 .58 .36 .25 .07 .74 .48 .55 .03 -.13 -.34 .38 .37 .25 1 .36 .37 .16 .40 .45 .39 .42 .40 .38 .26 .44 .01 .49 .51 .63 .34 .52 .03 .34 .19 .31 .54 .47 .42 .21 .45 .44 .43 .16 -.26 .03 .35 .58 .67 .36 1 .86 .27 .83 .85 .68 .67 .31 .60 .34 .71 .25 .58 .66 .51 .32 .71 .09 .40 .01 .14 .59 .29 .25 -.04 .74 .80 .71 .15 -.14 -.22 .38 .73 .57 .37 .86 1 .16 .79 .76 .67 .66 .27 .51 .40 .78 .40 .55 .74 .57 .28 .71 .07 .46 .10 .11 .56 .23 .26 -.02 .73 .79 .59 .01 -.07 -.15 .48 -.04 .17 .16 .27 .16 1 .05 .10 -.05 .01 -.29 -.00 -.04 .02 .18 .04 .04 .14 -.09 .11 -.03 .05 -.10 .10 .15 -.21 -.09 .35 .15 .06 .03 -.07 -.31 -.11 .26 .74 .52 .40 .83 .79 .05 1 .79 .80 .76 .47 .59 .46 .74 .17 .70 .69 .58 .29 .76 .04 .41 .05 .11 .68 .37 .28 -.13 .75 .82 .78 .15 -.05 -.16 .35 .72 .66 .45 .85 .76 .10 .79 1 .85 .83 .40 .61 .36 .74 .06 .77 .68 .69 .41 .80 .06 .48 .07 .27 .72 .47 .39 -.15 .88 .76 .76 .20 -.18 -.19 .40 .82 .55 .39 .68 .67 -.05 .80 .85 1 .86 .59 .63 .51 .70 -.03 .76 .66 .74 .45 .81 .01 .54 .14 .24 .84 .33 .36 -.15 .87 .80 .83 .27 .06 -.18 .52 .69 .59 .42 .67 .66 .01 .76 .83 .86 1 .44 .66 .43 .71 .08 .77 .69 .73 .45 .84 .09 .46 .21 .31 .76 .39 .42 -.11 .83 .79 .71 .21 -.10 -.15 .50 .59 .02 .40 .31 .27 -.29 .47 .40 .59 .44 1 .67 .56 .45 -.28 .55 .48 .43 .47 .50 .35 .48 .45 .20 .42 .28 .34 -.35 .40 .39 .57 .40 .24 .06 .22 .50 .42 .38 .60 .51 -.00 .59 .61 .63 .66 .67 1 .58 .63 .05 .52 .49 .51 .51 .61 .34 .46 .47 .24 .47 .28 .37 -.20 .57 .57 .60 .35 .03 .02 .29 .55 -.10 .26 .34 .40 -.04 .46 .36 .51 .43 .56 .58 1 .45 .13 .39 .49 .48 .36 .53 .41 .51 .39 .25 .36 -.13 .06 -.11 .28 .53 .36 .25 .13 .04 .19 .71 .46 .44 .71 .78 .02 .74 .74 .70 .71 .45 .63 .45 1 .35 .70 .69 .66 .33 .59 .19 .57 .29 .27 .59 .37 .37 -.16 .70 .72 .54 .15 -.10 -.04 .57 .08 .13 .01 .25 .40 .18 .17 .06 -.03 .08 -.28 .05 .13 .35 1 -.06 .10 .18 -.15 .03 -.18 .10 -.11 -.12 .01 -.11 -.09 -.01 -.01 .25 -.20 -.21 -.18 -.02 .17 .70 .31 .49 .58 .55 .04 .70 .77 .76 .77 .55 .52 .39 .70 -.06 1 .61 .68 .44 .75 .17 .52 .27 .41 .61 .55 .54 -.16 .74 .70 .61 .43 -.16 -.04 .48 .72 .41 .51 .66 .74 .04 .69 .68 .66 .69 .48 .49 .49 .69 .10 .61 1 .74 .45 .80 .21 .48 .27 .31 .62 .38 .30 .01 .69 .71 .56 .08 .01 .04 .41 .66 .40 .63 .51 .57 .14 .58 .69 .74 .73 .43 .51 .48 .66 .18 .68 .74 1 .29 .70 -.01 .44 .20 .53 .76 .29 .30 .06 .74 .68 .53 .25 -.17 .04 .55 .39 .22 .34 .32 .28 -.09 .29 .41 .45 .45 .47 .51 .36 .33 -.15 .44 .45 .29 1 .62 .36 .53 .48 .32 .35 .40 .65 -.26 .32 .41 .46 .32 .27 .14 .21 .74 .53 .52 .71 .71 .11 .76 .80 .81 .84 .50 .61 .53 .59 .03 .75 .80 .70 .62 1 .11 .54 .27 .33 .74 .43 .45 -.07 .79 .75 .74 .23 -.07 -.15 .45 .02 -.26 .03 .09 .07 -.03 .04 .06 .01 .09 .35 .34 .41 .19 -.18 .17 .21 -.01 .36 .11 1 .44 .58 .26 -.03 -.05 .04 -.09 -.03 .21 .06 .39 .37 .15 -.08 .49 .15 .34 .40 .46 .05 .41 .48 .54 .46 .48 .46 .51 .57 .10 .52 .48 .44 .53 .54 .44 1 .38 .19 .49 .28 .41 -.22 .48 .51 .40 .42 .28 .02 .41 .29 -.08 .19 .01 .10 -.10 .05 .07 .14 .21 .45 .47 .39 .29 -.11 .27 .27 .20 .48 .27 .58 .38 1 .39 .06 .02 .34 .08 .13 .07 .22 .43 .26 .24 .31 .12 .05 .31 .14 .11 .10 .11 .27 .24 .31 .20 .24 .25 .27 -.12 .41 .31 .53 .32 .33 .26 .19 .39 1 .18 .12 .20 .05 .20 .21 .13 .34 -.17 .13 .22 .63 .58 .54 .59 .56 .15 .68 .72 .84 .76 .42 .47 .36 .59 .01 .61 .62 .76 .35 .74 -.03 .49 .06 .18 1 .29 .26 -.01 .84 .74 .78 .28 .00 -.22 .59 .24 .36 .47 .29 .23 -.21 .37 .47 .33 .39 .28 .28 -.13 .37 -.11 .55 .38 .29 .40 .43 -.05 .28 .02 .12 .29 1 .66 -.18 .43 .28 .25 .16 -.11 .12 .02 .38 .25 .42 .25 .26 -.09 .28 .39 .36 .42 .34 .37 .06 .37 -.09 .54 .30 .30 .65 .45 .04 .41 .34 .20 .26 .66 1 -.04 .44 .33 .34 .39 .06 .18 .18 -.12 .07 .21 -.04 -.02 .35 -.13 -.15 -.15 -.11 -.35 -.20 -.11 -.16 -.01 -.16 .01 .06 -.26 -.07 -.09 -.22 .08 .05 -.01 -.18 -.04 1 .05 .01 -.04 .01 -.12 -.01 .15 .75 .74 .45 .74 .73 .15 .75 .88 .87 .83 .40 .57 .28 .70 -.01 .74 .69 .74 .32 .79 -.03 .48 .13 .20 .84 .43 .44 .05 1 .74 .77 .26 -.06 -.20 .57 .64 .48 .44 .80 .79 .06 .82 .76 .80 .79 .39 .57 .53 .72 .25 .70 .71 .68 .41 .75 .21 .51 .07 .21 .74 .28 .33 .01 .74 1 .68 .32 .04 -.18 .43 .71 .55 .43 .71 .59 .03 .78 .76 .83 .71 .57 .60 .36 .54 -.20 .61 .56 .53 .46 .74 .06 .40 .22 .13 .78 .25 .34 -.04 .77 .68 1 .34 .02 -.27 .42 .12 .03 .16 .15 .01 -.07 .15 .20 .27 .21 .40 .35 .25 .15 -.21 .43 .08 .25 .32 .23 .39 .42 .43 .34 .28 .16 .39 .01 .26 .32 .34 1 .22 .23 .19 .03 -.13 -.26 -.14 -.07 -.31 -.05 -.18 .06 -.10 .24 .03 .13 -.10 -.18 -.16 .01 -.17 .27 -.07 .37 .28 .26 -.17 .00 -.11 .06 -.12 -.06 .04 .02 .22 1 .28 -.03 -.04 -.34 .03 -.22 -.15 -.11 -.16 -.19 -.18 -.15 .06 .02 .04 -.04 -.02 -.04 .04 .04 .14 -.15 .15 .02 .24 .13 -.22 .12 .18 -.01 -.20 -.18 -.27 .23 .28 1 .00 .55 .38 .35 .38 .48 .26 .35 .40 .52 .50 .22 .29 .19 .57 .17 .48 .41 .55 .21 .45 -.08 .41 .31 .22 .59 .02 .18 .15 .57 .43 .42 .19 -.03 .00 1 Figure 4: Correlations between probing and target task performances. Each cell contains the Spearman correlation between probing and target tasks performances across training on different intermediate tasks and random restarts. 5246 CB COPA WSC RTE MultiRC WiC BoolQ CSenseQA CosmosQA ReCoRD EP-POS EP-NER EP-SRL EP-Coref EP-Const EP-SPR1 EP-SPR2 EP-DPR EP-Rel EP-UD SE-SentLen SE-WC SE-TreeDepth SE-TopConst SE-BShift SE-Tense SE-SubjNum SE-ObjNum SE-SOMO SE-CoordInv AJ-CoLA AJ-Wh AJ-Def AJ-Coord AJ-EOS CB COPA WSC RTE MultiRC WiC BoolQ CSenseQA CosmosQA ReCoRD EP-POS EP-NER EP-SRL EP-Coref EP-Const EP-SPR1 EP-SPR2 EP-DPR EP-Rel EP-UD SE-SentLen SE-WC SE-TreeDepth SE-TopConst SE-BShift SE-Tense SE-SubjNum SE-ObjNum SE-SOMO SE-CoordInv AJ-CoLA AJ-Wh AJ-Def AJ-Coord AJ-EOS 1 .31 .37 .58 .73 -.04 .74 .72 .82 .69 .59 .50 .55 .71 .08 .70 .72 .66 .39 .74 .02 .49 .29 .12 .63 .24 .38 -.12 .75 .64 .71 .12 .03 -.04 .55 .31 1 .25 .67 .57 .17 .52 .66 .55 .59 .02 .42 -.10 .46 .13 .31 .41 .40 .22 .53 -.26 .15 -.08 .05 .58 .36 .25 .07 .74 .48 .55 .03 -.13 -.34 .38 .37 .25 1 .36 .37 .16 .40 .45 .39 .42 .40 .38 .26 .44 .01 .49 .51 .63 .34 .52 .03 .34 .19 .31 .54 .47 .42 .21 .45 .44 .43 .16 -.26 .03 .35 .58 .67 .36 1 .86 .27 .83 .85 .68 .67 .31 .60 .34 .71 .25 .58 .66 .51 .32 .71 .09 .40 .01 .14 .59 .29 .25 -.04 .74 .80 .71 .15 -.14 -.22 .38 .73 .57 .37 .86 1 .16 .79 .76 .67 .66 .27 .51 .40 .78 .40 .55 .74 .57 .28 .71 .07 .46 .10 .11 .56 .23 .26 -.02 .73 .79 .59 .01 -.07 -.15 .48 -.04 .17 .16 .27 .16 1 .05 .10 -.05 .01 -.29 -.00 -.04 .02 .18 .04 .04 .14 -.09 .11 -.03 .05 -.10 .10 .15 -.21 -.09 .35 .15 .06 .03 -.07 -.31 -.11 .26 .74 .52 .40 .83 .79 .05 1 .79 .80 .76 .47 .59 .46 .74 .17 .70 .69 .58 .29 .76 .04 .41 .05 .11 .68 .37 .28 -.13 .75 .82 .78 .15 -.05 -.16 .35 .72 .66 .45 .85 .76 .10 .79 1 .85 .83 .40 .61 .36 .74 .06 .77 .68 .69 .41 .80 .06 .48 .07 .27 .72 .47 .39 -.15 .88 .76 .76 .20 -.18 -.19 .40 .82 .55 .39 .68 .67 -.05 .80 .85 1 .86 .59 .63 .51 .70 -.03 .76 .66 .74 .45 .81 .01 .54 .14 .24 .84 .33 .36 -.15 .87 .80 .83 .27 .06 -.18 .52 .69 .59 .42 .67 .66 .01 .76 .83 .86 1 .44 .66 .43 .71 .08 .77 .69 .73 .45 .84 .09 .46 .21 .31 .76 .39 .42 -.11 .83 .79 .71 .21 -.10 -.15 .50 .59 .02 .40 .31 .27 -.29 .47 .40 .59 .44 1 .67 .56 .45 -.28 .55 .48 .43 .47 .50 .35 .48 .45 .20 .42 .28 .34 -.35 .40 .39 .57 .40 .24 .06 .22 .50 .42 .38 .60 .51 -.00 .59 .61 .63 .66 .67 1 .58 .63 .05 .52 .49 .51 .51 .61 .34 .46 .47 .24 .47 .28 .37 -.20 .57 .57 .60 .35 .03 .02 .29 .55 -.10 .26 .34 .40 -.04 .46 .36 .51 .43 .56 .58 1 .45 .13 .39 .49 .48 .36 .53 .41 .51 .39 .25 .36 -.13 .06 -.11 .28 .53 .36 .25 .13 .04 .19 .71 .46 .44 .71 .78 .02 .74 .74 .70 .71 .45 .63 .45 1 .35 .70 .69 .66 .33 .59 .19 .57 .29 .27 .59 .37 .37 -.16 .70 .72 .54 .15 -.10 -.04 .57 .08 .13 .01 .25 .40 .18 .17 .06 -.03 .08 -.28 .05 .13 .35 1 -.06 .10 .18 -.15 .03 -.18 .10 -.11 -.12 .01 -.11 -.09 -.01 -.01 .25 -.20 -.21 -.18 -.02 .17 .70 .31 .49 .58 .55 .04 .70 .77 .76 .77 .55 .52 .39 .70 -.06 1 .61 .68 .44 .75 .17 .52 .27 .41 .61 .55 .54 -.16 .74 .70 .61 .43 -.16 -.04 .48 .72 .41 .51 .66 .74 .04 .69 .68 .66 .69 .48 .49 .49 .69 .10 .61 1 .74 .45 .80 .21 .48 .27 .31 .62 .38 .30 .01 .69 .71 .56 .08 .01 .04 .41 .66 .40 .63 .51 .57 .14 .58 .69 .74 .73 .43 .51 .48 .66 .18 .68 .74 1 .29 .70 -.01 .44 .20 .53 .76 .29 .30 .06 .74 .68 .53 .25 -.17 .04 .55 .39 .22 .34 .32 .28 -.09 .29 .41 .45 .45 .47 .51 .36 .33 -.15 .44 .45 .29 1 .62 .36 .53 .48 .32 .35 .40 .65 -.26 .32 .41 .46 .32 .27 .14 .21 .74 .53 .52 .71 .71 .11 .76 .80 .81 .84 .50 .61 .53 .59 .03 .75 .80 .70 .62 1 .11 .54 .27 .33 .74 .43 .45 -.07 .79 .75 .74 .23 -.07 -.15 .45 .02 -.26 .03 .09 .07 -.03 .04 .06 .01 .09 .35 .34 .41 .19 -.18 .17 .21 -.01 .36 .11 1 .44 .58 .26 -.03 -.05 .04 -.09 -.03 .21 .06 .39 .37 .15 -.08 .49 .15 .34 .40 .46 .05 .41 .48 .54 .46 .48 .46 .51 .57 .10 .52 .48 .44 .53 .54 .44 1 .38 .19 .49 .28 .41 -.22 .48 .51 .40 .42 .28 .02 .41 .29 -.08 .19 .01 .10 -.10 .05 .07 .14 .21 .45 .47 .39 .29 -.11 .27 .27 .20 .48 .27 .58 .38 1 .39 .06 .02 .34 .08 .13 .07 .22 .43 .26 .24 .31 .12 .05 .31 .14 .11 .10 .11 .27 .24 .31 .20 .24 .25 .27 -.12 .41 .31 .53 .32 .33 .26 .19 .39 1 .18 .12 .20 .05 .20 .21 .13 .34 -.17 .13 .22 .63 .58 .54 .59 .56 .15 .68 .72 .84 .76 .42 .47 .36 .59 .01 .61 .62 .76 .35 .74 -.03 .49 .06 .18 1 .29 .26 -.01 .84 .74 .78 .28 .00 -.22 .59 .24 .36 .47 .29 .23 -.21 .37 .47 .33 .39 .28 .28 -.13 .37 -.11 .55 .38 .29 .40 .43 -.05 .28 .02 .12 .29 1 .66 -.18 .43 .28 .25 .16 -.11 .12 .02 .38 .25 .42 .25 .26 -.09 .28 .39 .36 .42 .34 .37 .06 .37 -.09 .54 .30 .30 .65 .45 .04 .41 .34 .20 .26 .66 1 -.04 .44 .33 .34 .39 .06 .18 .18 -.12 .07 .21 -.04 -.02 .35 -.13 -.15 -.15 -.11 -.35 -.20 -.11 -.16 -.01 -.16 .01 .06 -.26 -.07 -.09 -.22 .08 .05 -.01 -.18 -.04 1 .05 .01 -.04 .01 -.12 -.01 .15 .75 .74 .45 .74 .73 .15 .75 .88 .87 .83 .40 .57 .28 .70 -.01 .74 .69 .74 .32 .79 -.03 .48 .13 .20 .84 .43 .44 .05 1 .74 .77 .26 -.06 -.20 .57 .64 .48 .44 .80 .79 .06 .82 .76 .80 .79 .39 .57 .53 .72 .25 .70 .71 .68 .41 .75 .21 .51 .07 .21 .74 .28 .33 .01 .74 1 .68 .32 .04 -.18 .43 .71 .55 .43 .71 .59 .03 .78 .76 .83 .71 .57 .60 .36 .54 -.20 .61 .56 .53 .46 .74 .06 .40 .22 .13 .78 .25 .34 -.04 .77 .68 1 .34 .02 -.27 .42 .12 .03 .16 .15 .01 -.07 .15 .20 .27 .21 .40 .35 .25 .15 -.21 .43 .08 .25 .32 .23 .39 .42 .43 .34 .28 .16 .39 .01 .26 .32 .34 1 .22 .23 .19 .03 -.13 -.26 -.14 -.07 -.31 -.05 -.18 .06 -.10 .24 .03 .13 -.10 -.18 -.16 .01 -.17 .27 -.07 .37 .28 .26 -.17 .00 -.11 .06 -.12 -.06 .04 .02 .22 1 .28 -.03 -.04 -.34 .03 -.22 -.15 -.11 -.16 -.19 -.18 -.15 .06 .02 .04 -.04 -.02 -.04 .04 .04 .14 -.15 .15 .02 .24 .13 -.22 .12 .18 -.01 -.20 -.18 -.27 .23 .28 1 .00 .55 .38 .35 .38 .48 .26 .35 .40 .52 .50 .22 .29 .19 .57 .17 .48 .41 .55 .21 .45 -.08 .41 .31 .22 .59 .02 .18 .15 .57 .43 .42 .19 -.03 .00 1 Figure 5: Correlations between probing and target task performances. Each cell contains the Pearson correlation between probing and target tasks performances across training on different intermediate tasks and random restarts. 5247 Figure 6: Results of experiments on impact of intermediate task data size on downstream target task performance. For each subfigure, we finetune RoBERTa over a variety of dataset size (sampled randomly from the dataset). We report the macro-average of each target task’s performance metrics after finetuning on each dataset size split.
2020
467
Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview Deven Shah Stony Brook University [email protected] H. Andrew Schwartz Stony Brook University [email protected] Dirk Hovy Bocconi University [email protected] Abstract An increasing number of natural language processing papers address the effect of bias on predictions, introducing mitigation techniques at different parts of the standard NLP pipeline (data and models). However, these works have been conducted individually, without a unifying framework to organize efforts within the field. This situation leads to repetitive approaches, and focuses overly on bias symptoms/effects, rather than on their origins, which could limit the development of effective countermeasures. In this paper, we propose a unifying predictive bias framework for NLP. We summarize the NLP literature and suggest general mathematical definitions of predictive bias. We differentiate two consequences of bias: outcome disparities and error disparities, as well as four potential origins of biases: label bias, selection bias, model overamplification, and semantic bias. Our framework serves as an overview of predictive bias in NLP, integrating existing work into a single structure, and providing a conceptual baseline for improved frameworks. 1 Introduction Predictive models in NLP are sensitive to a variety of (often unintended) biases throughout the development process. As a result, fitted models do not generalize well, incurring performance and reliability losses on unseen data. They also have socially undesirable effects by systematically underserving or mispredicting certain user groups. The general phenomenon of biased predictive models in NLP is not recent. The community has long worked on the domain adaptation problem (Jiang and Zhai, 2007; Daume III, 2007): models fit on newswire data do not perform well on social media and other text types. This problem arises from the tendency of statistical models to pick up on non-generalizable signals during the training process. In the case of domains, these non-generalizations are words, phrases, or senses that occur in one text type, but not another. However, this kind of variation is not just restricted to text domains: it is a fundamental property of human-generated language: we talk differently than our parents or people from a different part of our country, etc. (Pennebaker and Stone, 2003; Eisenstein et al., 2010; Kern et al., 2016). In other words, language reflects the diverse demographics, backgrounds, and personalities of the people who use it. While these differences are often subtle, they are distinct and cumulative (Trudgill, 2000; Kern et al., 2016; Pennebaker, 2011). Similar to text domains, this variation can lead models to pick up on patterns that do not generalize to other author-demographics, or to rely on undesirable word-demographic relationships. Bias may be an inherent property of any NLP system (and broadly any statistical model), but this is not per se negative. In essence, biases are priors that inform our decisions (a dialogue system designed for elders might work differently than one for teenagers). Still, undetected and unaddressed, biases can lead to negative consequences: There are aggregate effects for demographic groups, which combine to produce predictive bias. I.e., the label distribution of a predictive model reflects a human attribute in a way that diverges from a theoretically defined “ideal distribution.” For example, a Part Of Speech (POS) tagger reflecting how an older generation uses words (Hovy and Søgaard, 2015) diverges from the population as a whole. A variety of papers have begun to address countermeasures for predictive biases (Li et al., 2018; Elazar and Goldberg, 2018; Coavoux et al., 2018).1 Each identifies a specific bias and counter1An even more extensive body of work on fairness exists as part of the FAT* conferences, which goes beyond the scope features Xsource features Xtarget label bias Biased annotations, interaction, or latent bias from past classifications. over-amplification The model discriminates on a given human attribute beyond its source base-rate. predict Source Population (Model Side) Target Population (Application Side) biased outcomes Ŷtarget fit semantic bias Non-ideal associations between attributed lexeme (e.g. gendered pronouns) and non-attributed lexeme (e.g. occupation). features 𝜃embedding outcome disparity The distribution of outcomes, given attribute A, is dissimilar than the ideal distribution: Q(Ŷt|A) ≁ P(Yt|A) error disparity The distribution of error (ϵ) over at least two different values of an attribute (A) are unequal: Q(ϵt|Ai) ≁ Q(ϵt|Aj) Embedding Corpus (Pre-trained Side) outcomes Ysource origin consequence selection bias The sample of observations themselves are not representative of the application population. Example Dep, is_older outcome ideal: 0, 0 : .30 0, 1 : .35 1, 0 : .20 1, 1 : .15 From predicte 0, 0 : .25 0, 1 : .40 1, 0 : .25 1, 1 : .20 error disparity The distribution of error (ϵ) is inconsistent over different values of an attribute (A): Q(ϵt|A) ≁ Uniform Figure 1: The Predictive Bias Framework for NLP: Depiction of where bias may originate within a standard supervised NLP pipeline. Evidence of bias is seen in ˆy via outcome disparity and error disparity. measure on their terms, but it is often not explicitly clear which bias is addressed, where it originates, or how it generalizes. There are multiple sources from which bias can arise within the predictive pipeline, and methods proposed for one specific bias often do not apply to another. As a consequence, much work has focused on bias effects and symptoms rather than their origins. While it is essential to address the effects of bias, it can leave the fundamental origin unchanged (Gonen and Goldberg, 2019), requiring researchers to rediscover the issue over and over. The “bias” discussed in one paper may, therefore, be quite different than that in another.2 A shared definition and framework of predictive bias can unify these efforts, provide a common terminology, help to identify underlying causes, and allow coordination of countermeasures (Sun et al., 2019). However, such a general framework had yet to be proposed within the NLP community. To address these problems, we suggest a joint conceptual framework, depicted in Figure 1, outlining and relating the different origins of bias. We base our framework on an extensive survey of the relevant NLP literature, informed by seof this biased-focused paper. Note also that while bias is an ethical issue and contributes to many papers in the ethics in NLP area, the two should not be conflated: ethics covers more than bias. 2Quantitative social science offers a background for bias (Berk, 1983). However, NLP differs fundamentally in analytic goals (namely out-of-sample prediction for NLP versus parameter inference for hypothesis testing in social science) that bring about NLP-specific situations: biases in word embeddings, annotator labels, or predicting over-amplified demographics. lected works in social science and adjacent fields. We identify four distinct sources of bias: selection bias, label bias, model overamplification, and semantic bias. We can express all of these as differences between (a) a “true” or intended distribution (e.g., over users, labels, or outcomes), and (b) the distribution used or produced by the model. These cases arise at specific points within a typical predictive pipeline: embeddings, source data, labels (human annotators), models, and target data. We provide quantitative definitions of predictive bias in this framework intended to make it easier to: (a) identify biases (because they can be classified), (b) develop countermeasures (because the underlying problem is known), and (c) compare biases and countermeasures across papers. We hope this paper will help researchers spot, compare, and address bias in all its various forms. Contributions Our primary contributions include: (1) a conceptual framework for identifying and quantifying predictive bias and its origins within a standard NLP pipeline, (2) a survey of biases identified in NLP models, and (3) a survey of methods for countering bias in NLP organized within our conceptual framework. 2 Definition - Two Types of Disparities Our definition of predictive bias in NLP builds on its definition within the literature on standardized testing (i.e., SAT, GRE, etc.) Specifically, Swinton (1981) states: By “predictive bias," we refer to a situation in which a [predictive model] is used to predict a specific criterion for a particular population, and is found to give systematically different predictions for subgroups of this population who are in fact identical on that specific criterion.3 We generalize Swinton’s definition in two ways: First, to align notation with standard supervised modeling, we say there are both Y (a random variable representing the “true” values of an outcome) and ˆY (a random variable representing the predictions. Next, we allow the concept to apply to differences associated with continuously-valued human attributes rather than simply discrete subgroups of people.4 Below, we define two types of measurable systematic differences (i.e. “disparities”): (1) a systematic difference between Y and ˆY ( outcome disparity) and (2) a difference in error (ϵ = |Y −ˆY ) error disparity, both as a function of a given human attribute, A. Outcome disparity. Formally, we say an outcome disparity exists for outcome, Y , a domain D (with values source or target), and with respect to attribute, A, when the distribution of the predicted outcome (Q( ˆYD|AD)) is dissimilar to a given theoretical ideal distribution (P(YD|AD)): Q( ˆYD|AD) ≁P(YD|AD) The ideal distribution is specific to the target application. Our framework allows researchers to use their own criteria to determine this distribution. However, the task of doing so may be nontrivial. First, the current distribution within a population may not be accessible. Even when it is, it may not be what most consider the ideal distribution (e.g., the distribution of gender in computer science and the associated disparity of NLP models attributing male pronouns to computer scientists more frequently (Hovy, 2015)). Second, it may be difficult to come to an agreed-upon ideal distribution from a moral or ethical perspective. In such a case, it may be helpful to use an ideal “direction,” rather than specifying a specific distribution (e.g., moving toward a uniform distribution of 3We have substituted “test" with “predictive model”. 4“Attributes” include both continuously valued user-level variables, like age, personality on a 7-point scale, etc. (also referred to as “dimensional” or “factors”), and discrete categories like membership in an ethnic group. Psychological research suggests that people are better represented by continuously valued scores, where possible, than discrete categories (Baumeister et al., 2007; Widiger and Samuel, 2005; McCrae and Costa Jr., 1989). In NLP, Lynn et al. (2017) shows benefits from treating user-level attributes as continuously when integrating into NLP models. pronouns associated with computer science). Our framework should enable its users to apply evolving standards and norms across NLP’s many application contexts. A prototypical example of outcome disparity is gender disparity in image captions. Zhao et al. (2017) and Hendricks et al. (2018) demonstrate a systematic difference with respect to gender in the outcome of the model, ˆY even when taking the source distribution as an ideal target distribution: Q( ˆYtarget|gender) ≁Q(Ytarget|gender) ∼ Q(Ysource|gender). As a result, captions overpredict females in images with ovens and males in images with snowboards. Error disparity. We say there is an error disparity when model predictions have larger error for individuals with a given user attribute (or range of attributes in the case of continuously-valued attributes). Formally, the error of a predicted distribution is ϵD = |YD −ˆYD| If there is a difference in ϵD over at least two different values of an attribute A (assuming they have been adequately sampled to establish a distribution of ϵD) then there is an error disparity: Q(ϵD|Ai) ≁Q(ϵD|Aj) In other words, the error for one group might systematically differ from the error for another group, e.g., the error for green people differs from the error for blue people. Under unbiased conditions, the difference would be equal. This formulation allows us to capture both the discrete case (arguably more common in NLP, for example, in POS tagging) and the continuous case (for example, in age or income prediction). We propose that if either of these two disparities exist in our target application, then there is a predictive bias. Note that predictive bias is then a property of a model given a specific application, rather than merely an intrinsic property of the model by itself. This definition mirrors predictive bias in standardized testing (Swinton, 1981): “a [predictive model] cannot be called biased without reference to a specific prediction situation; thus, the same instrument may be biased in one application, but unbiased in another." A prototypical example of error disparity is the “Wall Street Journal Effect” – a systematic difference in error as a function of demographics, first documented by Hovy and Søgaard (2015). In theory, POS tagging errors increase the further an author’s demographic attributes differ from the average WSJ author of the 1980s and 1990s (on whom many POS taggers were trained – a selection bias, discussed next). Work by Sap et al. (2019) shows error disparity from a different origin, namely unfairness in hate speech detection. They find that annotators for hate speech on social media make more mistakes on posts of black individuals. Contrary to the case above, the disparity is not necessarily due to a difference between author and annotator population (a selection bias). Instead, the label disparity stems from annotators failing to account for the authors’ racial background and sociolinguistic norms. Source and Target Populations. An important assumption of our framework is that disparities are dependent on the population for which the model will be applied. This assumption is reflected in distinguishing a separate “target population” from the “source population” on which the model was trained. In cross-validation over random folds, models are trained and tested over the same population. However, in practice, models are often applied to novel data that may originate from a different population of people. In other words, the disparity may exist as a model property for one application, but not for another. Quantifying disparity. Given the definitions of the two types of disparities, we can quantify bias with well-established measures of distributional divergence or deviance. Specifically, we suggest the Log-likelihood ratio as a central metric: D(Y, ˆY |A) = 2(log(p(Y |A)) −log(p( ˆY |A))) where p(Y |A) is the specified ideal distribution (either derived empirically or theoretically) and p( ˆY |A) is the distribution within the data. For error disparity the ideal distribution is always the Uniform and ˆY is replaced with the error. KL divergence (DKL[P( ˆY |A)P(Y |A)]) can be used as a secondary, more scalable alternative. Our measure above attempts to synthesize metrics others have used in works focused on specific biases. For example, the definition of outcome disparity is analogous to that used for semantic bias. Kurita et al. (2019) quantify bias in embeddings as the difference in log probability score when replacing words suspected to carry semantic differences (‘he’, ‘she’) with a mask: log(P([Mask] = “⟨PRON⟩”|[Mask] is “⟨NOUN⟩”)) − log(P([Mask] = “⟨PRON⟩”|[Mask] is [Mask]))) ⟨NOUN⟩is replaced with a specific noun to check for semantic bias (e.g., an occupation), and ⟨PRON⟩is an associated demographic word (e.g., “he” or “she”). 3 Four Origins of Bias But what leads to an outcome disparity or error disparity? We identify four points within the standard supervised NLP pipeline where bias may originate: (1) the training labels (label bias), (2) the samples used as observations — for training or testing (selection bias), (3) the representation of data (semantic bias), or (4) due to the fit method itself (overamplification). Label Bias Label bias emerges when the distribution of the dependent variable in the data source diverges substantially from the ideal distribution: Q(Ys|As) ≁P(Ys|As) Here, the labels themselves are erroneous concerning the demographic attribute of interest (as compared to the source distribution). Sometimes, this bias is due to a non-representative group of annotators (Joseph et al., 2017). In other cases, it may be due to a lack of domain expertise (Plank et al., 2014), or due to preconceived notions and stereotypes held by the annotators (Sap et al., 2019). Selection bias. Selection bias emerges due to non-representative observations. I.e., when the users generating the training (source) observations differ from the user distribution of the target, where the model will be applied. Selection bias (sometimes also referred to as sample bias) has long been a concern in the social sciences. At this point, testing for such a bias is a fundamental consideration in study design (Berk, 1983; Culotta, 2014). Non-representative data is the origin for selection bias. Within NLP, some of the first works to note demographic biases were due to a selection bias (Hovy and Søgaard, 2015; Jørgensen et al., 2015). A prominent example is the so-called “Wall Street Journal effect”, where syntactic parsers and part-of-speech taggers are most accurate over language written by middleaged white men. The effect occurs because this group happened to be the predominant authors’ demographics of the WSJ articles, which are traditionally used to train syntactic models (Garimella et al., 2019). The same effect was reported for language identification difficulties for African-American Vernacular English (Blodgett and O’Connor, 2017; Jurgens et al., 2017). The predicted output is dissimilar from the ideal distribution, leading, for example, to lower accuracy for a given demographic, since the source did not reflect the ideal distribution. We say that the distribution of human attribute, A, within the source data, s, is dissimilar to the distribution of A within the target data, t: Q(As) ≁P(At) Selection bias has several peculiarities. First, it is dependent on the ideal distribution of the target population, so a model may have selection bias for one application (and its associated target population), but not for another. Also, consider that either the source features (Xs) or source labels (Ys) may be non-representative. In many situations, the distributions for the features and labels are the same. However, there are some cases where they diverge. For example, when using features from age-biased tweets, but labels from non-biased census surveys. In such cases, we need to take multiple analysis levels into account: corrections can be applied to user features as they are aggregated to communities (Almodaresi et al., 2017). The consequences could be both outcome and error disparity. One of the challenges in addressing selection bias is that we can not know a priori what sort of (demographic) attribute will be important to control. Age and gender are well-studied, but others might be less obvious. We might someday realize that a formerly innocuous attribute (say, handedness) turns out to be relevant for selection biases. This problem is known as The Known and Unknown Unknowns. As we know, there are known knowns: there are things we know we know. We also know there are known unknowns: that is to say, we know there are some things we do not know. But there are also unknown unknowns: the ones we don’t know we don’t know. — Donald Rumsfeld ANNOTATION incorrect correct SAMPLE notrepr. selection bias, label bias selection bias repr. label bias no bias Table 1: Interaction between selection and label bias under different conditions for sample representativeness and annotation quality We will see later how better documentation can help future researchers address this problem. Overamplification. Another source of bias can occur even when there is no label or selection bias. In overamplification, a model relies on a small difference between human attributes with respect to the objective (even an acceptable difference matching the ideal distribution), but amplifies this difference to be much more pronounced in the predicted outcomes. The origins of overamplification are during learning itself. The model learns to pick up on imperfect evidence for the outcome, which brings out the bias. Formally, in overamplification the predicted distribution (Q( ˆYs|As)) is dissimilar to the source training distribution (Q(Ys|As)) with respect to a human attribute, A. The predicted distribution is therefore also dissimilar to the target ideal distribution: Q( ˆYs|As) ≁Q(Ys|As) ∼P(Yt|At) For example, Yatskar et al. (2016) found that in the imSitu image captioning data set, 58% of captions involving a person in a kitchen mention women. However, standard models trained on such data end up predicting people depicted in kitchens as women 63% of the time (Zhao et al., 2017). In other words, an error in generating a gender reference within the text (e.g., “A [woman ∥man] standing next to a counter-top”) males an incorrect female reference much more common. The occurrence of overamplification in the absence of other biases is an important motivation for countermeasures. It does not require bias on the part of the annotator, data collector, or even the programmer/data analyst (though it can escalate existing biases and the models’ statistical discrimination along a demographic dimension). In particular, it extends countermeasures beyond the point some authors have made, that they are merely cosmetic and do not address the underlying cause: biased language in society (Gonen and Goldberg, 2019). Semantic bias. Embeddings (i.e., vectors representing the meaning of words or phrases) have become a mainstay of modern NLP, providing more flexible representations that feed both traditional and deep learning models. However, these representations often contain unintended or undesirable associations and societal stereotypes (e.g., connecting medical doctors more frequently to male pronouns than female pronouns, see Bolukbasi et al. (2016); Caliskan et al. (2017)). We adopt the term used for this phenomenon by others, “semantic bias”. Formally, we attribute semantic bias to the parameters of the embedding model (θemb). Semantic bias is a unique case since it indirectly affects both outcome disparity and error disparity by creating other biases, such as overamplification (Yatskar et al., 2016; Zhao et al., 2017) or diverging words associations within embeddings or language models (Bolukbasi et al., 2016; Rudinger et al., 2018). However, we distinguish it from the other biases, since the population does not have to be people, but rather words in contexts that yield non-ideal associations. For example, the issue is not (only) that a particular gender authors more of the training data for the embeddings. Instead, that gendered pronouns are mentioned alongside occupations according to a non-ideal distribution (e.g., texts talk more about male doctors and female nurses than vice versa). Furthermore, pretrained embeddings are often used without access to the original data (or the resources to process it). We thus suggest that embedding models themselves are a distinct source of bias within NLP predictive pipelines. They have consequently received increased attention, with dedicated sessions at NAACL and ACL 2019. As an example, Kurita et al. (2019) quantify human-like bias in BERT. Using the Gender Pronoun Resolution (GPR) task, they find that, even after balancing the data set, the model predicts no female pronouns with high probability. Semantic bias is also of broad interest to the social sciences as a diagnostic tool (see Section A). However, their inclusion in our framework is not for reasons of social scientific diagnostics, but rather to guide mindful researchers where to look for problems. Multiple Biases. Biases occur not only in isolation, but they also compound to increase their effects. Label and selection bias can – and often do – interact, so it can be challenging to distinguish them. Table 1 shows the different conditions to understand the boundaries of one or another. Consider the case where a researcher chooses to balance a sentiment data set for a user attribute, e.g., age. This decision can directly impact the label distribution of the target variable. E.g., because the positive label is over-represented in a minority age group. Models learn to exploit this confounding correlation between age and label prevalence and magnify it even more. The resulting model may be useless, as it only captures the distribution in the synthetic data sample. We see this situation in early work on using social media data to predict mental health conditions. Models to distinguish PTSD from depression turned out to mainly capture the differences in user age and gender, rather than language reflecting the actual conditions (Preo¸tiuc-Pietro et al., 2015). 3.1 Other Bias Definitions and Frameworks While this is the first attempt at a comprehensive conceptual framework for bias in NLP, alternative frameworks exist, both in other fields and based on more qualitative definitions. Friedler et al. (2016) define bias as unfairness in algorithms. They specify the idea of a “construct” space, which captures the latent features in the data that help predict the right outcomes. They suggest that finding those latent variables would also enable us to produce the right outcomes. Hovy and Spruit (2016) take a broader scope on bias based on ethics in new technologies. They list three qualitative sources (data, modeling, and research design), and suggest three corresponding types of biases: demographic bias, overgeneralization, and topic exposure. Suresh and Guttag (2019) propose a qualitative framework for bias in machine learning, defining bias as a “potential harmful property of the data”. They categorize bias into historical bias, representation bias, measurement bias, and evaluation bias. Glymour and Herington (2019) classify algorithmic bias, in general, into four different categories, depending on the causal conditional dependencies to which it is sensitive: procedural bias, outcome bias, behavior-relative error bias, and score-relative error bias. CorbettDavies and Goel (2018) propose statistical limitations of the three prominent definitions of fairness (anti-classification, classification parity, and calibration), enabling researchers to develop fairer machine learning algorithms. Our framework focuses on NLP, but it follows Glymour and Herington (2019) in providing probabilistic based definitions of bias. It incorporates and formalizes the above to varying degrees. In social sciences, bias definitions often relate to the ability to test causal hypotheses. Hernán et al. (2004) propose a common structure for various types of selection bias. They define bias as the difference between a variable and the outcome, and the causal effect of a variable on the outcome. E.g., when the causal risk ratio (CRR) differs from associational risk ratio (ARR). Similarly, Baker et al. (2013) define bias as uncontrolled covariates or “disturbing variables” that are related to measures of interest. Others provide definitions restricted to particular applications. For example, Caliskan et al. (2017) propose the Word-Embedding Association Test (WEAT). It quantifies semantic bias based on the distance between words with demographic associations in the embedding space. The previously mentioned work by Kurita et al. (2019) and Sweeney and Najafian (2019) extend such measures. Similarly, Romanov et al. (2019) define bias based on the correlation between the embeddings of human attributes with the difference in the True Positive rates between human traits. This approach is reflective of an error disparity. Our framework encompasses bias-related work in the social sciences. Please see the supplement in A.1 for a brief overview. 4 Countermeasures We group proposed countermeasures based on the origin(s) on which they act. Label Bias. There are several ways to address label bias, typically by controlling for biases of the annotators (Pavlick et al., 2014). Disagreement between annotators has long been an active research area in NLP, with various approaches to measure and quantify disagreement through interannotator agreement (IAA) scores to remove outliers (Artstein and Poesio, 2008). Lately, there has been more of an emphasis on embracing variation through the use of Bayesian annotation models (Hovy et al., 2013; Passonneau and Carpenter, 2014; Paun et al., 2018). These models arrive at a much less biased estimate for the final label than majority voting, by attaching confidence scores to each annotator, and reweighting them through that method. Other approaches have explored harnessing the inherent disagreement among annotators to guide the training process (Plank et al., 2014). By weighting updates by the amount of disagreement on the labels, this method prevents bias towards any one label. The weighted updates act as a regularizer during training, which might also help prevent overamplification. If annotators behave in predictable ways to produce artifacts (i.e., always add “not” to form a contradiction), we can train a model on such biased features and use it in ensemble learning (Clark et al., 2019). Hays et al. (2015) attempt to make Web studies equivalent to representative focus group panels. They give an overview of probabilistic and non-probabilistic approaches to generate the Internet panels that contribute to the data generation. Along with the six demographic attributes (age, gender, race/ethnicity, education, marital status, and income), they use poststratification to reduce the bias (some of these methods cross into addressing selection bias). Selection bias. The primary source for selection bias is the mismatch between the sample distribution and the ideal distribution. Consequently, any countermeasures need to re-align the two distributions to minimize this mismatch. The easiest way to address the mismatch is to re-stratify the data to more closely match the ideal distribution. However, this often involves downsampling an overly represented class, which reduces the number of available instances. Mohammady and Culotta (2014) use a stratified sampling technique to reduce the selection bias in the data. Almeida et al. (2015) use demographic user attributes, including age, gender, and social status, to predict the election results in six different cities of Brazil. They use stratified sampling on all the resulting groups to reduce selection bias. Rather than re-sampling, others use reweighting or poststratifying to reduce selection bias. Culotta (2014) estimates county-level health statistics based on social media data. He shows we can stratify based on external socio-demographic data about a community’s composition (e.g., gender and race). Park et al. (2006) estimate statewise public opinions using the National Surveys corpus. To reduce bias, they use various socioeconomic and demographic attributes (state of residence, sex, ethnicity, age, and education level) in a multilevel logistic regression. Choy et al. (2011) and Choy et al. (2012) also use race and gender as features for reweighting in predicting the results of the Singapore and US presidential elections. Baker et al. (2013) study how selection bias manifests in inferences for a larger population, and how to avoid it. Apart from the basic demographic attributes, they also consider attitudinal and behavioral attributes for the task. They suggest using reweighting, ranking reweighting or propensity score adjustment, and sample-matching techniques to reduce selection bias. Others have suggested combinations of these approaches. Hernán et al. (2004), propose Directed Acyclic graphs for various heterogeneous types of selection bias, and suggest using stratified sampling, regression adjustment, or inverse probability weighting to avoid the bias in the data. Zagheni and Weber (2015), study the use of Internet Data for demographic studies and propose two approaches to reduce the selection bias in their task. If the ground truth is available, they adjust selection bias based on the calibration of a stochastic microsimulation. If unavailable, they suggest using a difference-in-differences technique to find out trends on the Web. Zmigrod et al. (2019) show that gender-based selection bias could be addressed by data augmentation, i.e., by adding slightly altered examples to the data. This addition addresses selection bias originating in the features (Xsource), so that the model is fit on a more gender-representative sample. Their approach is similar to the reweighting of poll data based on demographics, which can be applied more directly to tweet-based population surveillance (see our last case study, A.2). Li et al. (2018) introduce a model-based countermeasure. They use an adversarial multitasklearning setup to model demographic attributes as auxiliary tasks explicitly. By reversing the gradient for those tasks during backpropagation, they effectively force the model to ignore confounding signals associated with the demographic attributes. Apart from improving overall performance across demographics, they show that it also protects user privacy. The findings from Elazar and Goldberg (2018), however, suggest that even with adversarial training, internal representations still retain traces of demographic information. Overamplification. In its simplest form, overamplification of inherent bias by the model can be corrected by downweighting the biased instances in the sample, to discourage the model from exaggerating the effects. A common approach involves using synthetic matched distributions. To address gender bias in neural network approaches to coreference resolution Rudinger et al. (2018); Zhao et al. (2018) suggest matching the label distributions in the data, and training the model on the new data set. They swap male and female instances and merge them with the original data set for training. In the same vein, Webster et al. (2018) provide a genderbalanced training corpus for coreference resolution. Based on the first two corpora, Stanovsky et al. (2019) introduce a bias evaluation for machine translation, showing that most systems overamplify gender bias (see also Prates et al. (2018)). Hovy et al. (2020) show that this overamplification consistently makes translations sound older and more male than the original authors. Several authors have suggested it is essential for language to be understood within the context of the author and their social environment Jurgens (2013); Danescu-Niculescu-Mizil et al. (2013); Hovy (2018); Yang et al. (2019). Considering the author demographics improves the accuracy of text classifiersVolkova et al. (2013); Hovy (2015); Lynn et al. (2017), and in turn, could lead to decreased error disparity. Semantic bias. Countermeasures for semantic bias in embeddings typically attempt to adjust the parameters of the embedding model to reflect a target distribution more accurately. Because all of the above techniques can be applied for model fitting, here we highlight techniques that are more specific to addressing bias in embeddings. Bolukbasi et al. (2016) suggest that techniques to de-bias embeddings can be classified into two approaches: hard de-biasing (completely removes bias) and soft de-biasing (partially removes bias avoiding side effects). Romanov et al. (2019) generalize this work to a multi-class setting, exploring methods to mitigate bias in an occupation classification task. They reduce the correlation between the occupation of people and the word embedding of their names, and manage to simultaneously reduce race and gender biases without reducing the classifier’s performance. Manzini et al. (2019), identify the bias subspace using principal component analysis and remove the biased components using hard Neutralize and Equalize de-biasing and soft biasing methods proposed by Bolukbasi et al. (2016). The above examples evaluate success through the semantic analogy task (Mikolov et al., 2013), a method whose informativeness has since been questioned, though (Nissim et al., 2019). For a dedicated overview of semantic de-biasing techniques see Lauscher et al. (2020). Social-Level Mitigation. Several initiatives propose standardized documentation to trace potential biases, and to ultimately mitigate them. Data Statements Bender and Friedman (2018) suggest clearly disclosing data selection, annotation, and curation processes explicitly and transparently. Similarly, Gebru et al. (2018) suggest Datasheets to cover the lifecycle of data including “motivation for dataset creation; dataset composition; data collection process; data preprocessing; dataset distribution; dataset maintenance; and legal and ethical considerations”. Mitchell et al. (2019) extend this idea to include model specifications and performance details on different user groups. Hitti et al. (2019) propose a taxonomy for assessing the gender bias of a data set. While these steps do not directly mitigate bias, they can encourage researchers to identify and communicate sources of label or selection bias. Such documentation, combined with a conceptual framework to guide specific mitigation techniques, acts as an essential mitigation technique at the level of the research community. See Appendix A.2 for case studies outlining various types of bias in several NLP tasks. 5 Conclusion We present a comprehensive overview of the recent literature on predictive bias in NLP. Based on this survey, we develop a unifying conceptual framework to describe bias sources and their effects (rather than just their effects). This framework allows us to group and compare works on countermeasures. Rather than giving the impression that bias is a growing problem, we would like to point out that bias is not necessarily something gone awry, but rather something nearly inevitable in statistical models. We do, however, stress that we need to acknowledge and address bias with proactive measures. Having a formal framework of the causes can help us achieve this. We would like to leave the reader with these main points: (1) every predictive model with errors is bound to have disparities over human attributes (even those not directly integrating human attributes); (2) disparities can result from a variety of origins — the embedding model, the feature sample, the fitting process, and the outcome sample — within the standard predictive pipeline; (3) selection of “protected attributes” (or human attributes along which to avoid biases) is necessary for measuring bias, and often helpful for mitigating bias and increasing the generalization ability of the models. We see this paper as a step toward a unified understanding of bias in NLP. We hope it inspires further work in both identifying and countering bias, as well as conceptually and mathematically defining bias in NLP. Framework Application Steps (TL;DR) 1. Specify target population and an ideal distribution of the attribute (A) to be investigated for bias; Consult datasheets and data statements5 if available for the model source; 2. If outcome disparity or error disparity, check for potential origins: (a) if label bias: use post-stratification or retrain annotators. (b) if selection bias: use stratified sampling to match source to target populations, or use post-stratification, re-weighting techniques. (c) if overamplification: synthetically match distributions or add outcome disparity to cost function. (d) if semantic bias: retrain or retrofit embeddings considering approaches above, but with attributed (e.g., gendered) words (rather than people) as the population. Acknowledegments The authors would like to thank Vinod Prabhakaran, Niranjan Balasubramanian, Joao Sedoc, Lyle Ungar, Rediet Abebe, Salvatore Giorgi, Margaret Kern and the anonymous reviewers for their constructive comments. Dirk Hovy is a member of the Bocconi Institute for Data Science and Analytics (BIDSA) and the Data and Marketing Insights (DMI) unit. 5 (Gebru et al., 2018; Bender and Friedman, 2018) References Jussara M Almeida, Gisele L Pappa, et al. 2015. Twitter population sample bias and its impact on predictive outcomes: A case study on elections. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, pages 1254–1261. ACM. Fatemeh Almodaresi, Lyle Ungar, Vivek Kulkarni, Mohsen Zakeri, Salvatore Giorgi, and H. Andrew Schwartz. 2017. On the distribution of lexical features at multiple levels of analysis. In The 55th Annual Meeting of the Association for Computational Linguistics. Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596. Reg Baker, J Michael Brick, Nancy A Bates, Mike Battaglia, Mick P Couper, Jill A Dever, Krista J Gile, and Roger Tourangeau. 2013. Summary report of the aapor task force on non-probability sampling. Journal of Survey Statistics and Methodology, 1(2):90–143. Roy F Baumeister, Kathleen D Vohs, and David C Funder. 2007. Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2(4):396–403. Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604. Adrian Benton, Margaret Mitchell, and Dirk Hovy. 2017. Multitask learning for mental health conditions with limited social media data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 152–162. Richard A Berk. 1983. An introduction to sample selection bias in sociological data. American Sociological Review, pages 386–398. Sudeep Bhatia. 2017. Associative judgment and vector space semantics. Psychological review, 124(1):1. Su Lin Blodgett and Brendan O’Connor. 2017. Racial disparity in natural language processing: A case study of social media African-American English. arXiv preprint arXiv:1707.00061. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in neural information processing systems, pages 4349–4357. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Murphy Choy, Michelle Cheong, Ma Nang Laik, and Koo Ping Shung. 2012. Us presidential election 2012 prediction using census corrected twitter model. arXiv preprint arXiv:1211.0938. Murphy Choy, Michelle LF Cheong, Ma Nang Laik, and Koo Ping Shung. 2011. A sentiment analysis of singapore presidential election 2011 using twitter data with census correction. arXiv preprint arXiv:1108.5520. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Donâ ˘A´Zt take the easy way out: Ensemble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4060–4073. Maximin Coavoux, Shashi Narayan, and Shay B Cohen. 2018. Privacy-preserving neural representations of text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1–10. Glen Coppersmith, Mark Dredze, Craig Harman, Kristy Hollingshead, and Margaret Mitchell. 2015. Clpsych 2015 shared task: Depression and ptsd on twitter. In CLPsych@ HLT-NAACL, pages 31–39. Sam Corbett-Davies and Sharad Goel. 2018. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023. Fintan Costello and Paul Watts. 2014. Surprisingly rational: Probability theory plus noise explains biases in judgment. Psychological review, 121(3):463. Mick P Couper. 2013. Is the sky falling? new technology, changing media, and the future of surveys. In Survey Research Methods, volume 7, pages 145– 156. Aron Culotta. 2014. Reducing sampling bias in social media data for county health inference. In Joint Statistical Meetings Proceedings, pages 1–12. Cristian Danescu-Niculescu-Mizil, Robert West, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. No country for old members: User lifecycle and linguistic change in online communities. In Proceedings of the 22nd international conference on World Wide Web, pages 307–318. Hal Daume III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256–263. Jacob Eisenstein, Brendan O’Connor, Noah A Smith, and Eric P Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1277–1287. Association for Computational Linguistics. Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11–21. Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2016. On the (im) possibility of fairness. corr abs/1609.07236 (2016). Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644. Aparna Garimella, Carmen Banea, Dirk Hovy, and Rada Mihalcea. 2019. Women’s syntactic resilience and men’s grammatical luck: Gender-bias in partof-speech tagging and dependency parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3493– 3498, Florence, Italy. Association for Computational Linguistics. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2018. Datasheets for datasets. In Proceedings of the 5 th Workshop on Fairness, Accountability, and Transparency in Machine Learning, Stockholm, Sweden. Salvatore Giorgi, Veronica Lynn, Keshav Gupta, Sandra Matz, Lyle Ungar, and H.A. Schwartz. 2019. Correcting sociodemographic selection biases for population prediction. ArXiv. Bruce Glymour and Jonathan Herington. 2019. Measuring the biases that matter: The ethical and casual foundations for measures of fairness in algorithms. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* ’19, pages 269–278, New York, NY, USA. ACM. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614. Ron D Hays, Honghu Liu, and Arie Kapteyn. 2015. Use of internet panels to conduct surveys. Behavior research methods, 47(3):685–690. Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, and Anna Rohrbach. 2018. Women also snowboard: Overcoming bias in captioning models. In European Conference on Computer Vision, pages 793–811. Springer. Joseph Henrich, Steven J Heine, and Ara Norenzayan. 2010. The weirdest people in the world? Behavioral and brain sciences, 33(2-3):61–83. Miguel A Hernán, Sonia Hernández-Díaz, and James M Robins. 2004. A structural approach to selection bias. Epidemiology, pages 615–625. Yasmeen Hitti, Eunbee Jang, Ines Moreno, and Carolyne Pelletier. 2019. Proposed taxonomy for gender bias in text; a filtering methodology for the gender generalization subtype. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 8–17, Florence, Italy. Association for Computational Linguistics. Dirk Hovy. 2015. Demographic factors improve classification performance. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 752–762. Dirk Hovy. 2018. The social and the neural network: How to make natural language processing about people again. In Proceedings of the Second Workshop on Computational Modeling of Peopleâ ˘A ´Zs Opinions, Personality, and Emotions in Social Media, pages 42–49. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with mace. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120–1130. Dirk Hovy, Federico Bianchi, and Tommaso Fornaciari. 2020. Can you translate that into man? commercial machine translation systems include stylistic biases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, WA, USA. Association for Computational Linguistics. Dirk Hovy and Anders Søgaard. 2015. Tagging performance correlates with author age. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 483–488. Dirk Hovy and Shannon L Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 591–598. Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in nlp. In Proceedings of the 45th annual meeting of the association of computational linguistics, pages 264–271. Anna Jørgensen, Dirk Hovy, and Anders Søgaard. 2015. Challenges of studying and processing dialects in social media. In Proceedings of the Workshop on Noisy User-generated Text, pages 9–18. Kenneth Joseph, Lisa Friedland, William Hobbs, Oren Tsur, and David Lazer. 2017. Constance: Modeling annotation contexts to improve stance classification. arXiv preprint arXiv:1708.06309. David Jurgens. 2013. That’s what friends are for: Inferring location in online social media platforms based on social relationships. In Seventh International AAAI Conference on Weblogs and Social Media. David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. 2017. Incorporating dialectal variability for socially equitable language identification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 51–57. ML Kern, G Park, JC Eichstaedt, HA Schwartz, M Sap, LK Smith, and LH Ungar. 2016. Gaining insights from social media language: Methodologies and challenges. Psychological methods, 21(4):507–525. Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43–53. Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R Sunstein. 2018. Discrimination in the age of algorithms. Journal of Legal Analysis, 10. Austin C Kozlowski, Matt Taddy, and James A Evans. 2018. The geometry of culture: Analyzing meaning through word embeddings. arXiv preprint arXiv:1803.09288. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. arXiv preprint arXiv:1906.07337. Anne Lauscher, Goran Glavaš, Simone Paolo Ponzetto, and Ivan Vuli´c. 2020. A general framework for implicit and explicit debiasing of distributional word vector spaces. Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 25–30. Veronica E. Lynn, Youngseo Son, Vivek Kulkarni, Niranjan Balasubramanian, and H. Andrew Schwartz. 2017. Human centered nlp with user-factor adaptation. In Empirical Methods in Natural Language Processing. Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. arXiv preprint arXiv:1904.04047. Robert R. McCrae and Paul T. Costa Jr. 1989. Reinterpreting the Myers-Briggs type indicator from the perspective of the five-factor model of personality. Journal of Personality, 57(1):17–40. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of ICLR. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 220–229. ACM. Ehsan Mohammady and Aron Culotta. 2014. Using county demographics to infer attributes of twitter users. In Proceedings of the joint workshop on social dynamics and personal attributes in social media, pages 7–16. Malvina Nissim, Rik van Noord, and Rob van der Goot. 2019. Fair is better than sensational: Man is to doctor as woman is to doctor. arXiv preprint arXiv:1905.09866. David K Park, Andrew Gelman, and Joseph Bafumi. 2006. State-level opinions from national surveys: Poststratification using multilevel logistic regression. Public opinion in state politics, pages 209–28. Rebecca J Passonneau and Bob Carpenter. 2014. The benefits of a model of annotation. Transactions of the Association for Computational Linguistics, 2:311–326. Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and Massimo Poesio. 2018. Comparing bayesian models of annotation. Transactions of the Association for Computational Linguistics, 6:571–585. Ellie Pavlick, Matt Post, Ann Irvine, Dmitry Kachaev, and Chris Callison-Burch. 2014. The language demographics of amazon mechanical turk. Transactions of the Association for Computational Linguistics, 2:79–92. James W Pennebaker. 2011. The secret life of pronouns. New Scientist, 211(2828):42–45. James W. Pennebaker and Lori D. Stone. 2003. Words of wisdom: Language use over the life span. Journal of Personality and Social Psychology, 85(2):291. Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 742–751. Marcelo OR Prates, Pedro H Avelar, and Luís C Lamb. 2018. Assessing gender bias in machine translation: a case study with google translate. Neural Computing and Applications, pages 1–19. Daniel Preo¸tiuc-Pietro, Johannes Eichstaedt, Gregory Park, Maarten Sap, Laura Smith, Victoria Tobolsky, H Andrew Schwartz, and Lyle Ungar. 2015. The role of personality, age, and gender in tweeting about mental illness. In Proceedings of the 2nd workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality, pages 21–30. Daniel Preotiuc-Pietro, Maarten Sap, H Andrew Schwartz, and Lyle Ungar. 2015. Mental illness detection at the world well-being project for the clpsych 2015 shared task. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, NAACL. Iyad Rahwan, Manuel Cebrian, Nick Obradovich, Josh Bongard, Jean-François Bonnefon, Cynthia Breazeal, Jacob W Crandall, Nicholas A Christakis, Iain D Couzin, Matthew O Jackson, et al. 2019. Machine behaviour. Nature, 568(7753):477. Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, and Adam Tauman Kalai. 2019. What’s in a name? reducing bias in bios without access to protected attributes. arXiv preprint arXiv:1904.05233. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 8–14. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1668–1678, Florence, Italy. Association for Computational Linguistics. H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Megha Agrawal, Gregory J Park, Shrinidhi K Lakshmikanth, Sneha Jha, Martin EP Seligman, Lyle Ungar, et al. 2013. Characterizing geographic variation in well-being using tweets. In Seventh International AAAI Conference on Weblogs and Social Media (ICWSM 2013). Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679–1684, Florence, Italy. Association for Computational Linguistics. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1630–1640, Florence, Italy. Association for Computational Linguistics. Harini Suresh and John V Guttag. 2019. A framework for understanding unintended consequences of machine learning. arXiv preprint arXiv:1901.10002. Chris Sweeney and Maryam Najafian. 2019. A transparent framework for evaluating unintended demographic bias in word embeddings. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1662–1667, Florence, Italy. Association for Computational Linguistics. Spencer S Swinton. 1981. Predictive bias in graduate admissions tests. ETS Research Report Series, 1981(1):i–53. Peter Trudgill. 2000. Sociolinguistics: An introduction to language and society. Penguin UK. Amos Tversky and Daniel Kahneman. 1973. Availability: A heuristic for judging frequency and probability. Cognitive psychology, 5(2):207–232. Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring demographic language variations to improve multilingual sentiment analysis in social media. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1815–1827. Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the gap: A balanced corpus of gendered ambiguous pronouns. Transactions of the Association for Computational Linguistics, 6:605–617. Thomas A. Widiger and Douglas B. Samuel. 2005. Diagnostic categories or dimensions? A question for the Diagnostic and Statistical Manual of Mental Disorders—Fifth Edition. Journal of Abnormal Psychology, 114(4):494. Diyi Yang, Robert E Kraut, Tenbroeck Smith, Elijah Mayfield, and Dan Jurafsky. 2019. Seekers, providers, welcomers, and storytellers: Modeling social roles in online health communities. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–14. Mark Yatskar, Luke Zettlemoyer, and Ali Farhadi. 2016. Situation recognition: Visual semantic role labeling for image understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5534–5542. Emilio Zagheni and Ingmar Weber. 2015. Demographic research with non-representative internet data. International Journal of Manpower, 36(1):13– 25. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876. Ran Zmigrod, Sebastian J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1651–1661. A Appendices A.1 Related Work in Other Fields We survey the literature of adjacent fields and outline different streams related to our framework. These examples illustrate the ubiquity and complexity of bias, and highlight its understanding in different disciplines over time. Bias became a crucial topic in social science following the seminal work of Tversky and Kahneman, showing that human thinking was subject to systematic errors (Tversky and Kahneman, 1973). Human logic was seemingly separate from the principles of probability calculus. “Bias” here is interpreted as the result of psychological heuristics, i.e., mental “shortcuts” to help us react faster to situations. While many of these heuristics can be useful in critical situations, their indiscriminate application in everyday life can have adverse effects and cause bias. This line of work has spawned an entire field of study in psychology (decision making). The focus of Tversky and Kahneman (1973) (and a whole field of decision making that followed) was human behavior. Still, the same basic principles of systematic differences in decision making apply to machines as well. However, algorithms also provide systematic ways to reduce bias, and some see the mitigation of bias in algorithm decisions as a potential opportunity to move the needle positively (Kleinberg et al., 2018). Thus, we can apply frameworks of contemporaries in human behavior to machines (Rahwan et al., 2019), and perhaps benefit from a more scalable experimentation process. Costello and Watts (2014) studies human judgment under uncertain conditions, and proposes that we can algorithmically account for observed human bias, provided there is sufficient random noise in the probabilistic model. This view suggests bias within the model itself, what we have called overamplification. Still, most works on bias in decision making assume working with unbiased data, even though social science has long battled selection bias. Most commonly, data selection is heavily skewed towards the students found on western university campuses (Henrich et al., 2010). Attempts to remedy selection bias in a scalable fashion use online populations, which in turn are skewed by unequal access to the Internet, but which we can mitigate through reweighting schemes (Couper, 2013). In some cases, algorithmic bias has helped understand society better. For example, semantic bias in word embeddings has been leveraged to track trends in societal attitudes concerning gender roles and ethnic stereotypes. Garg et al. (2018); Kozlowski et al. (2018) measure the distance between certain sets of words in different decades to track this change. This use of biased embeddings illustrates an interesting distinction between normative and descriptive ethics. When used in predictive models, semantic bias is something to be avoided (Bolukbasi et al., 2016). I.e., it is normatively wrong for many applications (e.g., we ideally would want all genders or ethnicities equally associated with all jobs). However, the works by Garg et al. (2018) and Kozlowski et al. (2018) show that it is precisely this bias of word embeddings that reflects societal attitudes. Here, the presence of bias is descriptively correct. Similarly, Bhatia (2017) uses this property of word embeddings to measure people’s psychological biases and attitudes towards making individual decisions. A.2 Discussion: Example Case Studies Part of Speech Taggers and Parsing. The works by Hovy and Søgaard (2015); Jørgensen et al. (2015) outline the effect of selection bias on syntactic tools. The language of demographic groups systematically differs from each other for syntactic attributes. Therefore, models trained on samples whose demographic composition (e.g., age and ethnicity) differs from the target perform significantly worse. Within the predictive bias framework, the consequence of this selection bias is an error disparity – Q(ϵD=general|A = age, ethnicity) ≁Uniform, the error of the model across a general domain (D) is not uniform with respect to attributes (A) age and ethnicity. Li et al. (2018) shows that this consequence of selection bias can be addressed by adversarial learning, removing the age gap and significantly reducing the performance difference between ethnolects (even if it was not trained with that objective). Garimella et al. (2019) quantifies this bias further by studying the effect of different gender compositions of the training data on tagging and parsing, supporting the claim that debiased samples benefit performance. Image Captions. Hendricks et al. (2018) shows the presence of gender bias in image captioning, overamplifying differences present in the training data. Prior work focused on context (e.g., it is easier to predict “mouse” when there is a computer present). This bias manifests in ignoring people present in the image. The gender bias is not only influenced by the images, but also by biased language models. The primary consequence is an outcome disparity – Q( ˆYD|gender) ≁P(YD|gender), the distribution of outcomes (i.e. caption words and phrases) produced from the model Q( ˆYD|gender) overselects particular phrases beyond the distribution observed in reality: (i.e. P(YD|gender); this is true even when the source and target are the same: D = source = target). To overcome the bias and to increase performance, Hendricks et al. (2018) introduce an equalizer model with two loss-terms: Appearance Confusion Loss (ACL) and Confident Loss (Conf). ACL increases the gender confusion when gender information is not present in the image, making it difficult to predict an accurately gendered word. Confident loss increases the confidence of the predicted gendered word when gender information is present in the image. Both loss terms have the effect of decreasing the difference between Q( ˆYD|gender) and P( ˆYD|gender). In the end, the Equalizer model performs better in predicting a woman while still misclassifying a man as a woman, but decreasing error disparity overall. Sentiment Analysis. Kiritchenko and Mohammad (2018) show the issues of both semantic bias and overamplification. They assess scoring differences in 219 sentiment analysis systems by switching out names and pronouns. (They switch between male and female pronouns, and between prototypical white and black American first names based on name registers.) The results show that male pronouns are associated with higher scores for negative polarity, and prototypical black names with higher scores for negative emotions. The consequence of the semantic bias and overamplification are outcome disparities: Q( ˆYD|gender) ≁ P(YD|gender) and Q( ˆYD|race) ≁P(YD|race). This finding again demonstrates a case of descriptive vs. normative ethics. We could argue that because aggression is more often associated with male protagonists, the models reflect a descriptively correct (if morally objectionable) societal fact. However, if the model score changes based on ethnicity, the difference likely reflects (and amplifies) societal ethnic stereotypes. Those stereotypes, though, are both normatively and descriptively wrong. Differential Diagnosis in Mental Health. In the clinical community, differentiating a subject with post-traumatic stress disorder (PTSD) from someone with depression is known to be difficult. It was, therefore, surprising when early work on this task produced AUCs greater than 0.85 (this and similar tasks were part of the CLPsych2015 Shared task; (Coppersmith et al., 2015)). Labels of depression and PTSD had been automatically derived from a convenience sample of individuals6 who had publicly stated their diagnosis in their profile. The task included a 50/50 split from each category. However, Preotiuc-Pietro et al. (2015) show that these classifiers primarily picked up on differences in age or gender – subjects with PTSD were more likely to be older than those with depression. While age and gender themselves are valid information for mental health diagnosis, the design yielded classifiers that predicted nearly all older individuals to have PTSD, and those younger to have depression, despite the 50/50 split. These classifiers resulted in outcome disparity, because older individuals were much less likely to be labeled depressed than the target population (and younger less likely for PTSD: Q( ˆYD|A = age) ≁ P(YD|A = age)). In the end, the task organizers mitigated the issue by using matched controls – adding another 50% samples for each class such that the age and gender distributions of both groups matched. Recently, Benton et al. (2017) showed that accounting for demographic attributes in the model could leverage this correlation while controlling for the confounds. Assessing Demographic Variance in Language. A final case study in applying our framework demonstrates how inferring user demographics can mitigate bias. Consider the task of producing population measurements from readily available (but biased) community corpora. E.g., assessing representative US county life satisfaction from tweets (Schwartz et al., 2013). Unlike our other examples, the outcomes of the source training data (i.e., surveys) are expected to be representative, while the features come with biases. The source feature distributions with respect to human attributes are dissimilar from the ideal distribu6A convenience sample, a term from social science, is a set of data selected because it is available rather than designed for the given task. tion, while the source outcomes match that target outcomes (Q(Xsource|A) ≁P(Xtarget|A) but Q(Ysource|A) ∼P(Ytarget|A)). In this case, the effectiveness of countermeasures preventing selection and semantic biases (for Xsource and Xtarget) should result in increased predictive performance against a representative community outcome. Indeed, Giorgi et al. (2019) adjust the feature estimates, X, to match representative demographics and socio-economics by using inferred user attributes, and find improved predictions for the life satisfaction of a Twitter community.
2020
468
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5265–5275 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5265 What Does BERT with Vision Look At? Liunian Harold Li†, Mark Yatskar∗, Da Yin◦, Cho-Jui Hsieh† & Kai-Wei Chang† †University of California, Los Angeles ∗Allen Institute for Artificial Intelligence ◦Peking University [email protected], [email protected], wade [email protected], {chohsieh, kwchang}@cs.ucla.edu Abstract Pre-trained visually grounded language models such as ViLBERT, LXMERT, and UNITER have achieved significant performance improvement on vision-and-language tasks but what they learn during pre-training remains unclear. In this work, we demonstrate that certain attention heads of a visually grounded language model actively ground elements of language to image regions. Specifically, some heads can map entities to image regions, performing the task known as entity grounding. Some heads can even detect the syntactic relations between non-entity words and image regions, tracking, for example, associations between verbs and regions corresponding to their arguments. We denote this ability as syntactic grounding. We verify grounding both quantitatively and qualitatively, using Flickr30K Entities as a testbed. 1 Introduction Recently, BERT (Devlin et al., 2019) variants with vision such as ViLBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019), and UNITER (Chen et al., 2019) have achieved new records on several vision-and-language reasoning tasks, e.g. VQA (Antol et al., 2015), NLVR2 (Suhr et al., 2019), and VCR (Zellers et al., 2019). These pre-trained visually grounded language models use Transformers (Vaswani et al., 2017) to jointly model words and image regions. They are pretrained on paired image-text data, where given parts of the input the model is trained to predict the missing pieces. Despite their strong performance, it remains unclear if these models have learned the desired cross-modal representations. Conversely, a large body of work (Liu et al., 2019; Tenney et al., 2019; Clark et al., 2019) has focused on understanding the internal behaviours of pre-trained language models (Peters et al., 2018b; Radford et al., 2018; Devlin et al., 2019) and find that they capture linguistic features such as POS, syntactic structures, and coreferences. This inspires us to ask: what do visually grounded language models learn during pre-training? Following Clark et al. (2019), we find that certain attention heads of a visually grounded language model acquire an intuitive yet fundamental ability that is often believed to be a prerequisite for advanced visual reasoning (Plummer et al., 2015): grounding of language to image regions. We first observe that some heads can perform entity grounding, where entities that have direct semantic correspondences in the image are mapped to the correct regions. For example, in Figure 1, the word “man” attends to the person on the left of the image. Further, non-entity words often attend to image regions that correspond to their syntactic neighbors and we call this syntactic grounding. For example, “wearing” is attending to its subject, the man in the image. We argue that syntactic grounding actually complements entity grounding and that it is a natural byproduct of cross-modal reasoning. For example, to ground “man” to the person on the left rather than other pedestrians, the model needs to identify the syntactic relationships among “man”, “wearing”, “white”, and “shirt” and ground “shirt” and “man” subsequently. During such process, it is helpful and natural that “wearing” attends to the man in the image. We verify such phenomena by treating each attention head as a ready-to-use classifier (Clark et al., 2019) that given an input word, always outputs the most-attended-to image region. Using Flickr30K Entities (Plummer et al., 2015) as a test bed, we demonstrate that certain heads could perform entity and syntactic grounding with an accuracy significantly higher than a rule-based baseline. Further, higher layers tend to have higher grounding accuracy, suggesting that the model is 5266 Man Shirt Sidewalk Pedestrians Sidewalk* Layer 3 Layer 4 Layer 5 Layer 6 Layer 10 Layer 11 Figure 1: Attention weights of some selected heads in a pre-trained visually grounded language model. In high layers (e.g., the 10-th and 11-th layer), the model can implicitly grounding visual concepts (e.g., “other pedestrians” and “man wearing white shirt”). The model also captures certain syntactic dependency relations (e.g., “walking” is aligned to the man region in the 6-th layer). The model also refines its understanding over the layers, incorrectly aligning “man” and “shirt” in the 3-rd layer but correcting them in higher layers. A person hits a ball with a tennis racket Transformer … [CLS] [MASK] [SEP] Objective 2 Objective 1 e1 eN-1 eN … f1 fK-1 fK … f’1 f’K-1 f’K … e’1 e’N-1 e’N Figure 2: The architecture of VisualBERT. Image regions and language are combined with a Transformer to allow the self-attention to discover implicit alignments between language and vision. n. It is pre-trained with a masked language modeling (Objective 1), and sentence-image prediction task (Objective 2), on caption data and then fine-tuned for different tasks. refining its understanding of vision and language layer by layer. Additionally, we provide a qualitative analysis exemplifying these phenomena. A long version of this paper is at https://arxiv. org/abs/1908.03557. Our code is available at https://github.com/uclanlp/visualbert. 2 Model Several pre-trained visually grounded models have been proposed recently, and they are conceptually similar yet vary in design details, making evaluating them complicated and difficult. Thus for simplicity, we propose a simple and performant baseline, VisualBERT (see Figure 2), and base our analysis on this model. We argue that our analysis on VisualBERT can be generalized to other similar models as all these models share the following two core ideas: (1) image features extracted from object detectors such as Faster-RCNN (Ren et al., 2015) are fed in a Transformer-based model along with text; (2) the model is pre-trained on image-text data Task Baseline VisualBERT VQA 68.71 70.80 VCR 44.0 52.4 NLVR2 53.5 67.3 Flickr30K 69.69 71.33 Table 1: Performance of VisualBERT on four benchmarks. On VQA, we compare to Pythia v0.3 (Singh et al., 2019) and report on test-dev; on VCR, we compare to R2C (Zellers et al., 2019) and report test accuracy on Q→AR; on NLVR2, we compare to MaxEnt (Suhr et al., 2019) and report on Test-P; on Flickr30K, we compare to BAN (Kim et al., 2018) and report the test recall@1. with a masked visually grounded langauge model objective. Below we introduce VisualBERT briefly and leave details to the Appendix A. Input to VisualBERT includes a text segment and an image. The image is represeted as a set of visual embeddings, where each embedding vector corresponds to a bounding region in the image, derived from an object detector (Ren et al., 2015). Text and visual embeddings are then passed through multiple Transformer layers to build joint representations. VisualBERT is pre-trained on the COCO dataset (Chen et al., 2015), concisting of around 100K images with 5 captions each. We use two objectives for pre-training. (1) Masked language modeling with the image. Some elements of text input are masked and the model learns to predict the masked words based on the remaining text and visual context. (2) Sentence-image prediction. For COCO, where there are multiple captions corresponding to one image, we provide a text segment consisting of two captions. One of the caption is describing the image, while the other has a 50% chance to be another corresponding caption and a 5267 2 4 6 8 10 12 Layer 0.1 0.2 0.3 0.4 0.5 Grounding Acc Figure 3: Entity grounding accuracy of the attention heads organized by layer. The rule-based baseline is drawn as the grey line. We find that certain heads achieve high accuracy while the accuracy peaks at higher layers. 50% chance to be a randomly drawn caption. The model is trained to distinguish these two situations. Extensive experiments on four vision-andlanguage datasets (Goyal et al., 2017; Zellers et al., 2018; Suhr et al., 2019; Plummer et al., 2015) verify that pre-trained VisualBERT exceeds all comparable baselines significantly. A summary of the results is present in Table 1. See the Appendix B for details. Some of the afore-mentioned pre-trained visually grounded language models use additional pre-training data or parameters and achieve better performance. As this paper focuses on the analysis, we do not focus on comparing the performance of VisualBERT and other similar models. For the rest of the paper, we analyze a VisualBERT that is configured the same as BERTBase with 12 layers and 144 self-attention heads in total. The model is pretrained on COCO. To mitigate the domain difference between the diagnostic dataset Flickr30K and COCO, we perform additional pre-training on the training set of Flickr30K with the fore-mentioned masked language modeling objective with the image. 3 Experiment 3.1 Quantitative analysis Entity Grounding We first focus on entity grounding and use the validation set of Flickr30K Entities for evaluation. The dataset contains imagecaption pairs and annotates the entities in the captions and the corresponding image regions. For each annotated entity and for each attention head of VisualBERT, we take the bounding region which receives the most attention weight as the prediction. An entity could attend to not only the image regions Type Baseline Acc Head det 19.59 54.01 10-1 pobj 17.34 32.82 11-11 amod 18.67 45.96 10-9 nsubj 23.19 44.64 5-1 prep 20.61 49.27 9-11 dobj 9.82 30.24 11-11 punct 23.32 48.80 3-6 partmod 21.41 38.15 4-9 nn 16.33 34.06 10-9 num 23.15 67.44 9-11 Table 2: The best performing heads on grounding 10 most common dependency relationships. We only consider heads that are allocating on average more than 20% of the attention from source words to all image regions. A particular attention head is denoted as <layer>-<head number>. but also other words in the text. For this evaluation, we regard the image region that receives the most attention weight compared to other image regions as the prediction, without considering other words in the text. The predicted region is considered correct as long as it overlaps with the gold bounding region with a IoU ≥0.5 (Kim et al., 2018). We also consider a rule-based baseline that always chooses the region with the highest detection confidence. We report the accuracy for all 144 attention heads in VisualBERT and the baseline in Figure 3. Despite that some heads are accurate at entity grounding, they are not actively attending to the image regions. For example, a head might be allocating 10% of its attention weights to all image regions, but it assigns the most of the 10% weights to the correct region. We regard heads paying on average more than 20% of its attention weights from the entities to the regions as “actively paying attention to the image” and draw then as dark and large dots, while the others are drawn as light and small dots. We make the following two observations. First, certain heads perform entity grounding with a remarkably high accuracy. This is consistent with the observations in Clark et al. (2019) and Voita et al. (2019) that the attention heads specialize in different things. The best of all heads even achieves a high accuracy of 50.77 compared to the baseline 17.33. Further, the grounding accuracy peaks in higher layers. This resembles what Tenney et al. (2019) find, in that BERT also refines its understanding of the input over the layers. Syntactic Grounding As motivated before, alignments between words other than nouns and 5268 Figure 4: Accuracy of attention heads of VisualBERT for syntactic grounding on specific dependency relationships (“pobj”, “nsubj”, “amod”). The grey lines denote a baseline that always chooses the region with the highest detection confidence. We observe that VisualBERT is capable of detecting these dependency relationships without direct supervision. image regions could also be helpful for visual reasoning. More specifically, if two words are connected with a dependency relation, w1 r ←→w2, and w1 is an entity aligned to an image region, we would like to know how often the attention heads attend from w2 to the regions corresponding to w1. For evaluation, we parse all sentences in the validation set of Flickr30K using AllenNLP (Dozat and Manning, 2017; Gardner et al., 2018) and use the parser output as the gold parsing annotation. We find that for each dependency relationship, there exists at least one head that significantly outperforms guessing the most confident bounding region. We report the 10 most common relations in Table 2 and plot the syntactic grounding accuracy of three particularly interesting dependency relationships in Figure 4. Similar to what we observe for entity grounding, the model becomes more accurate on syntactic grounding in higher layers. 3.2 Qualitative Analysis Finally, we showcase several interesting examples of how VisualBERT performs grounding in Figure 1 and Figure 5. To generate these examples, for each ground-truth box, we show a predicted bounding region closest to it and manually group the bounding regions into different categories. We also include regions that the model is actively attending to, even if they are not present in the gold annotations (marked with an asterisk). We then aggregate the attention weights from words to those regions in the same category. We show the best heads of 6 layers that achieve the highest entity grounding accuracy but we find that they also exhibit a certain level of syntactic grounding. We observe the same behaviours as in the quantitative analysis, in that VisualBERT not only performs grounding but also refines its predictions through successive Transformer layers. For example, in the bottom image in Figure 5, initially the word “husband” and the word “woman” both assign significant attention weight to regions corresponding to the woman. By the end of the computation, VisualBERT has disentangled the woman and man, correctly aligning both. Furthermore, there are many examples of syntactic alignments. In the same image, the word “teased” aligns to both the man and woman while “by” aligns to the man. 4 Related Work There is a long research history of bridging vision and language (Chen et al., 2015; Antol et al., 2015; Zellers et al., 2019) with the lasted advances being visually grounded language models (Lu et al., 2019; Alberti et al., 2019; Li et al., 2019; Su et al., 2019; Tan and Bansal, 2019; Chen et al., 2019). However, little analysis has been done on understanding what vision-and-language models learn. Previous works on VQA and image captioning (Yang et al., 2016; Anderson et al., 2018; Kim et al., 2018) have only shown qualitative examples on the grounding ability of the models, while another line of work focuses on designing dedicated models for the entity grounding task (Xiao et al., 2017; Datta et al., 2019). We, however, present a quantitative study on whether visually grounded language models acquire the grounding ability during pre-training without explicit supervision. Our work is inspired by papers on analyzing pretrained language models. One line of work uses probing tasks to study the internal representations (Peters et al., 2018a; Liu et al., 2019; Tenney et al., 2019) while another studies the attention mechanism (Clark et al., 2019; Voita et al., 2019; Kovaleva et al., 2019). We follow the latter but we believe the grounding behaviour could also be probed in the internal representations of VisualBERT. 5269 a Person Ball Racket Layer 3 Layer 4 Layer 5 Layer 6 Layer 10 Layer 11 person hits a ball with a tennis rack ##et a Man Computer Cat Computer* Face* Layer 3 Layer 4 Layer 5 Layer 6 Layer 10 Layer 11 works on a computer with a cat in the background man Woman Sweater Husband Layer 3 Layer 4 Layer 5 Layer 6 Layer 10 Layer 11 a flu ##flustered women in a white sweater is teased her husband by Figure 5: Attention weights of 6 selected heads in VisualBERT where alignments match Flickr30k annotations. 5 Conclusion and Future Work We have presented an analysis on the attention maps of VisualBERT, a proposed visually grounded language model. We note that the grounding behaviour we have found is linguistically inspired, as entity grounding can be regarded as cross-modal entity coref resolution while syntactic grounding can be regarded as cross-modal parsing. Moreover, VisualBERT exhibits a hint of cross-modal pronoun resolution, as in the bottom image of Figure 5, the word “her” is resolved to the woman. For future work, it would be interesting to see if more linguistically-inspired phenomena can be systematically found in cross-modal models. Acknowledgement We would like to thank Xianda Zhou for help with experiments as well as Patrick H. Chen, members of UCLA NLP, and anonymous reviewers for helpful comments. We also thank Rowan Zellers for evaluation on VCR and Alane Suhr for evaluation on NLVR2. Cho-Jui Hsieh acknowledges the support of NSF IIS-1719097 and Facebook Research Award. This work was supported in part by DARPA MCS program under Cooperative Agreement N66001-19-2-4032. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. 5270 References Chris Alberti, Jeffrey Ling, Michael Collins, and David Reitter. 2019. Fusion of detected objects in text for visual question answering. ArXiv, abs/1908.05054. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In ICCV. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. UNITER: Learning universal image-text representations. arXiv preprint arXiv:1909.11740. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does BERT look at? an analysis of BERT’s attention. BlackboxNLP. Samyak Datta, Karan Sikka, Anirban Roy, Karuna Ahuja, Devi Parikh, and Ajay Divakaran. 2019. Align2ground: Weakly supervised phrase grounding guided by image-caption alignment. ICCV. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency parsing. ICLR. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software. Ross Girshick, Ilija Radosavovic, Georgia Gkioxari, Piotr Doll´ar, and Kaiming He. 2018. Detectron. https://github.com/facebookresearch/ detectron. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In CVPR. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. Yu Jiang, Vivek Natarajan, Xinlei Chen, Marcus Rohrbach, Dhruv Batra, and Devi Parikh. 2018. Pythia v0. 1: the winning entry to the VQA challenge 2018. arXiv preprint arXiv:1807.09956. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In CVPR. Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In NeurIPS. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. ICLR. Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of bert. arXiv preprint arXiv:1908.08593. Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2019. Unicoder-VL: A universal encoder for vision and language by cross-modal pretraining. ArXiv, abs/1908.06066. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In NAACL-HLT. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018a. Dissecting contextual word embeddings: Architecture and representation. In EMNLP, pages 1499–1509. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word representations. In NAACL-HLT. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In ICCV. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. OpenAI. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. 2019. Towards VQA models that can read. In CVPR. 5271 Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. VL-BERT: Pretraining of generic visual-linguistic representations. arXiv preprint arXiv:1908.08530. Alane Suhr, Stephanie Zhou, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. ACL. Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In EMNLP. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. arXiv preprint arXiv:1905.05950. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. ACL. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Fanyi Xiao, Leonid Sigal, and Yong Jae Lee. 2017. Weakly-supervised visual grounding of phrases with linguistic structures. CVPR. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In CVPR. Jun Yu, Jing Li, Zhou Yu, and Qingming Huang. 2019a. Multimodal transformer with multi-view visual representation for image captioning. arXiv preprint arXiv:1905.07841. Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. 2019b. Deep modular co-attention networks for visual question answering. In CVPR. Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In CVPR. Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. 2018. Neural motifs: Scene graph parsing with global context. In CVPR. Appendix We first introduce the model architecture and training process of VisualBERT (Section A). We then show experiments on four vision-and-language benchmarks (Section B). Ablation study is performed to verify our design choices (Section C). A VisualBERT First we give background on BERT, then summarize the adaptations we made to allow processing images and text jointly, and finally explain our training procedure. A.1 Background BERT (Devlin et al., 2019) is a Transformer (Vaswani et al., 2017) with subwords (Wu et al., 2016) as input and trained using language modeling objectives. All of the subwords in an input sentence are mapped to a set of embeddings, E. Each embedding e ∈E is computed as the sum of 1) a token embedding et, specific to the subword, 2) a segment embedding es, indicating which part of text the token comes from (e.g., the hypothesis from an entailment pair) and 3) a position embedding ep, indicating the position of the token in the sentence. The input embeddings E are then passed through a multi-layer Transformer that builds up a contextualized representation of the subwords. BERT is commonly trained with two steps: pretraining and fine-tuning. Pre-training is done using a combination of two language modeling objectives: (1) masked language modeling, where some parts of the input tokens are randomly replaced with a special token (i.e., [MASK]), and the model needs to predict the identity of those tokens and (2) next sentence prediction, where the model is given a sentence pair and trained to classify whether they are two consecutive sentences from a document. Finally, to apply BERT to a particular task, a taskspecific input, output layer, and objective are introduced, and the model is fine-tuned on the task data from pre-trained parameters. A.2 Model The core of our idea is to reuse the self-attention mechanism within the Transformer to implicitly align elements of the input text and regions in the input image. In addition to all the components of BERT, we introduce a set of visual embeddings, F, to model an image. Each f ∈F corresponds to a bounding region in the image, derived from an object detector. It is computed by summing three embeddings: (1) fo, a visual feature representation of the bounding region of f, computed by a convolutional neural network, (2) fs, a segment embedding indicating it is an image embedding as opposed to a text embedding, and (3) fp, a position 5272 embedding, which is used when alignments between words and bounding regions are provided as part of the input, and set to the sum of the position embeddings corresponding to the aligned words (see Section B.2). The visual embeddings are then passed to a multi-layer Transformer along with the original set of text embeddings, allowing the model to implicitly discover alignments between both sets of inputs, and build up a joint representation.1 A.3 Training VisualBERT We would like to adopt a similar training procedure as BERT but VisualBERT must learn to accommodate both language and visual input. Therefore we reach to a resource of paired data: COCO (Chen et al., 2015) that contains images each paired with 5 independent captions. Our training procedure contains three phases: Task-Agnostic Pre-Training As introduced before, we pre-train VisualBERT on COCO using two visually-grounded language model objectives. (1) Masked language modeling with the image. Some elements of text input are masked and must be predicted but vectors corresponding to image regions are not masked. (2) Sentence-image prediction. We supply two captions in one training example and one of the caption has a 50% chance to not match the image. The model is trained to determine if the provided captions is describing the image. Task-Specific Pre-Training Before fine-tuning VisualBERT to a downstream task, we find it beneficial to train the model using the data of the task with the masked language modeling with the image objective. This step allows the model to adapt to the new target domain. Fine-Tuning This step mirrors BERT finetuning, where a task-specific input, output, and objective are introduced, and the model is trained to maximize performance on the task. B Experiment We evaluate VisualBERT on four different types of vision-and-language applications: (1) Visual Question Answering (VQA 2.0) (Goyal et al., 2017), (2) Visual Commonsense Reasoning (VCR) (Zellers et al., 2019), (3) Natural Language for Visual Reasoning (NLVR2) (Suhr et al., 2019), and (4) Region1If text and visual input embeddings are of different dimension, we project the visual embeddings into a space of the same dimension as the text embeddings. to-Phrase Grounding (Flickr30K) (Plummer et al., 2015), each described in more details in the following sections. For all tasks, we use the Karpathy train split (Karpathy and Fei-Fei, 2015) of COCO for task-agnostic pre-training, which has around 100k images with 5 captions each. The Transformer encoder in all models has the same configuration as BERTBase: 12 layers, a hidden size of 768, and 12 self-attention heads. The parameters are initialized from BERTBase released by Devlin et al. (2019). For the image representations, each dataset we study has a different standard object detector to generate region proposals and region features. To compare with them, we follow their settings, and as a result, different image features are used for different tasks (see details in the subsections).2 For consistency, during task-agnostic pre-training on COCO, we use the same image features as in the end tasks. For each dataset, we evaluate three variants of our model: VisualBERT: The full model with parameter initialization from BERT that undergoes pre-training on COCO, pre-training on the task data, and finetuning for the task. VisualBERT w/o Early Fusion: VisualBERT but where image representations are not combined with the text in the initial Transformer layer but instead at the very end with a new Transformer layer. This allows us to test whether interaction between language and vision throughout the whole Transformer stack is important for performance. VisualBERT w/o COCO Pre-training: VisualBERT but where we skip task-agnostic pre-training on COCO captions. This allows us to validate the importance of this step. Following Devlin et al. (2019), we optimize all models using SGD with Adam (Kingma and Ba, 2015). We set the warm-up step number to be 10% of the total training step count unless specified otherwise. Batch sizes are chosen to meet hardware constraints and text sequences whose lengths are longer than 128 are capped. Experiments are conducted on Tesla V100s and GTX 1080Tis, and all experiments can be replicated on 4 Tesla V100s each with 16GBs of GPU memory. Pre-training on COCO generally takes less than a day on 4 cards while task-specific pre-training and fine-tuning usually take less. Other task-specific training details are in the corresponding subsections. 2Ideally, we can use the best available detector and visual representation for all tasks, but we would like to compare 5273 Model Test-Dev Test-Std Pythia v0.1 (Jiang et al., 2018) 68.49 Pythia v0.3 (Singh et al., 2019) 68.71 VisualBERT w/o Early Fusion 68.18 VisualBERT w/o COCO Pre-training 70.18 VisualBERT 70.80 71.00 Pythia v0.1 + VG + Other Data Augmentation (Jiang et al., 2018) 70.01 70.24 MCAN + VG (Yu et al., 2019b) 70.63 70.90 MCAN + VG + Multiple Detectors (Yu et al., 2019b) 72.55 MCAN + VG + Multiple Detectors + BERT (Yu et al., 2019b) 72.80 MCAN + VG + Multiple Detectors + BERT + Ensemble (Yu et al., 2019b) 75.00 75.23 Table 3: Model performance on VQA. VisualBERT outperforms Pythia(s), which are tested under a comparable setting. Model Q →A QA →R Q →AR Dev Test Dev Test Dev Test R2C (Zellers et al., 2019) 63.8 65.1 67.2 67.3 43.1 44.0 VL-BERT (Su et al., 2019) 73.7 74.0 74.5 74.8 55.0 55.5 VisualBERT w/o Early Fusion 70.1 71.9 50.6 VisualBERT w/o COCO Pre-training 67.9 69.5 47.9 VisualBERT 70.8 71.6 73.2 73.2 52.2 52.4 Table 4: Model performance on VCR. VisualBERT w/o COCO Pre-training outperforms R2C, which enjoys the same resource while VisualBERT further improves the results. B.1 VQA Given an image and a question, the task is to correctly answer the question. We use the VQA 2.0 (Goyal et al., 2017), consisting of over 1 million questions about images from COCO. We train the model to predict the 3,129 most frequent answers and use image features from a ResNeXt-based Faster RCNN pre-trained on Visual Genome (Jiang et al., 2018). We report the results in Table 3, including baselines using the same visual features and number of bounding region proposals as our methods (first section), our models (second section), and other incomparable methods (third section) that use external question-answer pairs from Visual Genome (+VG) , multiple detectors (Yu et al., 2019a) (+Multiple Detectors) and ensembles of their models. In comparable settings, our method is significantly simpler and outperforms existing work. B.2 VCR VCR consists of 290k questions derived from 110k movie scenes, where the questions focus on visual commonsense. The task is decomposed into two multi-choice sub-tasks wherein we train indimethods on a similar footing. vidual models: question answering (Q →A) and answer justification (QA →R). Image features are obtained from a ResNet50 (He et al., 2016) and “gold” detection bounding boxes and segmentations provided in the dataset are used3. The dataset also provides alignments between words and bounding regions that are referenced to in the text, which we utilize by using the same position embeddings for matched words and regions. Results on VCR are presented in Table 4. We compare our methods against the model released with the dataset which builds on BERT (R2C) and list the top performing single model on the leaderboard when we submit VisualBERT to the leaderloard (VL-BERT). Our ablated VisualBERT w/o COCO Pre-training enjoys the same resource as R2C, and despite being significantly simpler, outperforms it by a large margin. The full model further improves the results. Despite substantial domain difference between COCO and VCR, with VCR covering scenes from movies, pre-training on COCO still helps significantly. 3In the fine-tuning stage, for VisualBERT (with/without Early Fusion), ResNet50 is fine-tuned along with the model as we find it beneficial. For reference, VisualBERT with a fixed ResNet50 gets 51.4 on the dev set for Q →AR. The ResNet50 of VisualBERT w/o COCO Pre-training is not fine-tuned with the model such that we could compare it with R2C fairly. 5274 Model Dev Test-P Test-U Test-U (Cons) MaxEnt (Suhr et al., 2019) 54.1 54.8 53.5 12.0 LXMERT (Tan and Bansal, 2019) 75.0 74.5 76.2 42.1 VisualBERT w/o Early Fusion 64.6 VisualBERT w/o COCO Pre-training 63.5 VisualBERT 67.4 67.0 67.3 26.9 Table 5: Comparison with the state-of-the-art models on NLVR2. The two ablation models significantly outperform MaxEnt while the full model widens the gap. Table 6: Comparison with the state-of-the-art model on the Flickr30K. VisualBERT holds a clear advantage over BAN. Model R@1 R@5 R@10 Upper Bound Dev Test Dev Test Dev Test Dev Test BAN (Kim et al., 2018) 69.69 84.22 86.35 86.97 87.45 VisualBERT w/o Early Fusion 70.33 84.53 86.39 86.97 87.45 VisualBERT w/o COCO Pre-training 68.07 83.98 86.24 VisualBERT 70.40 71.33 84.49 84.98 86.31 86.51 B.3 NLVR2 NLVR2 is a dataset for joint reasoning about natural language and images, with a focus on semantic diversity, compositionality, and visual reasoning challenges. The task is to determine whether a natural language caption is true about a pair of images. The dataset consists of over 100k examples of English sentences paired with web images. We modify the segment embedding mechanism in VisualBERT and assign features from different images with different segment embeddings. We use an off-the-shelf detector from Detectron (Girshick et al., 2018) to provide image features and use 144 proposals per image.4 Results are in Table 5. VisualBERT w/o Early Fusion and VisualBERT w/o COCO Pre-training surpass the best model in Suhr et al. (2019) (MaxEnt) by a large margin while VisualBERT widens the gap. LXMERT is pre-trained on a much larger dataset and thus shows superior performance. B.4 Flickr30K Entities Flickr30K Entities dataset tests the ability of systems to ground phrases in captions to bounding regions in the image. The task is, given spans from a sentence, selecting the bounding regions they correspond to. The dataset consists of 30k images and 4We conducted a preliminary experiment on the effect of the number of object proposals kept per image. We tested models with 9, 18, 36, 72, and 144 proposals, which achieve an accuracy of 64.8, 65.5, 66.7, 67.1, and 67.4 respectively on the development set. nearly 250k annotations. We adapt the setting of BAN (Kim et al., 2018), where image features from a Faster R-CNN pre-trained on Visual Genome are used. For task specific fine-tuning, we introduce an additional self-attention block and use the average attention weights from each head to predict the alignment between boxes and phrases. For a phrase to be grounded, we take whichever box receives the most attention from the last sub-word of the phrase as the model prediction. Results are listed in Table 6. VisualBERT outperforms the current state-of-the-art model BAN. In this setting, we do not observe a significant difference between the ablation model without early fusion and our full model, arguing that perhaps a shallower architecture is sufficient for grounding when supervision is available. C Ablation Study In this section we conduct ablation study on what parts of our approach are important to VisualBERT’s strong performance. We compare two ablation models in the Experiment section and four additional variants on NLVR2. For ease of computations, these models are trained with only 36 features per image (including the full model). Our analysis (Table 7) aims to investigate the contributions of the following four components in VisualBERT: C1: Task-agnostic Pre-training We investigate the contribution of task-agnostic pre-training by 5275 Model Dev VisualBERT 66.7 C1 VisualBERT w/o Grounded Pre-training 63.9 VisualBERT w/o COCO Pre-training 62.9 C2 VisualBERT w/o Early Fusion 61.4 C3 VisualBERT w/o BERT Initialization 64.7 C4 VisualBERT w/o Objective 2 64.9 Table 7: Performance of the ablation models on NLVR2. Results confirm the importance of taskagnostic pre-training (C1) and early fusion of vision and language (C2). entirely skipping such pre-training (VisualBERT w/o COCO Pre-training) and also by pre-training with only text but no images from COCO (VisualBERT w/o Grounded Pre-training). Both variants underperform, showing that pre-training on paired vision and language data is important. C2: Early Fusion We include VisualBERT w/o Early Fusion to justify allowing early interaction between image and text features, confirming again that multiple interaction layers between vision and language are important. C3: BERT Initialization All models discussed before are initialized from a pre-trained BERT. To understand its contribution, we introduce a variant that is randomly initialized and then trained as the full model. While it seems weights from language-only pre-trained BERT are important, performance does not degrade as much as we expect, arguing that the model is likely learning many of the same useful aspects about grounded language during COCO pre-training. C4: The sentence-image prediction objective We introduce a model without the sentence-image prediction objective during pre-training (VisualBERT w/o Objective 2). Results suggest that this objective has positive but less significant effect, compared to other components. Overall, the results confirm that the most important design choices are task-agnostic pre-training (C1) and early fusion of vision and language (C2). In pre-training, both the inclusion of additional COCO data and using both images and captions are paramount.
2020
469
Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese Tatsuki Kuribayashi1,3, Takumi Ito1,3, Jun Suzuki1,2, Kentaro Inui1,2 1Tohoku University 2RIKEN 3Langsmith Inc. {kuribayashi, t-ito, jun.suzuki, inui} @ecei.tohoku.ac.jp Abstract We examine a methodology using neural language models (LMs) for analyzing the word order of language. This LM-based method has the potential to overcome the difficulties existing methods face, such as the propagation of preprocessor errors in count-based methods. In this study, we explore whether the LMbased method is valid for analyzing the word order. As a case study, this study focuses on Japanese due to its complex and flexible word order. To validate the LM-based method, we test (i) parallels between LMs and human word order preference, and (ii) consistency of the results obtained using the LM-based method with previous linguistic studies. Through our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool. Finally, using the LMbased method, we demonstrate the relationship between the canonical word order and topicalization, which had yet to be analyzed by largescale experiments. 1 Introduction Speakers sometimes have a range of options for word order in conveying a similar meaning. A typical case in English is dative alternation: (1) a. A teacher gave a student a book. b. A teacher gave a book to a student. Even for such a particular alternation, several studies (Bresnan et al., 2007; Hovav and Levin, 2008; Colleman, 2009) investigated the factors determining this word order and found that the choice is not random. For analyzing such linguistic phenomena, linguists repeat the cycle of constructing hypotheses and testing their validity, usually through psychological experiments or count-based methods. However, these approaches sometimes face difficulties, such as scalability issues in psychological 0.0002 0.0000001 generation probabilities order1 is more likely. 質に quality-DAT effect-ACC gave. LM gave. effect-ACC quality-DAT (∅!" affected the quality.) order!: order": 影響を 与えた. 質に 影響を 与えた. Figure 1: LM-based method for evaluating the canonicality of each word order considering their generation probabilities. experiments and the propagation of preprocessor errors in count-based methods. Compared to the typical approaches for evaluating linguistic hypotheses, approaches using LMs have potential advantages (Section 3.2). In this study, we examine the methodology of using LMs for analyzing word order (Figure 1). To validate the LM-based method, we first examine if there is a parallel between canonical word order and generation probability of LMs for each word order. Futrell and Levy (2019) reported that English LMs have human-like word order preferences, which can be one piece of evidence for validating the LM-based method. However, it is not clear whether the above assumption is valid even in languages with more flexible word order. In this study, we specifically focus on the Japanese language due to its complex and flexible word order. There are many claims on the canonical word order of Japanese, and it has attracted considerable attention from linguists and natural language processing (NLP) researchers for decades (Hoji, 1985; Saeki, 1998; Miyamoto, 2002; Matsuoka, 2003; Koizumi and Tamaoka, 2004; Nakamoto et al., 2006; Shigenaga, 2014; Sasano and Okumura, 2016; Orita, 2017; Asahara et al., 2018). We investigated the validity of using Japanese LMs for canonical word order analysis by conducting two sets of experiments: (i) comparing word order preference in LMs to that in Japanese speakers (Section 4), and (ii) checking the consistency Topic Time Location Subject (Adverb) Indirect object Direct object Verb Notation TOP TIM LOC NOM DAT ACC Typical particle “は” (wa) “に” (ni) “で” (de) “が” (ga) “に” (ni) “を” (o) Related section 6 5.2 5.2 5.2 5.3 5.1 5.1 5.1 Table 1: Overview of the typical cases in Japanese, their typical particles, and the sections where the corresponding case is analyzed. The well-known canonical word order of Japanese is listed from left to right. between the preference of LMs with previous linguistic studies (Section 5). From our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool, and further explore potential applications. Finally, we analyzed the relationship between topicalization and word order of Japanese by taking advantage of the LM-based method (Section 6). In summary, we: • Discuss and validate the use of LMs as a tool for word order analysis as well as investigate the sensitivity of LMs against different word orders in non-European language (Section 3); • Find encouraging parallels between the results obtained with the LM-based method and those with the previously established method on various hypotheses of canonical word order of Japanese (Sections 4 and 5); and • Showcase the advantages of an LM-based method through analyzing linguistic phenomena that is difficult to explore with the previous data-driven methods (Section 6). 2 Linguistic background This section provides a brief overview of the linguistic background of canonical word order, some basics of Japanese grammar, and common methods of linguistic analysis. 2.1 On canonical word order Every language is assumed to have a canonical word order, even those with flexible word order (Comrie, 1989). There has been a significant linguistic effort to reveal the factors determining the canonical word order (Bresnan et al., 2007; Hoji, 1985). The motivations for revealing the canonical word order range from linguistic interests to those involved in various other fields—it relates to language acquisition and production in psycholinguistics (Slobin and Bever, 1982; Akhtar, 1999), second language education (Alonso Belmonte et al., 2000), and natural language generation (Visweswariah et al., 2011) or error correction (Cheng et al., 2014) in NLP. In Japanese, there are also many studies on its canonical word order (Hoji, 1985; Saeki, 1998; Koizumi and Tamaoka, 2004; Sasano and Okumura, 2016). Japanese canonical word order The word order of Japanese is basically subject-object-verb (SOV) order, but there is no strict rule except placing the verb at the end of the sentence (Tsujimura, 2013). For example, the following three sentences have the same denotational meaning (“A teacher gave a student a book.”): (2) a. 先生が .............. ::::: 生徒に 本を あげた. teacher-NOM student-DAT book-ACC gave. b. 先生が .............. 本を ::::: 生徒に あげた. teacher-NOM book-ACC student-DAT gave. c. 本を ::::: 生徒に 先生が .............. あげた. book-ACC student-DAT teacher-NOM gave. This order-free nature suggests that the position of each constituent does not represent its semantic role (case). Instead, postpositional case particles indicate the roles. Table 1 shows typical constituents in a Japanese sentence, their postpositional particles, their canonical order, and the sections of this paper where each of them is analyzed. Note that postpositional case particles are sometimes omitted or replaced with other particles such as adverbial particles (Section 6). These characteristics complicate the factors determining word order, which renders the automatic analysis of Japanese word order difficult. 2.2 On typical methods for evaluating word order hypotheses and their difficulties There are two main methods in linguistic research: human-based methods, which observe human reactions, and data-driven methods, which analyze text corpora. Human-based methods A typical approach of testing word order hypotheses is observing the reaction (e.g., reading time) of humans to each word order (Shigenaga, 2014; Bahlmann et al., 2007). These approaches are based on the direct observation of humans, but this method has scalability issues. There are also concerns that the participants may be biased, and that the experiments may not be replicable. Data-driven methods Another typical approach is counting the occurrence frequencies of the targeted phenomena in a large corpus. This countbased method is based on the assumption that there are parallels between the canonical word order and the frequency of each word order in a large corpus. The parallel has been widely discussed (Arnon and Snider, 2010; Bresnan et al., 2007), and many studies rely on this assumption (Sasano and Okumura, 2016; Kempen and Harbusch, 2004). One of the advantages of this approach is suitability for largescale experiments. This enables considering a large number of examples. In this method, researchers often have to identify the phenomena of interest with preprocessors (e.g., the predicate-argument structure parser used by Sasano and Okumura (2016)) in order to count them. However, sometimes, identification of the targeted phenomena is difficult for the preprocessors, which limits the possibilities of analysis. For example, Sasano and Okumura (2016) focused only on simple examples where case markers appear explicitly, and only extract the head noun of the argument to avoid preprocessor errors. Thus, they could not analyze the phenomena in which the above conditions were not met. The above issue becomes more serious in low-resource languages, where the necessary preprocessors are often unavailable. In this count-based direction, Bloem (2016) used n-gram LMs to test the claims on the German twoverb clusters. This method is closest to our proposed approach, but the general validity of using LMs is out of focus. This LM-based method also relies on the assumption of the parallels between the canonical word order and the frequency. Another common data-driven approach is to train an interpretable model (e.g., Bayesian linear mixed models) to predict the targeted linguistic phenomena and analyze the inner workings of the model (e.g., slope parameters) (Bresnan et al., 2007; Asahara et al., 2018). Through this approach, researchers can obtain richer statistics, such as the strength of each factor’s effect on the targeted phenomena, but creating labeled data and designing features for supervised learning can be costly. 3 LM-based method 3.1 Overview of the LM-based method In the NLP field, LMs are widely used to estimate the acceptability of text (Olteanu et al., 2006; Kann et al., 2018). An overview of the LM-based method is shown in Figure 1. After preparing several word orders considering the targeted linguistic hypothesis, we compare their generation probabilities in LMs. We assume that the word order with the highest generation probability follows their canonical word order. 3.2 Advantages of the LM-based method In the count-based methods mentioned in Section 2.2, researchers often require preprocessors to identify the occurrence of the phenomena of interest in a large corpus. On the other hand, researchers need to prepare data to be scored by LMs to evaluate hypothesis in the LM-based method. Whether it is easier to prepare the preprocessor or the evaluation data depends on the situation. For example, the data preparation is easier in the situation where one wants to analyze the word order trends when a specific postpositional particle is omitted. The question is whether Japanese speakers prefer the word order like in Example (3)-a or (3)-b.1 (3) a. 生徒に .............. 本を あげた. student-DAT book(-ACC) gave. b. 本を 生徒に .............. あげた. book(-ACC) student-DAT gave. While identifying the cases (ACC in Example (3)) without their postpositional particle is difficult, creating the data without a specific postpositional particle by modifying the existing data is easier such as creating Example (4)-b from Example (4)-a. (4) a. 生徒に .............. 本を あげた. student-DAT book-ACC gave. b. 生徒に .............. 本を あげた. student-DAT book(-ACC) gave. Thus, in such situation, the LM-based method can be suitable. The human-based method is more reliable given an example. However, it can be prohibitively costly. While the human-based method requires an evaluation data and human subjects, the LM-based method only requires the evaluation data. Thus, the LM-based method can be more suitable for estimating the validity of hypotheses and considering 1Omitted characters are crossed out. (e.g., を) many examples as exhaustively as possible. In addition, the LM-based method can be replicable. The suitable approach can be different in a situation, and broadening the choice of alternative methodologies may be beneficial to linguistic research. Nowadays, various useful frameworks, language resources, and machine resources required to train LMs are available,2 which support the ease of implementing the LM-based method. Moreover, we make the LMs used in this study available.3 3.3 Strategies to validate the use of LMs to analyze the word order The goal of this study is to validate the use of LMs for analyzing the canonical word order. The canonical word order itself is still a subject of research, and the community does not know all about it. Thus, it is ultimately impossible to enumerate the requirements on what LMs should know about the canonical word order and probe the knowledge of LMs. Instead, we demonstrate the validity of the LM-based method by showcasing two types of parallels: (i) word order preference of LMs showing parallels with that of humans, and (ii) the results obtained with the LM-based method and those with previous methods being consistent on various claims on canonical word order. If the results of LMs are consistent with those of existing methods, the possibility that LMs and existing methods have the same ability to evaluate the hypotheses is supported. If the LM-based method is assumed to be valid, the method has the potential to streamline the research on unevaluated claims on word order. In the experiment sections, we examine the properties of Japanese LMs on (i) and (ii). 3.4 CAUTION – when using LMs for evaluating linguistic hypotheses Even if LMs satisfy the criteria described in 3.3, there is no exact guarantee that LM scores will reflect the effectiveness of human processing of specific constructions in general. Thus, there seems to be a danger of confusing LM artifacts with language facts. Based on this, we hope that researchers use LMs as a tool just to limit the hypothesis space. LM supported hypotheses should then be re-verified with a human-based approach. 2For example, one can train LMs with fairseq (Ott et al., 2019) and Wikipedia data on cloud computing platforms. 3https://github.com/kuribayashi4/LM_ as_Word_Order_Evaluator. Furthermore, since there is a lot of hypotheses and corresponding research, we cannot check all the properties of LMs in this study. This study focuses on intra-sentential factors of Japanese case order, and it is still unclear whether the LM-based method works properly in linguistic phenomena which are far from being the focus of this study. This is the first study where evidence is collected on the validity of using LMs for word order analysis and encourages further research on collecting such evidence and examining under what conditions this validity is guaranteed. 3.5 LMs settings We used auto-regressive, unidirectional LMs with Transformer (Vaswani et al., 2017). We used two variants of LMs, a character-based LM (CLM) and a subword-based LM (SLM). In training SLM, the input sentences are once divided into morphemes by MeCab (Kudo, 2006) with a UniDic dictionary,4 and then these morphemes are split into subword units by byte-pair-encoding. (Sennrich et al., 2016)5. 160M sentences6 randomly selected from 3B web pages were used to train the LMs. Hyperparameters are shown in Appendix A. Given a sentence s, we calculate its generation probability p(s) = −→p (s) · ←−p (s), where −→p (·) and ←−p (·) are generation probabilities calculated by a left-to-right LM and a right-to-left LM, respectively. Depending on the hypothesis, we compare the generation probabilities of various variants of s with different word orders. We assume that the word order with the highest generation probability follows their canonical word order. 4 Experiment1: comparing human and LMs word order preference To examine the validity of using LMs for canonical word order analysis, we examined the parallels between the LMs and humans on the task determining the canonicality of the word order (Figure 2). First, we created data for this task (Section 4.1). We then compared the word order preference of LMs and that of humans (Section 4.2). 4https://unidic.ninjal.ac.jp/ 5Implemented in sentencepiece (Kudo and Richardson, 2018) We set character coverage to 0.9995,and vocab size to 100,000. 614GB in UTF-8 encoding. For reference, Japanese Wikipedia has around 2.5 GB of text. Because the focus of this study has context-independent nature, the sentences order is shuffled to prevent learning the inter-sentential characteristics of the language. corpus order!: compare humans 彼が he-NOM popular book-ACC gave. 流行りの 本を 買った 彼が he-NOM popular book-ACC gave. 流行りの 本を 買った original order scrambled order order1 is more natural order": order1 is more likely order1 order1 LM Figure 2: Overview of the experiment of comparing human and LMs word order preference. First, we created data for the task of comparing the appropriateness of the word order (left part), then we compare the preference of LMs and humans through this task (right part). 4.1 Human annotation Data We randomly collected 10k sentences from 3B web pages, which are not overlapped with the LM training data. To remove overly complex sentences, we extracted sentences that must: (i) have less than or equal to five clauses and one verb, (ii) have clauses with a sibling relationship in its dependency tree, and they accompany a particle or adverb, (iii) not have special symbols such as parentheses, and (iv) not have a backward dependency path. For each sentence, we created its scrambled version.7 The scrambling process is as follows: 1. Identify the dependency structure by using JUMAN8 and KNP9. 2. Randomly select a clause with several children. 3. Shuffle the position of its children along with their descendants. Annotation We used the crowdsourcing platform Yahoo Japan!10. For our task, we showed crowdworkers a pair of sentences (order1, order2), where one sentence has the original word order, and the other sentence has a scrambled word order.11 Each annotator was instructed to label the pair with one of the following choices: (1) order1 is better, (2) order2 is better, or (3) the pair contains a semantically broken sentence. Only the sentences (order1, order2) were shown to the annotators, and they were instructed not to imagine a specific context for the sentences. We filtered unmotivated workers by using check questions.12 For each pair 7When several scrambled versions were possible for a given sentence, we randomly selected one of them. 8http://nlp.ist.i.kyoto-u.ac.jp/EN/ index.php?JUMAN 9http://nlp.ist.i.kyoto-u.ac.jp/EN/ index.php?KNP 10https://crowdsourcing.yahoo.co.jp/ 11Crowdworkers did not know which sentence was the original sentence. 12We manually created check questions considering the Japanese speakers’ preference in trial experiments in advance. instance, we employed 10 crowdworkers. In total, 756 unique, motivated crowdworkers participated in our task. From the annotated data, we collected only the pairs satisfying the following conditions for our experiments: (i) none of 10 annotators determined that the pair contains a semantically broken sentence, and (ii) nine or more annotators preferred the same order. The majority decision is labeled in each pair; the task is binary classification. We assume that if many workers prefer a certain word order, then it follows its canonical word order, and the other one deviates from it. We collected 2.6k pair instances of sentences. 4.2 Results We compared the word order preference of LMs and that of the workers by using the 2.6K pairs created in Section 4.1. We calculated the correlation of the decisions between the LMs and the workers; which word order is more appropriate order1 or order2. The word orders supported by CLM and SLM are highly correlated with workers, with the Pearson correlation coefficient of 0.89 and 0.90, respectively. This supports the assumption that the generation probability of LMs can determine the canonical word order as accurately as humans do. Note that such a direct comparison of word order is difficult with the count-based methods because of the sparsity of the corpus. 5 Experiment2: consistency with previous studies This section examines whether LMs show word order preference consistent with previous linguistic studies. The results are entirely consistent, which support the validity of the LM-based methods in Japanese. Each subsection focuses on a specific component of Japanese sentences. 5.1 Double objects The order of double objects is one of the most controversial topics in Japanese word order. Examples of the possible order are as follows: (5) DAT-ACC: 生徒に student-DAT 本を book-ACC あげた gave. ACC-DAT: 本を book-ACC ::::: 生徒に student-DAT あげた gave. Henceforth, DAT-ACC / ACC-DAT denotes the word order in which the DAT / ACC argument precedes the ACC / DAT argument. We evaluate the 0 0.5 1 0 0.5 1 ACC-DAT rate (our results) ACC-DAT rate (S&O 2016) (a) Each verb’s ACC-DAT rate. 0 0.5 1 0 0.5 1 ACC-DAT rate RDAT-only (b) Relationship between each verb’s Rv DAT-only and the ACC-DAT rate. 0 0.5 1 -1 -0.5 0 0.5 1 ACC-DAT rate ΔNPMI CLM SLM S&O 2016 Linear approx. (CLM) Linear approx. (SLM) Linear approx. (S&O 2016) (c) Relationship between the degree of co-occurrence of verb and arguments, and the ACC-DAT rate in each example. For the results of LMs, the ACC-DAT rate of each example is regarded as 1 if LMs prefer ACC-DAT order, otherwise we regard the example as 0. Figure 3: Overlap of the results of Sasano and Okumura (2016) and that of LMs. In figures (a) and (b), each plot corresponds to each verb. In figure (c), each plot corresponds to each example. The legend of figure (a) and (b) is the same as in figure (c). “S&O 2016” refers to Sasano and Okumura (2016). claims Sasano and Okumura (2016) focused on with the data they collected.13 Word order for each verb First, we analyzed the trend of the double object order for each verb. We analyzed 620 verbs following Sasano and Okumura (2016).14 For each set of examples Sv corresponding to a verb v, we: (i) created an instance with the swapped order of ACC and DAT for each example, and (ii) compared the generation probabilities of the original and swapped instance. ˆSv is the set of examples preferred by LMs. Rv ACC-DAT is calculated as follows: Rv ACC-DAT = Nv ACC-DAT Nv ACC-DAT + Nv DAT-ACC , where Nv ACC-DAT / Nv DAT-ACC is the number of examples with the ACC-DAT / DAT-ACC order in ˆSv. Figure 3-(a) shows the relationship between Rv ACC-DAT determined by LMs and one reported in a 13We filtered the examples overlapping with the training data of LMs in advance. As a result, we collected 4.5M examples. 14We removed verbs for which all examples overlap with the data for training the LMs. previous count-based study (Sasano and Okumura, 2016). These results strongly correlate with the Pearson correlation coefficient of 0.91 and 0.88, in CLM and SLM, respectively. In addition, “canonical word order is DAT-ACC” (Hoji, 1985) is unlikely to be valid because there are verbs where Rv ACC-DAT is very high (details in Appendix B.1). This conclusion is consistent with Sasano and Okumura (2016). Word order and verb types In Japanese, there are show-type and pass-type verbs (details in Appendix B.2). Matsuoka (2003) claimed that the order of double objects differs depending on these verb types. Following Sasano and Okumura (2016), we analyzed this trends. We applied the Wilcoxon rank-sum test between the distributions of Rv ACC-DAT determined by LMs in the two groups (show-type and passtype verbs). The results show no significant difference between the two groups (p-value is 0.17 and 0.12 in the experiments using CLM and SLM, respectively). These results are consistent with the count-based (Sasano and Okumura, 2016) and the human-based (Miyamoto, 2002; Koizumi and Tamaoka, 2004) methods. Word order and argument omission Sasano and Okumura (2016) claimed that the frequently omitted case is placed near the verb. First, we calculated Rv DAT-only for each verb v as follows: Rv DAT-only = Nv DAT-only Nv DAT-only + Nv ACC-only , where Nv DAT-only / Nv ACC-only denotes the number of examples in which the DAT / ACC case appears, and the other case does not in Sv. A large Rv DAT-only score indicates that the DAT argument is less frequently omitted than the ACC argument in Sv. We analyzed the relationship between Rv DAT-only and Rv ACC-DAT for each verb. Figure 3-(b) shows that the regression lines from the LM-based method and Sasano and Okumura (2016) corroborate similar trends. The Pearson correlation coefficient between Rv DAT-only and Rv ACC-DAT is 0.404 for CLM and 0.374 for SLM. The results are consistent with Sasano and Okumura (2016), where they reported that the correlation coefficient was 0.391. Word order and semantic role of the dative argument Matsuoka (2003) claimed that the canonical word order differs depending on the semantic role of the dative argument. Sasano and Okumura TIM<LOC TIM<NOM LOC<NOM CLM .757 .642 .604 SLM .708 .632 .615 Count .686 .666 .681 Table 2: The columns a < b show the score o(a < b), which indicates the rate of case a being more likely to be placed before b. The row “Count” shows the countbased results in the dataset we used. (2016) evaluated this claim by analyzing the trend in the following two types of examples: (6) Type-A: 本を book-ACC ::::: 学校に school-DAT 返した returned. Type-B: ::::: 先生に teacher-DAT 本を book-ACC 返した returned. Type-A has an inanimate goal (school) as the DAT argument, while Type-B has an animate processor (teacher). It was reported that Type-A is likely to be the ACC-DAT order, while Type-B is likely to be the DAT-ACC order. Following Sasano and Okumura (2016), we analyzed 113 verbs.15 For each verb, we compared the ACC-DAT rate in its type-A examples and the rate in its type-B examples. The number of verbs where the ACC-DAT order is preferred in Type-A examples to Type-B examples is significantly larger (a two-sided sign test p < 0.05). This result is consistent with that of Sasano and Okumura (2016); Matsuoka (2003) and implies that the LMs capture the animacy of the nouns. Details are in Appendix B.3. Word order and co-occurrence of verb and arguments Sasano and Okumura (2016) claimed that an argument that frequently co-occurs with the verb tends to be placed near the verb. For each example, the LMs determine which word order (DAT-ACC or ACC-DAT) is appropriate. Each example also has a score ∆NPMI (definition in Appendix B.4). Higher ∆NPMI means that the DAT noun in the example more strongly co-occurs with the verb in the example than the ACC noun. Figure 3-(c) shows the relationship between ∆NPMI and the ACC-DAT rate in each example. ∆NPMI and the ACC-DAT rate are correlated with the Pearson correlation coefficient of 0.517 and 0.521 in CLM and SLM, respectively. These results are consistent with Sasano and Okumura (2016). 15Among the 126 verbs used in Sasano and Okumura (2016), 113 verbs with data that do not overlap with the LM training data were selected. Model MODAL TIME MANNER RESULTIVE CLM 1. 1 0.5 1. SLM 1. 0.5 1. 0.5 Table 3: The scores denote the rank correlation between the preference of each adverb position in LMs and that reported in (Koizumi and Tamaoka, 2006). 5.2 Order of constituents representing time, location, and subject information Our focus moves to the cases closer to the beginning of the sentences. The following claim is a well-known property of Japanese word order: “The case representing time information (TIM) is placed before the case representing location information (LOC), and the TIM and LOC cases are placed before the NOM case” (Saeki, 1960, 1998). We examined a parallel between the result obtained with the LM-based and count-based methods on this claim. We randomly collected 81k examples from 3B web pages.16 To create the examples, we identified the case components by KNP, and the TIM and LOC cases were categorized with JUMAN (details in Appendix C). For each example s, we created all possible word orders and obtained the word order with the highest generation probability (ˆs). Given ˆS a set of ˆs, we calculated a score o(a < b) for cases a and b as follows: o(a < b) = Na<b Na<b + Nb<a , where Nk<l is the number of examples where the case k precedes the case l in ˆS. Higher o(a < b) indicates that the case a is more likely to be placed before the case b. The results with the LM-based methods and the count-based method are consistent (Table 2). Both results show that o(TIM < LOC) is significantly larger than o(TIM > LOC) (p < 0.05 with a two-sided signed test), which indicates that the TIM case usually precedes the LOC case. Similarly, the results indicate that the TIM case and the LOC case precedes the NOM case. 5.3 Adverb position We checked the preference of the adverb position in LMs. The position of the adverb has no restriction except that it must be before the verb, which is similar to the trend of the case position. However, Koizumi and Tamaoka (2006) claimed that “There is a canonical position of an adverb depend16Without overlap with the training data of LMs. Model long precedes short short precedes long CLM 5,640 3,754 SLM 5,757 3,914 Table 4: Changes in the position of a constituent with the largest number of chunks. ing on its type.” They focus on four types of adverbs: MODAL, TIME, MANNER, and RESULTIVE. We used the same examples as Koizumi and Tamaoka (2006). For each example s, we created its three variants with a different adverb position as follows (“A friend handled the tools roughly.”): (10) ASOV: 乱暴に roughly 友達が friend-NOM 道具を tools-ACC 扱った handled. SAOV: 友達が friend-NOM 乱暴に roughly 道具を tools-ACC 扱った handled. SOAV: 友達が friend-NOM 道具を tools-ACC 乱暴に roughly 扱った handled. where the sequence of the alphabet such as “ASOV” denote the word order of its corresponding sentences. For example, “ASOV” indicates the order: adverb < subject < object < verb. “A,” “S,” “O,” and “V” denote “adverb,” “subject,” “object,” and “verb,” respectively. Then, we obtained the preferred adverb position by comparing their generation probabilities. Finally, for each adverb type and its examples, we ranked the preference of the possible adverb positions: “ASOV,” “SAOV,” and “SOAV.” Table 3 shows the rank correlation of the preference of the position of each adverb type. The results show similar trends of LMs with that of the human-based method (Koizumi and Tamaoka, 2006). 5.4 Long-before-short effect The effects of “long-before-short,” the trend that a long constituent precedes a short one, has been reported in several studies (Asahara et al., 2018; Orita, 2017).We checked whether this effect can be captured with the LM-based method. Among the examples used in Section 5.2, we analyzed about 9.5k examples in which the position of the constituent with the largest number of chunks17 differed between its canonical case order18 and the order supported by LMs. Table 4 shows that there are significantly (p < 0.05 with a two-sided signed test) large numbers 17chunks were identified by KNP. 18In this section, canonical case order is assumed to be TOM<LOC<NOM<DAT<ACC. of examples where the longest constituent moves closer to the beginning of the sentence. This result is consistent with existing studies and supports the tendency for longer constituents to appear before shorter ones. 5.5 Summary of the results We found parallels between the results with the LM-based method and that with the previously established method on various properties of canonical word order. These results support the use of LMs for analyzing Japanese canonical word order. 6 Analysis: word order and topicalization In the previous section, we tentatively concluded that LMs can be used for analyzing the intrasentential properties on the canonical word order. Based on this finding, in this section, we demonstrate the analysis of additional claims on the properties of the canonical word order with the LMbased method, which has been less explored by large-scale experiments. This section shows the analysis of the relationship between topicalization and the canonical word order. Additional analyses on the effect of various adverbial particles for the word order are shown in Appendix F. 6.1 Topicalization in Japanese The adverbial particle “は” (TOP) is usually used as a postpositional particle when a specific constituent represents the topic of the sentence (Heycock, 1993; Noda, 1996; Fry, 2003). When a case component is topicalized, the constituent moves to the beginning of the sentence, and the particle “は” (TOP) is added (Noda, 1996). Additionally, the original case particle is sometimes omitted,19 which makes the case of the constituent difficult to identify. For example, to topicalize “本を” (bookACC) in Example (8)-a, the constituent moves to the beginning of the sentence, and the original accusative case particle “を” (ACC) is omitted. Similarly, “先生が” (teacher-NOM) is topicalized in Example (8)-b. The original sentence is enclosed in the square brackets in Example (8). (8) a. 本をは [先生が 本を あげた.] book-TOP teacher-NOM book-ACC gave. b. 先生がは [先生が 本を あげた.] teacher-TOP teacher-NOM book-ACC gave. 19The particles “を” (ACC) and “が” (NOM) are omitted. With the above process, we can easily create a sentence with a topicalized constituent. On the other hand, identifying the original case of the topicalized case components is error-prone. Thus, the LM-based method can be suitable for empirically evaluating the claims related to the topicalization. 6.2 Experiments and results By using the LM-based method, we evaluate the following two claims: (i) The more anterior the case is in the canonical word order, the more likely its component is topicalized (Noda, 1996). (ii) The more the verb prefers the ACC-DAT order, the more likely the ACC case is topicalized than the DAT case. The claim (i) suggests that, for example, the NOM case is more likely to be topicalized than the ACC case because the NOM case is before the ACC case in the canonical word order of Japanese. The claim (ii) is based on our observation. It can be regarded as an extension of the claim (i) considering the effect of the verb on its argument order. We assume that the canonical word order of Japanese is TIM< LOC < NOM < DAT < ACC in this section. Claim (i) We examine which case is more likely to be topicalized. We collected 81k examples from Japanese Wikipedia (Details are in Appendix C). For each example, a set of candidates was created by topicalizing each case, as shown in Example (8). Then, we selected the sentences with the highest score by LMs in each candidate set. We denote the obtained sentences as ˆStopic. We calculated a score ta|b for pairs of cases a and b. ta|b = Na|b Na|b + Nb|a where Na|b is the examples where the case a and b appear, and case a is a topic of the sentence in ˆStopic. The higher the score is, the more the case a is likely to be topicalized than the case b is. We compared ta|b and tb|a among the pairs of cases a and b, where the case a precedes the case b in the canonical word order. Through our experiments, ta|b was significantly larger than tb|a (p < 0.05 with a paired t-test) in CLM and SLM results, which supports the claim (i) (Noda, 1996). Detailed results are shown in Appendix E. Claim (ii) The canonical word order of double objects is different for each verb (Section 5.1). Based on this assumption and the claim (i), we hypothesized that the more the verb prefers the ACC-DAT order, the more likely the ACC case of the verb is topicalized than the DAT case. We used the same data as in Section 5.1. For each example, we created two sentences by topicalizing the ACC or DAT argument. Then we compared their generation probabilities. In each set of examples corresponding to a verb v, we calculated the rate that the sentence with the topicalized ACC argument is preferred rather than that with the topicalized DAT argument. This rate and Rv ACC-DAT is significantly correlated with the Pearson correlation coefficient of 0.89 and 0.84 in CLM and SLM, respectively. This results support the claim (ii). Detailed results are shown in Appendix E. 7 Conclusion and Future work We have proposed to use LMs as a tool for analyzing word order in Japanese. Our experimental results support the validity of using Japanese LMs for canonical word order analysis, which has the potential to broaden the possibilities of linguistic research. From an engineering view, this study supports the use of LMs for scoring Japanese word order automatically. From the viewpoint of the linguistic field, we provide additional empirical evidence to various word order hypotheses as well as demonstrate the validity of the LM-based method. We plan to further explore the capability of LMs on other linguistic phenomena related to word order, such as “given new ordering” (Nakagawa, 2016; Asahara et al., 2018). Since LMs are language-agnostic, analyzing word order in another language with the LM-based method would also be an interesting direction to investigate. Furthermore, we would like to extend a comparison between machine and human language processing beyond the perspective of word order. 8 Acknowledgments We would like to offer our gratitude to Kaori Uchiyama for taking the time to discuss our paper and Ana Brassard for her sharp feedback on English. We also would like to show our appreciation to the Tohoku NLP lab members for their valuable advice. We are particularly grateful to Ryohei Sasano for sharing the data for double objects order analyses. This work was supported by JST CREST Grant Number JPMJCR1513, JSPS KAKENHI Grant Number JP19H04162, and Grant-inAid for JSPS Fellows Grant Number JP20J22697. References Nameera Akhtar. 1999. Acquiring basic word order: Evidence for data-driven learning of syntactic structure. Journal of child language, 26(2):339–356. Isabel Alonso Belmonte et al. 2000. Teaching English Word Order to ESL Spanish Students. A Functional Perspective. Inbal Arnon and Neal Snider. 2010. More than words: Frequency effects for multi-word phrases. Journal of Memory and Language, 62(1):67–82. Masayuki Asahara, Satoshi Nambu, and Shin-Ichiro Sano. 2018. Predicting Japanese Word Order in Double Object Constructions. In Proceedings of the Eight Workshop on Cognitive Aspects of Computational Language Learning and Processing, pages 36–40, Melbourne. Association for Computational Linguistics. J¨org Bahlmann, Antoni Rodriguez-Fornells, Michael Rotte, and Thomas F M¨unte. 2007. An fMRI study of canonical and noncanonical word order in German. Human brain mapping, 28(10):940–949. Jelke Bloem. 2016. Testing the Processing Hypothesis of word order variation using a probabilistic language model. In Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC), pages 174–185, Osaka, Japan. The COLING 2016 Organizing Committee. Joan Bresnan, Anna Cueni, Tatiana Nikitina, and R Harald Baayen. 2007. Predicting the dative alternation. In Cognitive foundations of interpretation, pages 69–94. KNAW. Shuk-Man Cheng, Chi-Hsin Yu, and Hsin-Hsi Chen. 2014. Chinese Word Ordering Errors Detection and Correction for Non-Native Chinese Language Learners. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 279–289, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Timothy Colleman. 2009. Verb disposition in argument structure alternations: a corpus study of the dative alternation in Dutch. Language Sciences, 31(5):593–611. Bernard Comrie. 1989. Language universals and linguistic typology: Syntax and morphology. University of Chicago press. John Fry. 2003. Ellipsis and wa-marking in Japanese conversation. Taylor & Francis. Richard Futrell and Roger P Levy. 2019. Do RNNs learn human-like abstract word order preferences? In Proceedings of the Society for Computation in Linguistics (SCiL) 2019, pages 50–59. ´Edouard Grave, Armand Joulin, Moustapha Ciss´e, David Grangier, and Herv´e J´egou. 2017. Efficient softmax approximation for GPUs. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1302–1310, International Convention Centre, Sydney, Australia. PMLR. Caroline Heycock. 1993. Syntactic predication in Japanese. Journal of East Asian Linguistics, 2(2):167–211. Hajime Hoji. 1985. Logical form constraints and configurational structures in Japanese. PHD Thesis. University of Washington. Malka Rappaport Hovav and Beth Levin. 2008. The English dative alternation: The case for verb sensitivity. Journal of linguistics, 44(1):129–167. Katharina Kann, Sascha Rothe, and Katja Filippova. 2018. Sentence-Level Fluency Evaluation: References Help, But Can Be Spared! In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 313–323, Brussels, Belgium. Association for Computational Linguistics. Gerard Kempen and Karin Harbusch. 2004. A corpus study into word order variation in German subordinate clauses: Animacy affects. Multidisciplinary approaches to language production, pages 173–181. Masatoshi Koizumi and Katsuo Tamaoka. 2004. Cognitive processing of Japanese sentences with ditransitive verbs. Gengo Kenkyu (Journal of the Linguistic Society of Japan), 2004(125):173–190. Masatoshi Koizumi and Katsuo Tamaoka. 2006. The Canonical Positions of Adjuncts in the Processing of Japanese Sentence. Cognitive Studies: Bulletin of the Japanese Cognitive Science Society, 13(3):392– 403. Taku Kudo. 2006. Mecab: Yet another part-of-speech and morphological analyzer. http://mecab.sourceforge.jp. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Mikinari Matsuoka. 2003. Two Types of Ditransitive Consturctions in Japanese. Journal of East Asian Linguistics, 12(2):171–203. Edson T Miyamoto. 2002. Sources of difficulty in the processing of scrambling in Japanese. Sentence processing in East Asian languages, pages 167–188. Natsuko Nakagawa. 2016. Information structure in spoken japanese: Particles, word order, and intonation. Keiko Nakamoto, Jae-ho Lee, and Kow Kuroda. 2006. Preferred Word Orders Correlate with “Sentential” Meanings That Cannot Be Reduced to Verb Meanings: A New Perspective on “Construction Effects” in Japanese. Cognitive Studies: Bulletin of the Japanese Cognitive Science Society, 13(3):334– 352. Hisashi Noda. 1996. Wa to ga [Wa and ga]. Kurosio Publishers. Marian Olteanu, Pasin Suriyentrakorn, and Dan Moldovan. 2006. Language models and reranking for machine translation. In Proceedings of the Workshop on Statistical Machine Translation, pages 150– 153, New York City. Association for Computational Linguistics. Naho Orita. 2017. Predicting japanese scrambling in the wild. In Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics, pages 41–45. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Tetsuo Saeki. 1960. Gendaigo ni okeru gojun no keik¯o – iwayuru hogo no baai [The trend of word order in modern writing– in so-called complements]. Gengo seikatsu [Language life], (111):56–63. Tetsuo Saeki. 1998. Y¯osetsu Nihongo no Gojun [Essentials of Japanese word order]. Kurosio Publishers. Ryohei Sasano and Manabu Okumura. 2016. A Corpus-Based Analysis of Canonical Word Order of Japanese Double Object Constructions. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2236–2244, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Yasumasa Shigenaga. 2014. Canonical Word Order of Japanese Ditransitive Sentences: A Preliminary Investigation through a Grammaticality Judgment Survey. Advances in Language and Literary Studies, 5(2):35–45. Dan I Slobin and Thomas G Bever. 1982. Children use canonical sentence schemas: A crosslinguistic study of word order and inflections. Cognition, 12(3):229– 265. Natsuko Tsujimura. 2013. An introduction to Japanese linguistics. John Wiley & Sons. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Karthik Visweswariah, Rajakrishnan Rajkumar, Ankur Gandhe, Ananthakrishnan Ramanathan, and Jiri Navratil. 2011. A Word Reordering Model for Improved Machine Translation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 486–496, Edinburgh, Scotland, UK. Association for Computational Linguistics. A Hyperparameters and implementation of the LMs We used the Transformer (Vaswani et al., 2017) LMs implemented in fairseq (Ott et al., 2019). Table 5 shows the hyperparameters of the LMs. The adaptive softmax cutoff (Grave et al., 2017) is only applied to SLM. We split 10K sentences for dev set. The left-to-right and right-to-left CLMs achieved a perplexity of 11.05 and 11.08, respectively. The left-to-right and right-to-left SLMs achieved a perplexity of 28.51 and 28.25, respectively. Note that the difference in the perplexities between CLM and SLM is due to the difference in the vocabulary size. B Details on Section 5.1 (double objects) B.1 Word order for each verb It is considered that different verbs have different preferences in the order of their object. For example, while the verb “例える” (compare) prefers the ACC-DAT order (Example (9)-a), the verb “表す る” (express) prefers the DAT-ACC order (Example (9)-b). (9) a. 人間を 色に 例えた. person-ACC color-DAT compared. (φI compared a person to color.) b. 店主に 敬意を 表した. shopkeeper-DAT respect-ACC expressed. (φI expressed a respect to a shopkeeper.) Table 6 shows the verbs with the top five and the five worst Rv ACC-DAT. B.2 Word order and verb types There are two types of causative-inchoative alternating verbs in Japanese: show-type verbs and passtype verbs. The verb types are determined by the subject of the sentence where the corresponding inchoative verb is used. For the show-type verbs, the DAT argument of a causative sentence becomes the subject in its corresponding inchoative sentence (Example (10)). On the other hand, the ACC argument of a causative sentence becomes the subject in its corresponding inchoative sentence for the pass-type verbs (Example (11)). (10) Causative: 生徒に student-DAT 本を book-ACC 見せた showed. (φI showed a student a book.) Inchoative: 生徒が student-NOM 見た saw. (A student saw φsomething.) (11) Causative: 生徒に student-DAT 本を book-ACC 渡した showed. (φI passed a student a book.) Inchoative: 本が book-NOM 渡った passed. (A book passed to φsomething.) Matsuoka (2003) claims that the show-type verb prefers the DAT-ACC order, while the pass-type verb prefers the ACC-DAT order. Table 7 shows Rv ACC-DAT of the show-type and pass-type verbs. The results show no significant difference in word order trends between show-type and pass-type verbs, which are consistent with that of Sasano and Okumura (2016). B.3 Word order and semantic role of the dative argument As described in Section 5.1, Sasano and Okumura (2016) reported that type-A examples prefer the ACC-DAT order and type-B examples prefer the DAT-ACC order. We used the same examples as Sasano and Okumura (2016) used. We analyzed the difference in the trend of argument order between type-A and type-B examples in each verb. Table 8 shows the verbs, which show a significant change in the argument order between type-A and type-B examples (p < 0.05 in a two-proportion z-test). In the experiment using CLM, 31 verbs show the trend that type-A examples more prefer the ACC-DAT order to type-B, and 17 verbs show contrary trends. In the experiment using SLM, 38 verbs show the trend that type-A examples more prefer the ACC-DAT order to type-B, and 11 verbs show contrary trends. These results show that the number of verbs, where the ACC-DAT order is preferred by type-A examples rather than type-B, is significantly larger (p < 0.05 with a two-sided sign test). This experimental design follows Sasano and Okumura (2016). B.4 Word order and co-occurrence of verb and arguments We evaluate the claim that an argument frequently co-occurring with the verb tends to be placed near the verb. We examine the relationship between each example’s word order trend and ∆NPMI. ∆NPMI is calculated as follows: Fairseq model architecture transformer lm adaptive softmax cut off 50,000, 140,000 Optimizer algorithm Nesterov accelerated gradient (nag) learning rates 1e-5 momentum 0.99 weight decay 0 clip norm 0.1 Learning rate scheduler type cosine warmup updates 16,000 warmup init lrarning rate 1e-7 max learning rate 0.1 min learning rate 1e-9 t mult (factor to grow the length of each period) 2 learning rate period updates 270,000 learning rate shrink 0.75 Training batch size 4608 tokens epochs 3 Table 5: Hyperparameters of the LMs. ACC-DAT is preferred DAT-ACC is preferred Model Verb Rv ACC-DAT S&O Verb Rv ACC-DAT S&O CLM “例える” (compare) 0.993 0.945 “表する” (to table) 0.001 0.013 “換算する” (converted) 0.992 0.935 “澄ます” (put on airs) 0.000 0.017 “押し出す” (extruded) 0.979 0.923 “煮やす” (cook inside) 0.000 0.019 “見立てる” (mitateru) 0.994 0.919 “瞑る” (close the eyes) 0.001 0.021 “変換” (conversion) 0.975 0.898 “竦める” (shrug) 0.002 0.022 SLM “例える” (compare) 0.993 0.926 “喫する” (kissuru) 0.003 0.018 “押し出す” (extruded) 0.979 0.914 “表する” (to table) 0.001 0.018 “監禁” (confinement) 0.885 0.912 “澄ます” (put on airs) 0.000 0.021 “役立てる” (help) 0.933 0.904 “抜かす” (leave out) 0.002 0.022 “帰す” (attributable) 0.838 0.903 “踏み入れる” (step into) 0.002 0.025 Table 6: The verbs with the top five and the worst five Rv ACC-DAT in each LM. The “S&O” columns show the ACC-DAT rate reported in Sasano and Okumura (2016). ∆NPMI = NPMI(nDAT, v) −NPMI(nACC, v) , where NPMI(nc, v) = PMI(nc, v) −log(p(nc, v)) , PMI(nc, v) = log p(nc, v) p(nc)p(v) , where, v is a verb and nc (c ∈DAT, ACC) is its argument. C Data used in Section 5.2, Section 6, and Appendix F First, we randomly collected 50M sentences from 3B web pages. Note that there is no overlap between the collected sentences and the training data of LMs. Next, we obtained the sentences that satisfy the following criteria: • There is a verb (placed at the end of the sentence) with more than two arguments (accompanying the case particle ga, o, ni, or de), where dependency distance between the verb and arguments is one. • Each argument (with its descendant) has fewer than 11 morphemes in the argument. In each example, the verb (satisfying the above condition), its arguments, and the descendants of the arguments are extracted. Example sentences are created by concatenating the verb, its argument, and the descendants of the arguments with preserving their order in the original sentences. In the experiments in Section 5.2, we analyzed the word order trend of the TIM and LOC constituents. We regard the constituent (argument and its descendants) satisfying the following condition as the TIM constituent: • Accompanying the postpositional case particle “に” (DAT). Show-type Pass-type Verb CLM SLM S&O Verb CLM SLM S&O Verb CLM SLM S&O “知らせる” (notify) .718 .754 .522 “戻す” (put back) .366 .395 .771 “漏らす” (leak) .152 .207 .332 “預ける” (deposit) .426 .391 .399 “止める” (lodge) .638 .704 .748 “浮かべる” (float) .387 .406 .255 “見せる” (show) .353 .429 .301 “包む” (wrap) .316 .356 .603 “向ける” (direct) .291 .319 .251 “被せる” (cover) .240 .224 .256 “伝える” (inform) .419 .460 .522 “残す” (leave) .323 .318 .238 “教える” (teach) .297 .293 .235 “乗せる” (place on) .556 .498 .496 “埋める” (bury) .405 .430 .223 “授ける” (give) .101 .084 .186 “届ける” (deliver) .364 .419 .491 “混ぜる” (blend) .336 .276 .200 “浴びせる” (shower) .113 .121 .177 “並べる” (range) .423 .485 .481 “当てる” (hit) .287 .320 .185 “貸す” (lend) .253 .213 .118 “ぶつける” (knock) .333 .344 .436 “掛ける” (hang) .285 .288 .108 “着せる” (dress) .115 .109 .113 “付ける” (attach) .326 .329 .368 “重ねる” (pile) .226 .263 .084 “渡す” (pass) .349 .336 .362 “建てる” (build) .117 .099 .069 “落とす” (drop) .379 .397 .351 Macro Avg. .291 .291 .305 Macro Avg. .347 .364 .361 Table 7: Overlap of the results of LMs and that of Sasano and Okumura (2016) on the relationship of the ACC-DAT rate and verb types. Each score corresponding to a verb denotes its DAT-ACC rate. The “S&O” columns show the ACC-DAT rate reported in Sasano and Okumura (2016). There is no significant difference between the distributions of the DAT-ACC rate in two verb types. • Containing time category morphemes20. We regard the constituent (argument and its descendants) satisfying the following condition as the LOC constituent: • Accompanying the postpositional case particle “で”. • Containing location category morphemes20. 81k examples were created. The averaged number of characters in a sentence was 45.1 characters. The number of occurrences of each case is shown in Table 9. The scrambling process conducted in the experiments (Sections 5.2 and 6) is the same as described in Section 4. D Details on Section 5.3 (adverb) Table 10 shows the correlation between the result of LMs and that of Koizumi and Tamaoka (2006). The column “Canonical” shows the position, which is significantly preferred over the other positions. “A,” “S,” “O,” and “V” denote “adverb,” “subject,” “object,” and “verb,” respectively. The sequence of the alphabets corresponds to their order; for example, “ASOV” indicates the order: adverb < subject < object < verb. Following Koizumi and Tamaoka (2006), we examined the three candidate positions of the adverb: “ASOV,” “SAOV,” and “SOAV.” The score r denotes the Pearson correlation coefficient of the preferred ranks of each adverb position to that reported in Koizumi and Tamaoka (2006). E Details on Section 6.2 (topicalization) We topicalized a specific constituent by moving the constituent to the beginning of the sentence and 20identified by JUMAN 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 rate of the examples where ACC is more likely to be topicalized ACC-DAT rate obtained from LMs CLM SLM Figure 4: Correlation between the ACC-DAT rate and the rate that the ACC argument is more likely to be topicalized than DAT for each verb. Each plot corresponds to the result of each verb. adding the adverbial particle “は” (TOP). Strictly speaking, conjunctions are preferentially placed at the beginning of the sentence rather than topicalized constituents. The examples we used do not include the conjunctions at the beginning of the sentence. The adverbial particle was added according to the rules shown in Table 12. Claim (i): Table 11 shows the ta|b for each pair of the case a (row) and b (column). The results show that the more anterior the case a is and the more posterior the case b is in the canonical word order, the larger the ta|b is. Claim (ii): Figure 4 shows that the more a verb prefers the ACC-DAT order, the more ACC case tends to be topicalized. The X-axis denotes the ACC-DAT rate of the verb, and the Y-axis denotes the trend that ACC is more likely to be topicalized than DAT. Model Verbs whose type-A examples prefer the ACC-DAT order Verbs whose type-B examples prefer the ACC-DAT order CLM “預ける” (deposit), “置く” (place), “持つ” (have), “入 れる” (put in), “納める” (pay), “郵送” (mail), “供 給” (supply), “出す” (put out), “運ぶ” (transport), “流 す” (shed), “掛ける” (hang), “飾る” (decorate), “広 げる” (spread), “移す” (transfer), “残す” (leave), “配 送” (deliver), “送る” (send), “投げる” (throw), “送 付” (send), “返却” (return), “届ける” (send), “戻す” (return), “着ける” (wear), “上げる” (increase), “落と す” (drop), “載せる” (publish), “変更” (change), “納 入” (deliver), “卸す” (unload), “掲載” (publish), “通 す” (get X through) “配布” (distribute), “渡す” (pass), “プレゼント” (present), “合わせる” (match), “見せる” (show), “提 供” (offer), “与える” (give), “当てる” (hit), “回す” (turn), “追加” (add), “貸す” (lend), “展示” (exhibit), “据える” (lay), “依頼” (request), “挿入” (insert), “纏 める” (collect), “請求” (claim) SLM “預ける” (deposit), “置く” (place), “頼む” (ask), “入 れる” (put in), “納める” (pay), “郵送” (mail), “出 す” (put out), “運ぶ” (transport), “流す” (shed), “掛け る” (hang), “広げる” (spread), “移す” (transfer), “残 す” (leave), “リクエスト” (request), “配送” (deliver), “送る” (send), “投げる” (throw), “送付” (send), “求 める” (ask), “提出” (submit), “届ける” (deliver), “要 求” (request), “戻す” (return), “寄付” (donate), “寄贈” (donation), “着ける” (wear), “乗せる” (place), “上げ る” (increase), “落とす” (drop), “貼る” (stick), “分け る” (divide), “ばらまく” (scatter), “はめる” (fit), “支 払う” (pay), “配達” (deliver), “卸す” (unload), “纏め る” (collect), “通す” (get X through) “プレゼント” (present), “持つ” (have), “合わせる” (match), “見せる” (show), “向ける” (point), “提供” (offer), “装備” (equip), “追加” (add), “展示” (exhibit), “据える” (lay), “採用” (adopt) Table 8: The verbs which show a significant change in the argument order trend depending on the semantic role of its dative argument. The scores denote the DAT-ACC rate. Type-A corresponds to the examples with an inanimate goal dative argument. Type-B corresponds to the examples with an animate processor dative argument. The number of type-A verbs is significantly larger than that of type-B verbs. Case #occurrence TIM 11,780 LOC 15,544 NOM 55,230 DAT 56,243 ACC 57,823 Table 9: The number of occurrence for each case in the data used in Section 5.2, Section 6, and Appendix F F Additional analysis: adverbial particles and their effect for word order The adverbial particles We can add supplementary information with adverbial particles. The adverbial particle “は” (TOP) is the typical one. In Example (12), the adverbial particle “も” (also), instead of “を” (ACC), implies that there is another thing the teacher gave to the student (“a teacher gave not only φ but also a book to a student.”). (12) ::::: 生徒に 本をも あげた. student-DAT also book-ACC gave. Experiments A constituent accompanying the adverbial particle “は” (TOP) is moved to the beginning of the sentence (Noda, 1996). However, it is not clear whether other adverbial particles also have the above property. In this section, we evaluate the following claim: a different adverbial particle shows different degrees of the effects for the word order. For each example s ∈S collected from Japanese Wikipedia, we replaced the postpositional particle with a specific adverbial particle, following the rules in Table 12. We used four typical adverbial particles: “は” (TOP), “こそ” (emphasis), “も” (also), and “だけ” (only). Two variants of word order, Non-moved, and Moved were created for each example. Example (13) is an example focusing on the ACC case with the particle “も” (also). (13) Original: :::: 生徒に student-DAT 本を book-ACC あげた. gave. Non-moved: :::: 生徒に student-DAT 本をも also book-ACC あげた. gave. Moved: 本をも also book-ACC :::: 生徒に student-DAT 本を book-ACC あげた. gave. We compared the generation probabilities between the Non-moved and Moved orders. We calculated the rate that the Moved order is preferred in each combination of the case types and the adverbial particles. Model MODAL TIME MANNER RESULTIVE Canonical r Canonical r Canonical r Canonical r CLM ASOV 1. ASOV, SAOV 1. SAOV, SOAV 0.5 SAOV, SOAV 1. SLM ASOV 1. SAOV 0.5 SAOV, SOAV 1. SOAV 0.5 Koizumi(2016) ASOV ASOV, SAOV SAOV, SOAV SAOV, SOAV Table 10: Overlap of the preference of the adverb position of LMs and that of Koizumi and Tamaoka (2006). The column “Canonical” shows the adverb position, which is significantly preferred over the other positions. The score r denotes the Pearson correlation coefficient of the preferred rank of three possible adverb positions obtained from LMs to that of Koizumi and Tamaoka (2006). TIM PLC NOM DAT NOM TIM .490 .329 .720 .698 PLC .510 .484 .748 .742 NOM .671 .516 .804 .852 DAT .280 .252 .196 .536 NOM .302 .258 .148 .464 (a) CLM TIM PLC NOM DAT NOM TIM .538 .402 .676 .711 PLC .462 .553 .757 .749 NOM .598 .447 .774 .834 DAT .324 .243 .226 .552 NOM .289 .251 .166 .448 (b) SLM Table 11: The scores denote ta|b. The row corresponds to the case a, the column corresponds to b. Higher ta|b suggests the trend that the case a is more likely to be topicalized than the case b. Results The results are shown in Table 13. When using “は” (TOP) as a postpositional particle, the Moved order is preferred to Non-moved, which is consistent with the well-known characteristics of topicalization described in Section 6. In addition, the degree of preference between Moved and Nonmoved differs depending on the adverbial particles. Furthermore, the results indicate that the anterior case in the canonical word order is likely to move to the beginning of the sentence by the effect of the adverbial particle. Additional experiments and results We analyzed the trend of double object order when a specific case accompanies an adverbial particle. Figure 5 shows the result when the ACC argument accompanies an adverbial particle, and Figure 6 shows the result when the DAT argument accompanies an adverbial particle. The left parts of these figures show the result of CLM, and the right part of these figures shows the result of SLM. The Xaxis denotes the ACC-DAT / DAT-ACC rate of the verb when both of the arguments do not accomOriginal case particle After the adverbial particle “は” (TOP) is added が(TOP) がは に(TIM, DAT) には を(ACC) をは で(LOC) では Table 12: Rules of deleting the original case particle when the adverbial particle “は” (TOP) is added. This rule is also applied when adding the other adverbial particles (Appendix F). pany an adverbial particle. The Y-axis denotes the ACC-DAT / DAT-ACC rate when a specific case accompanies an adverbial particle. The results show that the case accompanying an adverbial particle is likely to be placed near the beginning of the sentence. In addition, the degree of the above trend depends on the adverbial particles. These results suggest that some adverbial particles have a effect for word order. Model Toritate particle TIM LOC NOM DAT ACC Avg. CLM “は” (TOP) .715 .777 .675 .624 .623 .683 “こそ” (emphasis) .492 .423 .521 .313 .486 .447 “も” (also) .560 .557 .458 .343 .271 .438 “だけ” (only) .385 .340 .312 .227 .184 .331 Avg. .538 .525 .544 .377 .391 SLM “は” (TOP) .667 .751 .635 .565 .580 .640 “こそ” (emphasis) .567 .596 .574 .398 .462 .519 “も” (also) .511 .531 .457 .292 .259 .410 “だけ” (only) .334 .309 .285 .172 .126 .303 Avg. .520 .547 .560 .357 .357 Table 13: The scores denote that the Moved order is preferred over the Non-moved order when the corresponding case (column) accompanies the corresponding particle (row). The trend is different depending on the case and particle. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 ACC-DAT rate (CLM) ACC-DAT rate (CLM) は(TOP) こそ(emphasis) も(also) だけ(only) ACCadv moves closer to the verb ACCadv moves to the beginning of the sentence ACCadv-DAT rate (CLM) ACC-DAT rate (CLM) (a) CLM 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 ACC-DAT rate (SLM) ACC-DAT rate (SLM) は(TOP) こそ(emphasis) も(also) だけ(only) ACCadv moves closer to the verb ACCadv moves to the beginning of the sentence ACCadv-DAT rate (SLM) ACC-DAT rate (SLM) (b) SLM Figure 5: Change of the ACC-DAT order when the ACC argument accompanies an adverbial particle. These results indicate that the ACC argument with an adverbial particle (ACCadv) is more likely to be placed before the DAT argument. In addition, this trend differs for each particle. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 DAT-ACC rate (CLM) DAT-ACC rate (CLM) は(TOP) こそ(emphasis) も(also) だけ(only) DATadv moves closer to the verb DATadv moves to the beginning of the sentence DATadv-ACC rate (CLM) DAT-ACC rate (CLM) (a) CLM Research Seminar 19 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 ACC-DAT rate (SLM) ACC-DAT rate (SLM) は(TOP) こそ(emphasis) も(also) だけ(only) DATadv moves closer to the verb DATadv moves to the beginning of the sentence DATadv-ACC rate (SLM) DAT-ACC rate (SLM) (b) SLM Figure 6: Change of the DAT-ACC order when the DAT argument accompanies an adverbial particle. These results indicate that the DAT argument with an adverbial particle (DATadv) is more likely to be placed before the ACC argument. In addition, this trend differs for each particle.
2020
47
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5276–5289 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5276 Balancing Objectives in Counseling Conversations: Advancing Forwards or Looking Backwards Justine Zhang Cornell University [email protected] Cristian Danescu-Niculescu-Mizil Cornell University [email protected] Abstract Throughout a conversation, participants make choices that can orient the flow of the interaction. Such choices are particularly salient in the consequential domain of crisis counseling, where a difficulty for counselors is balancing between two key objectives: advancing the conversation towards a resolution, and empathetically addressing the crisis situation. In this work, we develop an unsupervised methodology to quantify how counselors manage this balance. Our main intuition is that if an utterance can only receive a narrow range of appropriate replies, then its likely aim is to advance the conversation forwards, towards a target within that range. Likewise, an utterance that can only appropriately follow a narrow range of possible utterances is likely aimed backwards at addressing a specific situation within that range. By applying this intuition, we can map each utterance to a continuous orientation axis that captures the degree to which it is intended to direct the flow of the conversation forwards or backwards. This unsupervised method allows us to characterize counselor behaviors in a large dataset of crisis counseling conversations, where we show that known counseling strategies intuitively align with this axis. We also illustrate how our measure can be indicative of a conversation’s progress, as well as its effectiveness. 1 Introduction Participants in a conversation constantly shape the flow of the interaction through their choices. In psychological crisis counseling conversations, where counselors support individuals in mental distress, these choices arise in uniquely complex and high-stakes circumstances, and are reflected in rich conversational dynamics (Sacks, 1992). As such, counseling is a valuable context for computationally modeling conversational behavior (Atkins when i tell my mom about the bullies she just ignores me Have you confided to anyone else about this? yeah there’s my sister… she just tells me to get over it That sounds so frustrating, you deserve to be listened to. t0 t1 t2 c1 c2 Figure 1: Two possible exchanges in a counseling conversation, illustrating key objectives that a counselor must balance: c1 aims to advance the conversation towards a discussion of possible confidants; c2 aims to address the emotion underlying the preceding utterance. et al., 2014; Althoff et al., 2016; P´erez-Rosas et al., 2018; Zhang et al., 2019). Modeling the conversational choices of counselors in this endeavor is an important step towards better supporting them. Counselors are driven by several objectives that serve the broader goal of helping the individual in distress; two key objectives are exemplified in Figure 1.1 The counselor must advance a conversation towards a calmer state where the individual is better equipped to cope with their situation (Mishara et al., 2007; Sandoval et al., 2009): in c1, the counselor prompts the individual to brainstorm options for social support. The counselor must also empathetically address what was already said, “coming to an empathic understanding” of the individual (Rogers, 1957; Hill and Nakayama, 2000): in c2, the counselor validates feelings that the individual has just shared. Balancing both objectives is often challenging, and overshooting in one direction can be detrimental to the conversation. A counselor who leans too much on advancing forwards could rush the conversation at the expense of establishing an empathetic connection; a counselor who leans too much backwards, on addressing what was already said, may fail to make any progress. 1These examples are derived from material used to train counselors in our particular setting, detailed in Section 2. 5277 In this work, we develop a method to examine counselor behaviors as they relate to this balancing challenge. We quantify the relative extent to which an utterance is aimed at advancing the conversation, versus addressing existing content. We thus map each utterance onto a continuous backwardsforwards axis which models the balance of these objectives, and refer to an utterance’s position on this axis as its orientation. At an intuitive level, our approach considers the range of content that is expected to follow or precede a particular utterance. For an utterance like c1 that aims to advance the conversation towards an intended target, we would expect a narrow range of appropriate replies, concentrated around that target (e.g., suggestions of possible confidants). We would likewise expect an utterance like c2 that aims to address a previously-discussed situation to only be an appropriate reply for a narrow range of possible utterances, concentrated around that specific type of situation (e.g., disclosures of negative feelings). Starting from this intuition, we develop an unsupervised method to quantify and compare these expected forwards and backwards ranges for any utterance, yielding our orientation measure. Using this measure, we characterize counselor behaviors in a large collection of text-message conversations from a crisis counseling service, which we accessed in collaboration with the service and with the participants’ consent. We show how orientation meaningfully distinguishes between key conversational strategies that counselors are taught during their training. We also show that our measure tracks a conversation’s progress and can signal its effectiveness, highlighting the importance of balancing the advancing and addressing objectives, and laying the basis for future inquiries in establishing potential causal effects. In summary, we develop an unsupervised methodology that captures how counselors balance the conversational objectives of advancing and addressing (Section 4), apply and validate it in a large dataset of counseling conversations (Section 5), and use it to investigate the relation between a counselor’s conversational behavior and their effectiveness (Section 5.4). While our method is motivated by a salient challenge in counseling, we expect similar balancing problems to recur in other conversational settings where participants must carefully direct the flow of the interaction, such as court trials and debates (Section 6). 2 Setting: Counseling Conversations We develop our method in the context of Crisis Text Line, a crisis counseling platform which provides a free 24/7 service for anyone in mental crisis—henceforth texters—to have one-onone conversations via text message with affiliated counselors. We accessed a version of this collection, with over 1.5 million conversations, in collaboration with the platform and with the consent of the participants. The data was scrubbed of personally identifiable information by the platform.2 These conversations are quite substantive, averaging 25 messages with 29 and 24 words per counselor and texter message, respectively. In each conversation, a crisis counselor’s highlevel goal is to guide the texter towards a calmer mental state. In service of this goal, all counselors first complete 30 hours of training provided by the platform, which draws on past literature in counseling to recommend best practices and conversational strategies. The first author also completed the training to gain familiarity with the domain. While the platform offers guidance to counselors, their task is inevitably open-ended, given the emotional complexity of crisis situations. As such, the counselors are motivated by an explicit goal that structures the interaction, but they face a challenging flexibility in choosing how to act. 3 Background and Related Work We now describe the conversational challenge of balancing between advancing the conversation forwards or addressing what was previously said. Our description of the challenge and our computational approach to studying it are informed by literature in counseling, on the platform’s training material and on informal interviews with its staff. A conversational balance. A crisis counselor must fulfill multiple objectives in their broader goal of helping a texter. One objective is guiding the texter through their initial distress to a calmer mental state (Mishara et al., 2007; Sandoval et al., 2009), as in Figure 1, c1. Various strategies that aim to facilitate this advancing process are taught to counselors during training: for instance, a counselor may prompt a texter to identify a goal or cop2The data can be accessed by applying at https:// www.crisistextline.org/data-philosophy/ data-fellows/. The extensive ethical and privacy considerations, and policies accordingly implemented by the platform, are detailed in Pisani et al. (2019). 5278 ing mechanism (Rollnick and Miller, 1995). As such, they attempt to move the conversation forwards, towards its eventual resolution. The counselor must also engage with the texter’s concerns (Rogers, 1957; Hill and Nakayama, 2000), as in c2, via strategies that empathetically address what the texter has already shared (Rollnick and Miller, 1995; Weger et al., 2010; Bodie et al., 2015). For instance, counselors are taught to reflect, i.e., reframe a texter’s previous message to convey understanding, or draw on what was said to affirm the texter’s positive qualities. In doing so, the counselor looks backwards in the conversation. Past work has posited the benefits of mixing between strategies that aim at either objective (Mishara et al., 2007). However, as the training acknowledges, striking this balance is challenging. Overzealously seeking to advance could cut short the process of establishing an empathetic connection. Conversely, focusing on the conversation’s past may not help with eventual problem solving (Bodie et al., 2015), and risks stalling it. A texter may start to counterproductively rehash or ruminate on their concerns (Nolen-Hoeksema et al., 2008; Jones et al., 2009); indeed, prior psychological work has highlighted the thin line between productive reflection and rumination (Rose et al., 2007; Landphair and Preddy, 2012). Orientation. To examine this balancing dynamic, we model the choices that counselors make at each turn in a conversation. Our approach is to derive a continuous axis spanned by advancing and addressing. We refer to an utterance’s position on this axis, representing the relative extent to which it aims at either objective, as its orientation Ω. We interpret a forwards-oriented utterance with positive Ωas aiming to advance the conversation, and a backwards-oriented utterance with negative Ωas aiming to address what was previously brought up. In the middle, the axis reflects the graded way in which a counselor can balance between aims—for instance, using something the texter has previously said to help motivate a problem-solving strategy. Related characterizations. While we develop orientation to model a dynamic in counseling, we view it as a complement to other characterizations of conversational behaviors in varied settings. Prior work has similarly considered how utterances relate to the preceding and subsequent discourse (Webber, 2001). Frameworks like centering theory (Grosz et al., 1995) aim at identifying referenced entities, while we aim to more abstractly model interlocutor choices. Past work has also examined how interlocutors mediate a conversation’s trajectory through taking or ceding control (Walker and Whittaker, 1990) or shifting topic (Nguyen et al., 2014); Althoff et al. (2016) considers the rate at which counselors in our setting advance across stages of a conversation. While these actions can be construed as forwards-oriented, we focus more on the interplay between forwards- and backwards-oriented actions. A counselor’s objectives may also cut across these concepts: for instance, the training stresses the need for empathetic reflecting across all stages and topics. Orientation also complements prior work on dialogue acts, which consider various roles that utterances play in discourse (Mann and Thompson, 1988; Core and Allen, 1997; Ritter et al., 2010; Bracewell et al., 2012; Rosenthal and McKeown, 2015; Prabhakaran et al., 2018; Wang et al., 2019). In counseling settings, such approaches have highlighted strategies like reflection and questionasking (Houck, 2008; Gaume et al., 2010; Atkins et al., 2014; Can et al., 2015; Tanana et al., 2016; P´erez-Rosas et al., 2017, 2018; Park et al., 2019; Lee et al., 2019; Cao et al., 2019). Instead of modeling a particular taxonomy of actions, we model how counselors balance among the underlying objectives; we later relate orientation to these strategies (Section 5). Most of these approaches use annotations or predefined labeling schemes, while our method is unsupervised. 4 Measuring Orientation We now describe our method to measure orientation, discussing our approach at a high level before elaborating on the particular operationalization. The code implementing our approach is distributed as part of the ConvoKit library (Chang et al., 2020), at http://convokit.cornell.edu. 4.1 High-Level Sketch Orientation compares the extent to which an utterance aims to advance the conversation forwards with the extent to which it looks backwards. Thus, we must somehow quantify how the utterance relates to the subsequent and preceding interaction. Naive attempt: direct comparison. As a natural starting point, we may opt for a similarity-based approach: an utterance that aims to address its preceding utterance, or predecessor, should be similar 5279 sounds frustrating confided to anyone ignores judges laughs doesn’t because just problem ignore nothing sister friend counselor expected predecessors: expected replies: Figure 2: Words representative of replies and predecessors for utterances with two example phrasings, as observed in training data. Top row: observed replies to utterances with w1 span a narrower range than observed predecessors (relative sizes of red and blue circles); w1 thus has smaller forwards-range −−→ σw1 than backwardsrange ←−− σw1 (i.e., it is forwards-oriented, Ωw1 > 0). Bottom row: observed predecessors to utterances with w2 span a narrower range than replies; w2 thus has smaller ←−− σw2 than −−→ σw2 (i.e., it is backwards-oriented Ωw2 < 0). to it; an utterance that aims to advance the conversation should be similar to the reply that it prompts. In practice, having to make these direct comparisons is limiting: an automated system could not characterize an utterance in an ongoing conversation by comparing it to a reply it has yet to receive. This approach also has important conceptual faults. First, addressing preceding content in a conversation is different from recapitulating it. For instance, counselors are instructed to reframe rather than outright restate a texter’s message, as in Figure 1, c2. Likewise, counselors need not advance the conversation by declaring something for the texter to simply repeat; rather than giving specific recommendations, counselors are instructed to prompt the texters to come up with coping strategies on their own, as in c1. Further, texters are not bound to the relatively formal linguistic style counselors must maintain, resulting in clear lexical differences. Measuring orientation is hence a distinct task from measuring similarity. Second, an utterance’s intent to advance need not actually be realized. A counselor’s cues may be rebuffed or misunderstood (Schegloff, 1987; Thomas, 1983): a texter could respond to c1 by continuing to articulate their problem with t2. Likewise, a counselor may intend to address a texter’s concerns but misinterpret them. To model the balance in objectives that a counselor is aiming for, our characterization of an utterance cannot be contingent on its actual reply and predecessor. Our approach: characterizing expectations. We instead consider the range of replies we might expect an utterance to receive, or the range of predecessors that it might follow. Intuitively, an utterance with a narrow range of appropriate replies aims to direct the conversation towards a particular target, moreso than an utterance whose appropriate replies span a broader range.3 Likewise, an utterance that is an appropriate reply to only a narrow range of possible predecessors aims to address a particular situation. We draw on unlabeled data of past conversations to form our expectations of these ranges, and build up our characterizations of utterances from their constituent phrasings, e.g., words or dependency-parse arcs. The intuition for our approach is sketched in Figure 2. From our data, we observe that utterances containing confided to anyone generally elicited replies about potential confidants (e.g., sister, friend), while the replies that followed utterances with sounds frustrating span a broader, less well-defined range. As such, we have a stronger expectation of what a reply prompted by a new utterance with confided to anyone might contain than a reply to a new utterance with sounds frustrating. More generally, for each phrasing w, we quantify the strength of our expectations of its potential replies by measuring the range spanned by the replies it has already received in the data, which we refer to as its forwards-range −→ σw. We would say that confided to anyone has a smaller −→ σw than sounds frustrating, meaning that its observed replies were more narrowly concentrated; this is represented as the relative size of the red regions on the right side of Figure 2. In the other direction, we observe in our data that sounds frustrating generally followed descriptions of frustrating situations (e.g., ignores, judges), while the range of predecessors to confided to anyone is broader. We thus have a stronger expectation of the types of situations that new utterances with sounds frustrating would respond to, compared to new utterances with confided to anyone. For a phrasing w, we quantify the strength of our expectations of its potential predecessors by measuring its backwards-range ←− σw, spanned by the predecessors we’ve observed. As such, sounds frustrating has a smaller ←− σw than confided to anyone, corresponding to the relative size of the blue regions on the left side of Figure 2. 3Consider leading versus open-ended questions. When people ask leading questions, they intend to direct the interaction towards specific answers they have in mind; when people ask open-ended questions, they are more open to what answers they receive and where the interaction is headed. 5280 The relative strengths of our expectations in either direction then indicate the balance of objectives. If we have a stronger expectation of w’s replies than of its predecessors—i.e., smaller −→ σw than ←− σw—we would infer that utterances with w aim to advance the conversation towards a targeted reply more than they aim to address a particular situation. Conversely, if we have stronger expectations of w’s predecessors—i.e., smaller ←− σw— we would infer that utterances with w aim to address the preceding interaction, rather than trying to drive the conversation towards some target. We thus measure orientation by comparing a phrasing’s forwards- and backwards-range. The expectation-based approach allows us to circumvent the shortcomings of a direct comparison; we may interpret it as modeling a counselor’s intent in advancing and addressing at each utterance (Moore and Paris, 1993; Zhang et al., 2017). 4.2 Operationalization We now detail the steps of our method, which are outlined in Figure 3. Formally, our input consists of a set of utterances from counselors {ci}, and a set of utterances from texters {ti}, which we’ve observed in a dataset of conversations (Figure 3A). We note that each texter utterance can be a reply to, or a predecessor of, a counselor utterance (or both). We use this unlabeled “training data” to measure the forwards-range −→ σw, the backwards-range ←− σw (Figures 3B-D), and hence the orientation Ωw of each phrasing w used by counselors (Figure 3E). We then aggregate to an utterance-level measure. For each counselor phrasing w, let −→ Tw denote the subset of texter utterances which are replies to counselor utterances containing w (Figure 3A). As described above, the forwards-range −→ σw quantifies the spread among elements of −→ Tw; we measure this by deriving vector representations of these utterances −→ Uw (Figure 3B, detailed below), and then comparing each vector in −→ Uw to a central reference point −→ uw (Figures 3C and 3D).4 Likewise, ←− σw quantifies the similarity among elements of ←− Tw, the set of predecessors to counselor utterances with w; we compute ←− σw by comparing each corresponding vector in ←− Uw to a central point ←− uw. 4Using a central reference point to calculate the forwardsrange, as opposed to directly computing pairwise similarities among replies in −→ Uw, allows us to account for the context of w in the utterances that prompted these replies (via tf-idf weighting, as subsequently discussed). A. Input: observed texter replies to counselor utterances B. Derive vector representations of texter utterances C. Derive central points D. Compute forwards-range E. Compute orientation: confided to anyone yeah there’s my sister i told my friend… the school counselor… ci: have you confided to anyone about this? cj: I wonder if you’ve confided to anyone… ck: have you confided to anyone recently? … … … … reply reply reply reply reply SVD where is the cosine distance between and where is the tf-idf weight of in where is a tf-idf reweighted term-document matrix of all texter utterances low-dimensional representations of texter utterances in example Figure 3: Outline of steps to compute orientation Ωw of phrasing w, as described in Section 4.2. Panels A-D show the procedure for computing forwards-range −→ σw; the procedure for backwards-range ←− σw is similar. Deriving vector representations (Figure 3B). To obtain vectors for each texter utterance, we construct X, a tf-idf reweighted term-document matrix where rows represent texter utterances and columns represent phrasings used by texters. To ensure that we go beyond lexical matches and capture conceptual classes (e.g., possible confidants, frustrating situations), we use singular value decomposition to get X ≈USV T . Each row of U is a vector representation ui of utterance ti in the induced low-dimensional space T. −→ Uw then consists of the corresponding subset of rows of U (highlighted in Figure 3B). Deriving central points (Figure 3C). For each w, we take the central point −→ uw to be a weighted average of vectors in −→ Uw. Intuitively, a texter utterance ti with vector ui should have a larger contribution to −→ uw if w is more prominent in the counselor utterance ci that preceded it. We let wi w denote the normalized tf-idf weight of w in ci, and use wi w as the weight of the corresponding vector ui. To properly map the resultant weighted sum ∑wi wui into T, we divide each dimension by the corresponding singular value in S. As such, if ww is a vector of weights wi w, we can calculate the central point −→ uw 5281 of −→ Uw as −→ uw = wT w −→ UwS−1. In the other direction, we likewise compute ←− uw = wT w ←− UwS−1. Forwards- and backwards-ranges (Figure 3D). We take the forwards-range −→ σw of w to be the average cosine distance from each vector in −→ Uw to the center point −→ uw. Likewise, we take ←− σw as the average distance from each vector in ←− Uw to ←− uw. Phrasing-level orientation (Figure 3E). Importantly, since we’ve computed the forwards- and backwards-ranges −→ σw and ←− σw using distances in the same space T, their values are comparable. We then compute the orientation of w as their difference: Ωw = ←− σw −−→ σw. Utterance-level orientation. To compute the orientation of an utterance ci, we first compute the orientation of each sentence in ci as the tf-idf weighted average Ωw of its constitutent phrasings. Note that a multi-sentence utterance can orient in both directions—e.g., a counselor could concatenate c2 and c1 from Figure 1 in a single utterance, addressing the texter’s previous utterance before moving ahead. To model this heterogeneity, we consider both the minimum and maximum sentence-orientations in an utterance: Ωmin captures the extent to which the utterance looks backwards, while Ωmax captures the extent to which it aims to advance forwards. 5 Application to Counseling Data We apply our method to characterize messages from crisis counselors on the platform. We compute the orientations of the phrasings they use, represented as dependency-parse arcs. We use a training set of 351,935 texter and counselor messages each, from a random sample of conversations omitted in subsequent analyses.5 Table 1 shows representative phrasings and sentences of different orientations.6 Around two-thirds of phrasings and sentences have Ω<0, echoing the importance of addressing the texter’s previous remarks. In what follows, we analyze counselor behaviors in terms of orientation, and illustrate how the measure can be useful for examining conversations. We start by validating our method via two complementary approaches. In a subset of sentences manually annotated with the counseling 5Further implementation details are listed in the appendix. 6Example sentences are derived from real sentences in the data, and modified for readability. The examples were chosen to reflect common situations in the data, and were vetted by the platform to ensure the privacy of counselors and texters. strategies they exhibit, we show that orientation meaningfully reflects these strategies (Section 5.1). At a larger scale, we show that the orientation of utterances over the course of a conversation aligns with domain knowledge about counseling conversation structure (Section 5.2). We also find that other measures for characterizing utterances are not as rich as orientation in capturing counseling strategies and conversation structure (Section 5.3). Finally, we show that a counselor’s orientation in a conversation is tied to indicators of their effectiveness in helping the texter (Section 5.4). 5.1 Validation: Counseling Strategies Even though it is computed without the guidance of any annotations, we expect orientation to meaningfully reflect strategies for advancing or addressing that crisis counselors are taught. The first author hand-labeled 400 randomly-selected sentences with a set of pre-defined strategies derived from techniques highlighted in the training material. We note example sentences in Table 1 which exemplify each strategy, and provide more extensive descriptions in the appendix. Figure 4A depicts the distributions of orientations across each label, sorted from most backwards- to most forwards-oriented. We find that the relative orientation of different strategies corroborates their intent as described in the literature. Statements reflecting or affirming what the texter has said to check understanding or convey empathy (characterized by phrasings like totally normal) tend to be backwards-oriented; statements prompting the texter to advance towards problem-solving (e.g., [what] has helped) are more forwards-oriented. Exploratory queries for more information on what the texter has already said (e.g., happened to make) tend to have middling orientation (around 0). The standard deviation of orientations over messages within most of the labels is significantly lower than across labels (bootstrapped p < .05, solid circles), showing that orientation yields interpretable groupings of messages in terms of important counseling strategies. The measure also offers complementary information. For instance, we find sentences that aren’t accounted for by pre-defined labels, but still map to interpretable orientations, such as backwardsoriented examples assuaging texter concerns about the platform being a safe space to self-disclose. 5282 Orientation Example phrasings Example sentences Backwardsoriented (bottom 25%) sounds frustrating, totally normal, great ways, on [your] plate, be overwhelming, sometimes feel frightening, on top [of] been struggling, feeling alone You have a lot of things on your plate, between family and financial problems. [reflection] It’s totally normal to feel lonely when you have no one to talk to. [reflection] Those are great ways to improve the relationship. [affirmation] (middle 25%) happened [to] make, mean [when you] say, is that, you recognized, source of the moment, are brave Has anything happened to make you anxious? [exploration] It’s good you recognized the need to reach out. [affirmation] Can you tell me what you mean when you say you’re giving up? [risk assessment] Forwardsoriented (top 25%) plan for, confided [to] anyone, usually do, has helped, been talking, best support have considered, any activities Can you think of anything that has helped when you’ve been stressed before? [problem solving] I want to be the best support for you today. [problem solving] We’ve been talking for a while now, how do you feel? [closing] Table 1: Example phrasings and sentences with labeled strategies from crisis counselors’ messages, at varying orientations: backwards-oriented (from the bottom 25% of Ω), middle, and forwards-oriented (from top 25%). A B Figure 4: Validating the orientation measure and comparing to alternatives. A Leftmost: Mean Ωper counseling strategy label (vertical line denotes Ω=0). Next three: same for other measures. B: Mean Ωmax and Ωmin per segment for risk-assessed (orange) and nonrisk-assessed (black) conversations. Both: Solid circles indicate statistically significant differences (Wilcoxon p<0.01, comparing within-counselor). 5.2 Validation: Conversation Structure We also show that orientation tracks with the structure of crisis counseling conversations as described in the training material. Following Althoff et al. (2016), we divide each conversation with at least ten counselor messages into five equal-sized segments and average Ωmax and Ωmin over messages in each segment. Figure 4B (black lines) shows that over the course of a conversation, messages tend to get more forwards-oriented (higher Ωmax and Ωmin). This matches a standard conversation structure taught in the training: addressing the texter’s existing problems before advancing towards problemsolving. While this correspondence holds in aggregate, orientation also captures complementary information to advancement through stages—e.g., while problem-solving, counselors may still address and affirm a texter’s ideas (Table 1, row 3). We also consider a subset of conversations where we expect a different trajectory: for potentially suicidal texters, the training directs counselors to immediately start a process of risk assessment in which actively prompting the texter to disclose their level of suicidal ideation takes precedence over other objectives. As such, we expect more forwards-oriented messages at the starts of conversations involving such texters. Indeed, in the 30% of conversations which are riskassessed, we find significantly larger Ωmax in the first segment (Figure 4B, orange line; Wilcoxon p < 0.01 in the first stage, comparing withincounselor). Ωmin is smaller at each stage, suggesting that counselors balance actively prompting these critical disclosures with addressing them. 5.3 Alternative Operationalizations We compare orientation to other methods for capturing a counselor’s balancing decisions: Naive distance. We conside the naive approach in Section 4, taking a difference in cosine distances between tf-idf representations of a message and its reply, and a message and its predecessor. Backwards-range. We consider just the message’s backwards-range. For each sentence we take tf-idf weighted averages of component ←− σw and take minimum ←− σ for each message.7 7We get qualitatively similar results with maximum −→ σ . 5283 Question-asking. We consider whether the message has a question. This was used in Walker and Whittaker (1990) as a signal of taking control, which could be construed as forwards-oriented. Within-label standard deviations of each alternative measure are generally not significantly smaller than across-label (Figure 4A), indicating that these measures are poorer reflections of the counseling strategies. Label rankings under the measures are also arguably less intuitive. For instance, reflection statements have relatively large (naive) cosine distance from their predecessors. Indeed, the training encourages counselors to process rather than simply restate the texter’s words. These measures also track with the conversation’s progress differently—notably, none of them distinguish the initial dynamics of risk-assessed conversations as reflected in Ωmax (see appendix). 5.4 Relation to Conversational Effectiveness Past work on counseling has extensively discussed the virtues of addressing a client’s situation (Rogers, 1957; Hill and Nakayama, 2000). Some studies also suggest that accounting for both addressing and advancing is important, such that effective counselors manage to mix backwards- and forwards-oriented actions (Mishara et al., 2007). We use orientation to examine how these strategies are tied to conversational effectiveness in crisis counseling at a larger scale, using our measures to provide a unified view of advancing and addressing. To derive simple conversation-level measures, we average Ωmax and Ωmin over each counselor message in a conversation. Adjudicating counseling conversation quality is known to be difficult (Tracey et al., 2014). As a starting point, we relate our conversation-level measures to two complementary indicators of a conversation’s effectiveness:8 Perceived helpfulness. We consider responses from a post-conversation survey asking the texter whether the conversation was helpful, following Althoff et al. (2016). Out of the 26% of conversations with a response, 89% were rated as helpful.9 Conversation length. We consider a conversation’s length as a simple indicator of the pace of its progress: short conversations may rush the texter, while prolonged conversations could suggest 8We perform all subsequent analyses on a subset of 234,433 conversations, detailed in the appendix. 9We note that this indicator is limited by important factors such as the selection bias in respondents. helpful unhelpful more forwards-oriented more backwards-oriented more forwardsoriented more backwardsoriented A B C Figure 5: Relation between orientation and conversational effectiveness. A: Mean Ωmin and Ωmax in conversations rated as helpful (green) or unhelpful (grey) (macroaveraged per conversation). Differences in both measures are significant (Mann Whitney U test p < 0.001). B, C: Mean Ωmin and Ωmax of conversations with varying lengths (in # of messages). Both plots: Error bars show 95% bootstrapped confidence intervals. stalling and could even demoralize the counselor (Landphair and Preddy, 2012).10 Figure 5A compares Ωmin and Ωmax in conversations rated as helpful and unhelpful by texters. Both measures are significantly smaller in conversations perceived as helpful, suggesting that texters have a better impression of relatively backwards-oriented interactions where the counselor is inclined towards addressing their situation. As such, this result echoes past findings relating addressing to effectiveness. Figure 5B compares Ωmin in conversations of varying lengths, showing that Ωmin increases with length, such that counselors exhibit less propensity for addressing in longer conversations. Anecdotal observations cited in interviews with the platform’s staff suggest one interpretation: conversations in which a texter feels their concerns were not satisfactorily addressed may be prolonged when they circle back to revisit these concerns. Figure 5C relates Ωmax to conversation length. We find that Ωmax is smaller in the lengthiest conversations, suggesting that such prolonged in10As the training material notes, conversation length and texter perception may signal complementary or even conflicting information about a texter’s experience of a conversation and its effectiveness: “Some texters resist the end of the conversation. They ruminate [...] causing the conversation to drag on without any progress.” 5284 teractions may be stalled by a weaker impulse to advance forwards. Extremely short conversations have smaller Ωmax as well, such that premature endings may also reflect issues in advancing. As such, we add credence to the previouslyposited benefits of mixing addressing and advancing: forwards-oriented actions may be tied to making progress, while a weaker propensity to advance may signal a suboptimal pace. Counselor-level analysis. These findings could reflect various confounds—for instance, a counselor’s choice of orientation may have no bearing on the rating they receive from a particularly difficult texter. To address this, we compute similar correspondences between orientation and our effectiveness indicators at the level of counselors rather than conversations; this analysis is detailed in the appendix. Our conversation-level results are replicated under these controls. 6 Discussion and Future Work In this work, we sought to examine a key balance in crisis counseling conversations between advancing forwards and addressing what has already been said. Realizing this balance is one of the many challenges that crisis counselors must manage, and modeling the actions they take in light of such challenges could point to policies to better support them. For instance, our method could assist human supervisors in monitoring the progress of ongoing conversations to detect instances of rushing or stalling, or enable largerscale analyses of conversational behaviors to inform how counselors are trained. The unsupervised approach we propose could circumvent difficulties in getting large-scale annotations of such sensitive content. Future work could bolster the measure’s usefulness in several ways. Technical improvements like richer utterance representations could improve the measure’s fidelity; more sophisticated analyses could better capture the dynamic ways in which the balance of objectives is negotiated across many turns. The preliminary explorations in Section 5.4 could also be extended to gauge the causal effects of counselors’ behaviors (Kazdin, 2007). We expect balancing problems to recur in conversational settings beyond crisis counseling, such as court proceedings, interviews, debates and other mental health contexts like long-term therapy. In these settings, individuals also make potentially consequential choices that span the backwardsforwards orientation axis, such as addressing previous arguments (Tan et al., 2016; Zhang et al., 2016) or asking leading questions (Leech, 2002). Our measure is designed to be broadly applicable, requiring no domain-specific annotations; we provide exploratory output on justice utterances from the Supreme Court’s oral arguments in the appendix and release code implementing our approach at http://convokit.cornell.edu to encourage experiments in other domains. However, the method’s efficacy in the present setting is likely boosted by the relative uniformity of crisis counseling conversations; and future work could aim to better accomodate settings with less structure and more linguistic variability. With such improvements, it would be interesting to study other domains where interlocutors are faced with conversational challenges. Acknowledgements We thank Jonathan P. Chang, Caleb Chiam, Liye Fu, Dan Jurafsky, Jack Hessel, and Lillian Lee for helpful conversations, and the anonymous reviewers for their thoughtful comments. We also thank Ana Smith for collecting and processing the Supreme Court oral argument transcripts we used in the supplementary material. This research, and the counseling service examined herein, would not have been possible without Crisis Text Line. We are particularly grateful to Robert Filbin, Christine Morrison, and Jaclyn Weiser for their valuable insights into the experiences of counselors and for their help with using the data. The research has been supported in part by NSF CAREER Award IIS1750615 and a Microsoft Research PhD Fellowship. The collaboration with Crisis Text Line was supported by the Robert Wood Johnson Foundation; the views expressed here do not necessarily reflect the views of the foundation. References Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health. Transactions of the Association for Computational Linguistics. David C. Atkins, Mark Steyvers, Zac E. Imel, and Padhraic Smyth. 2014. Scaling up the evaluation of psychotherapy: Evaluating motivational interviewing fidelity via statistical text classification. Implementation Science. 5285 Graham D. Bodie, Andrea J. Vickery, Kaitlin Cannava, and Susanne M. Jones. 2015. The Role of “Active Listening” in Informal Helping Conversations: Impact on Perceptions of Listener Helpfulness, Sensitivity, and Supportiveness and Discloser Emotional Improvement. Western Journal of Communication. David Bracewell, Marc Tomlinson, and Hui Wang. 2012. Identification of Social Acts in Dialogue. In Proceedings of COLING. Dogan Can, David C. Atkins, and Shrikanth S. Narayanan. 2015. A dialog act tagging approach to behavioral coding: A case study of addiction counseling conversations. In Proceedings of INTERSPEECH. Jie Cao, Michael Tanana, Zac Imel, Eric Poitras, David Atkins, and Vivek Srikumar. 2019. Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes. In Proceedings of ACL. Jonathan P. Chang, Caleb Chiam, Liye Fu, Andrew Wang, Justine Zhang, and Cristian DanescuNiculescu-Mizil. 2020. ConvoKit: A Toolkit for the Analysis of Conversations. In Proceedings of SIGDIAL. Mark G Core and James F Allen. 1997. Coding Dialogs with the DAMSL Annotation Scheme. AAAI fall symposium on communicative action in humans and machines. Jacques Gaume, Nicolas Bertholet, Mohamed Faouzi, Gerhard Gmel, and Jean-Bernard Daeppen. 2010. Counselor motivational interviewing skills and young adult change talk articulation during brief motivational interventions. Journal of Substance Abuse Treatment. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A Framework for Modeling the Local Coherence of Discourse. Computational Linguistics. Clara E. Hill and Emilie Y. Nakayama. 2000. Clientcentered therapy: Where has it been and where is it going? A comment on Hathaway (1948). Journal of Clinical Psychology. Jon Houck. 2008. Motivational Interviewing Skill Code (MISC) 2.1. Neil P. Jones, Alison A. Papadakis, Caitlin M. Hogan, and Timothy J. Strauman. 2009. Over and over again: Rumination, reflection, and promotion goal failure and their interactive effects on depressive symptoms. Behaviour Research and Therapy. Alan E. Kazdin. 2007. Mediators and mechanisms of change in psychotherapy research. Annual Review of Clinical Psychology. Juliette Landphair and Teri Preddy. 2012. More than talk: Co-Rumination among college students. About Campus. Fei-Tzin Lee, Derrick Hull, Jacob Levine, Bonnie Ray, and Kathy McKeown. 2019. Identifying therapist conversational actions across diverse psychotherapeutic approaches. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology. Beth L. Leech. 2002. Asking Questions: Techniques for Semistructured Interviews. Political Science & Politics. William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text - Interdisciplinary Journal for the Study of Discourse. Brian L. Mishara, Franc¸ois Chagnon, Marc S. Daigle, Bogdan Balan, Sylvaine Raymond, Isabelle Marcoux, C´ecile Bardon, Julie K. Campbell, and Alan D. Berman. 2007. Which helper behaviors and intervention styles are related to better short-term outcomes in telephone crisis intervention? Results from a Silent Monitoring Study of Calls to the U.S. 1-800SUICIDE Network. Suicide & life-threatening behavior. Johanna D. Moore and C´ecile Paris. 1993. Planning Text for Advisory Dialogues: Capturing Intentional and Rhetorical Information. Computational Linguistics. Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, Deborah A. Cai, Jennifer E. Midberry, and Yuanxin Wang. 2014. Modeling topic control to detect influence in conversations using nonparametric topic models. Machine Learning. Susan Nolen-Hoeksema, Blair E. Wisco, and Sonja Lyubomirsky. 2008. Rethinking Rumination. Perspectives on Psychological Science. Sungjoon Park, Donghyun Kim, and Alice Oh. 2019. Conversation Model Fine-Tuning for Classifying Client Utterances in Counseling Dialogues. In Proceedings of NAACL. Ver´onica P´erez-Rosas, Rada Mihalcea, Kenneth Resnicow, Satinder Singh, and Lawrence An. 2017. Understanding and Predicting Empathic Behavior in Counseling Therapy. In Proceedings of ACL. Ver´onica P´erez-Rosas, Xuetong Sun, Christy Li, Yuchen Wang, Kenneth Resnicow, and Rada Mihalcea. 2018. Analyzing the Quality of Counseling Conversations: The Tell-Tale Signs of High-quality Counseling. In Proceedings of LREC. Anthony R. Pisani, Nitya Kanuri, Bob Filbin, Carlos Gallo, Madelyn Gould, Lisa S. Lehmann, Robert Levine, John E. Marcotte, Brian Pascal, David Rousseau, Shairi Turner, Shirley Yen, and Megan L. Ranney. 2019. Protecting User Privacy and Rights in Academic Data-Sharing Partnerships: Principles From a Pilot Program at Crisis Text Line. Journal of Medical Internet Research. 5286 Vinodkumar Prabhakaran, Camilla Griffiths, Hang Su, Prateek Verma, Nelson Morgan, Jennifer L. Eberhardt, and Dan Jurafsky. 2018. Detecting Institutional Dialog Acts in Police Traffic Stops. Transactions of the Association for Computational Linguistics. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised Modeling of Twitter Conversations. In Proceedings of NAACL. Carl R. Rogers. 1957. The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology. Stephen Rollnick and William R. Miller. 1995. What is Motivational Interviewing? Behavioural and Cognitive Psychotherapy. Amanda J. Rose, Wendy Carlson, and Erika M. Waller. 2007. Prospective Associations of CoRumination with Friendship and Emotional Adjustment: Considering the Socioemotional Trade-Offs of Co-Rumination. Developmental Psychology. Sara Rosenthal and Kathleen McKeown. 2015. I Couldn’t Agree More: The Role of Conversational Structure in Agreement and Disagreement Detection in Online Discussions. In Proceedings of SIGDIAL. Harvey Sacks. 1992. Lectures on Conversation. Blackwell. Jonathan Sandoval, Amy Nicole Scott, and Irene Padilla. 2009. Crisis counseling: An overview. Psychology in the Schools. Emanuel A. Schegloff. 1987. Some sources of misunderstanding in talk-in-interaction. Linguistics. Chenhao Tan, Vlad Niculae, Cristian DanescuNiculescu, and Lillian Lee. 2016. Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions. In Proceedings of WWW. Michael Tanana, Kevin A. Hallgren, Zac E. Imel, David C. Atkins, and Vivek Srikumar. 2016. A Comparison of Natural Language Processing Methods for Automated Coding of Motivational Interviewing. Journal of Substance Abuse Treatment. Jenny Thomas. 1983. Cross-Cultural Pragmatic Failure. Applied Linguistics, (2). Terence J. G. Tracey, Bruce E. Wampold, James W. Lichtenberg, and Rodney K. Goodyear. 2014. Expertise in psychotherapy: An elusive goal? The American Psychologist. Marilyn Walker and Steve Whittaker. 1990. Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation. In Proceedings of ACL. Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for Good: Towards a Personalized Persuasive Dialogue System for Social Good. In Proceedings of ACL. Bonnie Lynn Webber. 2001. Computational Perspectives on Discourse and Dialog. In The Handbook of Discourse Analysis. John Wiley & Sons, Ltd. Harry Weger, Gina R. Castle, and Melissa C. Emmett. 2010. Active Listening in Peer Interviews: The Influence of Message Paraphrasing on Perceptions of Listening Skill. International Journal of Listening. Justine Zhang, Robert Filbin, Christine Morrison, Jaclyn Weiser, and Cristian Danescu-Niculescu-Mizil. 2019. Finding Your Voice: The Linguistic Development of Mental Health Counselors. In Proceedings of ACL. Justine Zhang, Ravi Kumar, Sujith Ravi, and Cristian Danescu-Niculescu-Mizil. 2016. Conversational Flow in Oxford-style Debates. In Proceedings of NAACL. Justine Zhang, Arthur Spirling, and Cristian DanescuNiculescu-Mizil. 2017. Asking too Much? The Rhetorical Role of Questions in Political Discourse. In Proceedings of EMNLP. A Appendices A.1 Further Details About Methodology Here, we provide further details on our methodology for measuring orientation, to supplement the description in Section 4.2 and aid reproducibility. Our aim in the first part of our methodology is to measure the orientation of phrasings Ωw. We would like to ensure that our measure is not skewed by the relative frequencies of phrasings, and take two steps to this ends, which empirically produced more interpretable output. First, we scale rows of term-document matrix X (corresponding to texter phrasings) to unit ℓ2 norm before deriving their representation in T via singular value decomposition. Second, we remove the first SVD dimension, i.e., first column of U, and renormalize each row, before proceeding. A.2 Further Details About Application to Counseling Data Here, we provide further details on how we applied our methodology to the dataset of counseling conversations in order to measure the orientation of counselor utterances, as described in Section 5. In particular, we list empirical choices 5287 0% 20% 40% 60% 80% conversation segment 0.03 0.02 0.01 naive distance 0% 20% 40% 60% 80% conversation segment 0.810 0.815 0% 20% 40% 60% 80% conversation segment 40 60 % ?s Figure 6: Mean naive distance, backwards-range (←− σ ), and % of utterances with questions, per segment for riskassessed (orange) and non-risk-assessed (black) conversations; solid circles indicate statistically significant differences (Wilcoxon p < 0.01, comparing conversation types within counselor). made in extracting and then processing the training set of 351,935 texter and counselor messages used to measure phrasing orientations. We randomly sampled 20% of counselors in the data; all conversations by these counselors were omitted in subsequent analyses. We merged consecutive messages from the same interlocutor. To mitigate potential noise in characterizing phrasings, we considered only counselor and texter message pairs in which each message has between 15 and 45 words. We extracted all messages from the conversations which met these constraints. We represent counselor phrasings as dependency-parse arcs and texter messages as unigrams, reflecting the comparatively structured language of the counselors versus the texters (counselors are instructed to speak in grammatically well-formed sentences). We consider the 5000 most frequent phrasings for each role, and discard sentences without any such phrasings. Finally, we used 25 SVD dimensions to induce T. A.3 Full Listing of Counselor Action Labels Table 2 lists each counseling action derived from the training material and used during the validation procedure (Section 5.1) to label sentences. A.4 Orientation and Lexical Properties Here, we supplement our discussion of simple lexical properties that could be used to characterize messages (Section 5.3), discussing how orientation reflects these properties and showing that orientation is not subsumed by them. Backwards-range. As seen in their weak backwards-range (high, i.e., spread-out ←− σ ), affirmations that highlight the texter’s strengths can follow a variety of situations. However, the replies they prompt are yet more diffuse, emphasizing the need to compare both directions. reflection (113) re-wording to show understanding and validate feelings It can be overwhelming to go through that on your own. affirmation (60) pointing out the texter’s positive qualities and actions You showed a lot of strength in reaching out to us. exploration (44) prompting texters to expand on their situation Is this the first real fight you’ve had with your boyfriend? problem solving (110) identifying the texter’s goals and potential coping skills What do you usually do to help you feel calmer? closing (43) reviewing the conversation and transitioning to a close I think you have a good plan to get some rest tonight. risk assessment (19) assessing suicidal ideation or risk of self-harm Do you have access to the pills right now? Table 2: Counseling strategies and representative examples derived from the training material. The number of sentences (out of 400) assigned to each label is shown in parentheses (11 were not labeled as any action). Question-asking. We see that questions—which nominally prompt the texter for a response—are more forwards-oriented than non-questions; 61% of sentences with ‘?’ have Ω> 0 compared to 21% of sentences without. However, these numbers also show that explicitly-marked questions are inexact proxies of forwards-oriented sentences—as in Table 1, questions can address a past remark by prompting clarifications, while counselors can use non-questions to suggest an intent to advance stages (e.g., to transition to problem-solving). A.5 Relating Alternate Measures to Conversation Progress Figure 6 shows averages per conversation segment (i.e., 20% of a conversation) for each alternative measure considered in Section 5.3. Comparing to the average Ωmax and Ωmin shown in Figure 4, we see that these measures track with the conversa5288 tion’s progress differently, and none of them distinguish the initial dynamics of risk-assessed conversations as dramatically as reflected in Ωmax, e.g., simple counts of questions do not distinguish between questions geared towards risk-assessment versus more open-ended problem exploration. A.6 Further Details About Data Used in Analyses Here, we provide further details about the subset of data we used to analyze counselors’ orientation behavior (Section 5.4). In particular, our aim was to characterize behavior in typical conversations rather than exceptional cases or those that reflected earlier versions of the training curriculum. As such, we only considered the 234,433 conversations which had least five counselor messages, were not risk-assessed or disconnected before completion, and were taken by counselors who joined the platform after January 2017. A.7 Counselor-Level Analysis Here, we provide further details about our procedure for analyzing counselor-level correspondences between orientation and effectiveness indicators, as alluded to in Section 5.4. Recall that our conversation-level findings may be confounded by texter idiosyncracies: for instance, texters with particularly difficult situations might affect a counselor’s behaviour, but may also be more likely to give bad ratings, independent of how the counselor behaves. Alternatively, an overly long conversation could arise because the counselor is less forwards-oriented, or because the texter is reluctant to make progress from the outset, making it hard for the counselor to attempt to prompt them forwards. To separate a counselor’s decisions from these situational effects, we take a counselor-level perspective. While counselors cannot selectively talk with different types of texters, they can exhibit cross-conversational inclinations for particular behaviors. We therefore relate these cross-conversational tendencies in orienting a conversation to a counselor’s long-term propensity for receiving helpful ratings, or having long or short conversations. We proceed to describe our methodology for relating counselor tendencies to perceived helpfulness; an analogous procedure could be applied to conversation length as well. We characterize a counselor’s orienting behavior as the average Ωmax and Ωmin over the conversations they take; we likewise take the proportion of their (rated) conversations which were perceived as helpful. We restrict our counselor level analyses to the 20th to 120th conversations of the 1495 counselors who have taken at least 120 conversations (ignoring their initial conversations when they are still acclimatizing to the platform). To cleanly disentangle counselor tendency and conversational circumstance, we split each counselor’s conversations into two interleaved subsets (i.e., first, third, fifth . . . versus second, fourth . . . conversations), measuring orientation on one subset and computing a counselor’s propensity for helpful ratings on the other. Here, we draw an analogy to the machine learning paradigm of taking a train-test split: “training” counselor tendencies on one subset and “testing” their relation to rating on the other subset. In general, the directions of the effects we observe hold with stronger effects if we do not take this split. Echoing conversation-level effects, counselors that tend to be less forwards-oriented and more backwards-oriented (those in the bottom thirds of Ωmax and Ωmin respectively) are more likely to be perceived as helpful; this contrast is stronger in terms of Ωmin (Cohen’s d = 0.30, p < 0.001) than Ωmax (d = 0.13, p < 0.05), suggesting that a counselor’s tendency for advancing weighs less on their perceived helpfulness than their tendency for addressing. Also in line with the conversationlevel findings, counselors with smaller Ωmax tend to have longer conversations (d = 0.54, p < 0.001), as do counselors with larger Ωmin (d = 0.17)—here, a counselor’s tendency for advancing is more related to their propensity for shorter conversations than their tendency for addressing. We note that counselors on the platform cannot selectively take conversations with certain texters; rather, the platform automatically assigns incoming texters to a counselor. As such, the counselorlevel effects we observe cannot be explained by counselor self-selection for particular situations. A.8 Orientation in Multi-Sentence Utterances Our motivation in characterizing utterances using the minimum and maximum sentence orientation was to reflect potential heterogeneities in utterances which could be both backwards- and forwards-oriented (consider a message where c2 and c1 from Figure 1 are concatenated). Examin5289 Orientation Example phrasings Example sentences Less forwardsoriented (bottom 25%) i understand, have been, part of, so you, sentence, talking about might, particular but the, give to As I understand the facts [...] he had tried to kill the husband, shooting him twice in the head? (Scalia) You started out by talking about what the first jury knew, but [...] we aren’t reviewing that determination. (Roberts) I guess the problem is the list of absurdities that they point to, not the least of which is a dry dock. (Sotomayor) So you hedged, because it’s very hard to find the right sentence. (Breyer) More forwardsoriented (top 25%) hypothetical, would have, agree, difference [between], [do] you think, your position your argument, a question apply, was there Suppose under this hypothetical [...] the judge doesn’t say aggravated murder when he submits it to the jury. (Kennedy) I just want to know your position on the second, the cart before the horse point. (Souter) Do you also agree [...] that if not properly administered there is some risk of excruciating pain? (Stevens) What’s the difference between pigment and color [...] ? (Ginsburg) Table 3: Example phrasings and sentences from utterances of Supreme Court justices, identified in parentheses, which are less or more forwards-oriented (bottom and top 25% of Ω). ing the 64% of counselor messages with multiple sentences, we see that 52% of these messages have Ωmin < 0 and Ωmax > 0. Our method, which is able to account for this heterogeneity, thus points to one potential strategy for counselors to bridge between both objectives. A.9 Application to Supreme Court Oral Arguments Here, we include an exploratory study of how our approach could be adapted to analyze domains beyond crisis counseling conversations, as alluded to in Section 6. We apply the method to measure the orientation of utterances by Supreme Court justices during oral arguments, when they engage in exchanges with lawyers (so justices and lawyers play the roles of counselor and texter, respectively, in our method). We used transcripts of 668 cases, taken from the Oyez project (https://www.oyez. org/), averaging 120 justice utterances per case.11 Oral arguments contain more linguistic and topical heterogeneiety than counseling conversations, since they cover a wide variety of cases, and because the language used by each justice is more differentiated. In addition, the dataset is much smaller. As such, this represents a more challenging setting than the counseling context, requiring changes to the precise procedure used to measure orientation, and pointing to the need for further technical improvements, discussed in Section 6. Nonetheless, our present methodology is able to produce interpretable output. Table 3 shows representative phrasings and (paraphrased) sentences with different orientations. In contrast to the coun11The data used can be found at http://analysmith. com/research/scotus/data. seling domain, 70% of phrasings and 93% of sentences have Ω> 0, perhaps reflecting the particular power dynamic in the Supreme Court, where justices are tasked with scrutinizing the arguments made by lawyers. We find that highly forwardsoriented phrasings and utterances tend to reflect justices pressing on the lawyers to address a point (e.g., do you agree, what’s the difference between); the least forwards-oriented phrases involve the justice rehashing and reframing (not always in complimentary terms) a lawyer’s prior utterances (e.g., so you [...], [as] i understand). We used a training set of 15,862 justice and lawyer messages, where each utterance had between 10 and 100 words. Both lawyer and justice utterances were represented as dependency-parse arcs. Empirically, we found that the methodology was sensitive to idiosyncracies of particular cases and justices. To minimize this effect, we restricted the size of the justice’s vocabulary by only considering the 398 justice phrasings which occurred in at least 200 utterances.
2020
470
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5290–5305 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5290 Detecting Perceived Emotions in Hurricane Disasters Shrey Desai1 Cornelia Caragea2 Junyi Jessy Li1 1The University of Texas at Austin 2University of Illinois at Chicago {shreydesai@, jessy@austin.}utexas.edu [email protected] Abstract Natural disasters (e.g., hurricanes) affect millions of people each year, causing widespread destruction in their wake. People have recently taken to social media websites (e.g., Twitter) to share their sentiments and feelings with the larger community. Consequently, these platforms have become instrumental in understanding and perceiving emotions at scale. In this paper, we introduce HURRICANEEMO, an emotion dataset of 15,000 English tweets spanning three hurricanes: Harvey, Irma, and Maria. We present a comprehensive study of fine-grained emotions and propose classification tasks to discriminate between coarsegrained emotion groups. Our best BERT (Devlin et al., 2019) model, even after task-guided pre-training which leverages unlabeled Twitter data, achieves only 68% accuracy (averaged across all groups). HURRICANEEMO serves not only as a challenging benchmark for models but also as a valuable resource for analyzing emotions in disaster-centric domains. 1 Introduction Natural disasters cause thousands of deaths and displace hundreds of millions each year (Ritchie and Roser, 2020). These catastrophic events not only induce material destruction but also stir an integral part of being human: our emotions. Disasters adversely affect individuals’ mental states (Fritz and Marks, 1954; Kinston and Rosser, 1974), and therefore it is no surprise that many take to social media (e.g., Twitter) to share their feelings. Social media websites, as a result, have become an essential platform for understanding the expression and perception of emotions at a significantly larger scale (Mohammad, 2012; Wang et al., 2012; Mohammad and Kiritchenko, 2015; Volkova and Bachrach, 2016; Abdul-Mageed and Ungar, 2017), with far reaching potential influences from academic research to public policy (Dennis et al., 2006; Fritze et al., 2008; Fraustino et al., 2012). While natural language processing methods have been effective for emotion detection (Strapparava and Mihalcea, 2007), existing resources struggle in disaster-centric domains, in part due to distributional shifts. Emotion detection in natural disasters (e.g., hurricanes) requires implicit reasoning not available as surface-level lexical information. For example, in “of course, [we]1 still have the [storm surge]2 coming,” given the context, we can reasonably infer discontent towards the “storm surge” despite the absence of polarizing words. Therefore, distantly supervised techniques largely based on lexical units (Mohammad and Turney, 2013; Abdul-Mageed and Ungar, 2017) fail to capture this type of deeper semantic phenomena. Our paper presents a comprehensive investigation into perceived emotions in hurricane disasters. To this end, we introduce HURRICANEEMO, a dataset of 15,000 disaster-related tweets (in English) streamed during Hurricanes Harvey, Irma, and Maria, which were devastating tropical storms occurring in the 2017 Atlantic hurricane season (Belles, 2017). Our samples are annotated with fine-grained emotions derived from the Plutchik Wheel of Emotions (Plutchik, 2001), a well-defined ontology of emotion classes commonly used in computational social science (Abdul-Mageed and Ungar, 2017).1 To measure inter-annotator agreement on fine-grained emotion labels, we conceptualize the Plutchik Emotion Agreement (PEA) metric (§3). PEA is intuitively grounded; our human evaluation shows workers agree with PEA’s rankings 88% of the time. Furthermore, we perform insightful analyses on implicit and explicit emotions in hurricane tweets (§4). Quite surpris1Specifically, we use Plutchik-8 and Plutchik-24 emotions. We refer readers to Plutchik (2001) for an in-depth discussion on their conception. 5291 ingly, we find consistencies in Plutchik-24 emotion distributions across Hurricanes Harvey, Irma, and Maria. HURRICANEEMO also serves as a challenging new benchmark for large-scale, pre-trained language models. We establish baselines for a coarser Plutchik-8 emotion detection task using BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) (§5). Our experiments reveal: (1) BERT only achieves 64% (averaged) accuracy; and (2) using “better” pre-trained models (e.g., RoBERTa) does not help, which is a strikingly different trend than most leaderboards (Wang et al., 2018). To better understand their pitfalls, in particular BERT, we conduct a comprehensive error analysis of 200 incorrectly predicted samples. In addition, we incorporate stronger inductive biases into BERT via pretraining on related tasks, which culminates in (averaged, absolute) +4% accuracy (§6). Finally, we propose unsupervised domain adaptation to bridge the domain gap between existing large-scale emotion datasets (e.g., EMONET (Abdul-Mageed and Ungar, 2017)) and HURRICANEEMO (§7). Our code and datasets are made publicly available.2 2 Related Work Emotion detection has been extensively studied in news headlines (Strapparava and Mihalcea, 2007; Katz et al., 2007), blog posts (Aman and Szpakowicz, 2007), health-related posts (Khanpour and Caragea, 2018), and song lyrics (Strapparava et al., 2012), but only recently, in social media websites (e.g., Twitter, Facebook) (Mohammad, 2012; Wang et al., 2012; Mohammad and Kiritchenko, 2015; Volkova and Bachrach, 2016; Abdul-Mageed and Ungar, 2017). However, emotion detection in disaster-centric domains, despite its practical importance, is limited. Schulz et al. (2013) (singlehandedly) annotate 2,200 Hurricane Sandy tweets using Ekman-6 emotions (Ekman, 1992). In contrast, we introduce 15,000 annotated tweets from multiple hurricanes with (much more fine-grained) Plutchik-24 emotions. Unlike Abdul-Mageed and Ungar (2017), we focus on readers’ perceived emotions rather than writers’ intended emotions. Furthermore, in disaster-centric domains, the lack of labeled data required to train reliable models precludes the use of supervised learning techniques. Several works propose to use labeled data 2https://github.com/shreydesai/ hurricane from prior (source) disasters to learn classifiers for new (target) disasters (Verma et al., 2011; Nguyen et al., 2017; Imran et al., 2013, 2016; Caragea et al., 2016). However, due to the unique nature of each disaster (e.g., type, geographical location, season, cultural differences among the affected population), the source disaster may not accurately reflect the characteristics of the target disaster (Palen and Anderson, 2016; Imran et al., 2015). Domain adaptation techniques address these challenges by efficiently using large amounts of unlabeled target domain data, consequently outperforming the aforementioned supervised techniques (Alam et al., 2018; Li et al., 2017). Our work contributes to disaster-centric emotion detection in three ways by: (1) introducing a dataset large enough to train supervised classifiers; (2) exploring various forms of pre-training to instill strong inductive biases; and (3) establishing domain adaptation baselines by leveraging emotive samples obtainable via distant supervision. 3 Dataset Construction In this section, we present HURRICANEEMO, an annotated dataset of 15,000 English tweets from Hurricanes Harvey, Irma, and Maria. We detail each component, including the initial preprocessing (§3.1), annotation procedures (§3.2), and the formulation and calculation of inter-annotator agreement (§3.3). 3.1 Preprocessing Ray Chowdhury et al. (2019) release a repository of large-scale Twitter datasets consisting of tweets streamed during the Harvey, Irma, and Maria hurricanes, which we will refer to as HURRICANEEXT (i.e., extended). We use their tweets as a starting point for the construction of our dataset. We perform two types of preprocessing. First, we replace usernames and links with <USER> and <URL>, respectively, then eliminate duplicate tweets. Second, we use filtering techniques to ensure the resulting tweets contain emotive content. We assume a lexical prior over emotion tweets, that is, requiring that an emotive tweet consist of at least one word derived from EMOLEX (Mohammad and Turney, 2013). EMOLEX consists of 14,182 crowdsourced words associated with several emotion categories. Critically, these words appear in emotional contexts, but are not necessarily emotion words themselves. For example, “payback” is 5292 related to the emotion “anger,” but is also used extensively in finance. Significant past work (BravoMarquez et al., 2014; Majumder et al., 2017; Giatsoglou et al., 2017) has used this lexicon to bootstrap their emotion datasets, since the alternatives are (1) using unlabeled tweets as-is or (2) using a model to classify emotional tweets. Initially, we started with (1) and did no emotion-related preprocessing. However, the dataset contained many spurious tweets, such as snippets of news articles, that had little to do with emotions. The level of noise rendered the data prohibitively costly to annotate. For (2), there is simply no such large-scale data to train on, and existing resources like EMONET manifest an even stronger prior where tweets are only included if they explicitly contain an emotion hashtag (e.g., #sad, #angry, #happy). 3.2 Annotation We randomly sample 5,000 tweets each for annotation from the filtered datasets for Harvey, Irma, and Maria; in total, this yields 15,000 annotations. We request workers on Amazon Mechanical Turk to label tweets with a list of Plutchik-24 emotions. Furthermore, to enable fine-grained emotion analysis, we do not crowdsource Plutchik-8 emotions directly. We require that workers reside in the US and have completed 500+ HITs with an acceptance rate ≥95%. Each HIT is completed by 5 workers. 3.3 Inter-Annotator Agreement In this section, we elaborate on our PEA metric for computing inter-annotator agreement with finegrained emotion labels. Challenges. Fine-grained emotion annotation presents several challenges for evaluating interannotator agreement. First, because a tweet can convey multiple emotions, we allow workers to select more than one Plutchik-24 emotion. This implies an agreement metric must support scoring sets of categorical values. Passonneau (2004) use set distance metrics for capturing agreement between coreference cluster annotations. Similarly, Wood et al. (2018) incorporate Jaccard’s similarity in Krippendorff’s alpha. However, these methods would penalize fine-grained emotions equally, which is not ideal. For the Plutchik wheel, the proximity of any two emotions represents their relatedness. For example, TRUST and ADMIRATION belong to the same emotion group while LOATHING and ADMIRATION are orthogonal to each other. Figure 1: Visualization of the PEA metric. The unit circle is superimposed on the Plutchik wheel, and each Plutchik-8 emotion is assigned a radian value. In this example, the (normalized) distance between the emotions corresponding to 3π 2 and π 4 is 0.25. PEA Scores. We introduce the Plutchik Emotion Agreement—hereafter referred to as PEA—to address these challenges. We superimpose a unit circle onto the Plutchik wheel, representing each Plutchik-8 emotion as a polar coordinate (e.g., DISAPPROVAL = ( √ 2 2 , − √ 2 2 )). Intuitively, the angles between Plutchik-8 emotions represent how similar or dissimilar they are. If two Plutchik-24 annotations belong to the same Plutchik-8 group, we do not penalize them (e.g., JOY and ECSTASY incur no penalty). Otherwise, we enforce a linear penalty based on how radially separate the annotations are (e.g., ECSTASY and GRIEF incur the highest penalty). Higher PEA scores imply more agreement. Example. Figure 1 visualizes our metric. In this example, two annotators select emotions with radians 3π 2 and π 4 , respectively. The |f(e(i) x ) −f(e(j) y )| term evaluates to 5π 4 . Then, it is normalized using 1 π, yielding 5 4 = 1.25. Finally, we subtract to obtain the agreement score: |1 −1.25| = 0.25. Intuitively, this makes sense as the decisions are only slightly better than being in complete disagreement (i.e., orthogonal). Formulation. For clarity, we introduce notation. Let wx and wy denote workers with (categorical) annotation sets {e(i) x }n i=1 and {e(j) y }m j=1, respectively. The pairwise agreement d(wx, wy) between the workers is computed as: 1 n n X i=1 max j |1 −1 π|f(e(i) x ) −f(e(j) y )||  5293 Vocabulary Features (%) Hurricane Orig. Filt. # @ // Harvey 20.6 K 14.4 K 48.1 27.4 85.3 Irma 14.6 K 8.8 K 41.4 22.5 81.7 Maria 21.6 K 15.8 K 36.5 30.3 78.3 Table 1: Per-hurricane dataset statistics. In the vocabulary section, Orig. shows vocabulary counts (obtained through whitespace tokenization) and Filt. shows counts after <USER> and <URL> preprocessing. In the features section, we show the percentage of tweets with hashtags (#), user mentions (@), and links (//). where 1 π is a normalizing constant and f : Ω→R is a map from Plutchik-8 emotions to radians. Given a collection of workers that annotated a tweet, we obtain per-worker PEA scores by averaging over all possible pairwise agreements. For example, if workers w1−3 annotated the same tweet, PEA(w1) = 1 2(d(w1, w2) + d(w1, w3)). For quality control, we filter annotations from workers with PEA ≤0.55. This threshold is determined through manual inspection of 50 workers and their annotations. The (averaged, per-worker) PEA scores for each hurricane are: Harvey (65.7), Maria (67.3), and Irma (70.3).3 Human Evaluation. We perform a human evaluation with our proposed metric, which is absent in previous work for measuring inter-annotator agreement for emotion annotations (Wood et al., 2018; Öhman et al., 2018). Crowdsourced workers are asked to determine the agreement between two annotation pairs constructed from three annotators, that is, A: (e1, e2) and B: (e1, e3). They choose between three options: (1) A has higher agreement than B; (2) A and B have (roughly) the same agreement; and (3) B has higher agreement than A. 88.2% of the worker rankings match with PEA’s rankings, pointing towards strong human agreement. The workers themselves in this study also show good agreement according to Krippendorff’s alpha (α = 74.0) (Artstein and Poesio, 2008).4 4 Qualitative Analysis 4.1 Dataset Overview Table 1 presents several statistics of HURRICANEEMO. We make three observations. First, the 3A reasonable interpretation of PEA scores may be as follows: 0—25 (no agreement), 25—50 (poor agreement), 50—75 (moderate agreement), 75—100 (high agreement). 4See Appendix B for details on our procedures. Mexico helped us during Houston, lets return the favor! joy, admiration, pensiveness Hurricane Irma is hitting Florida. Everyone evacuated Here I am, still in Florida bring it on Irma, bring it on. acceptance, anticipation, vigilance puerto rico should be the ONLY THING in American News. <URL> anger, annoyance, interest Table 2: Samples from HURRICANEEMO. Each sample is annotated with multiple Plutchik-24 emotions. vocabularies across all datasets are large considering there are only 5,000 tweets per hurricane. The vocabularies do decrease by about 30% after preprocessing, although the resulting sizes still suggest users use a myriad of words to express their emotions. Second, only about 50% of Harvey tweets and 40% of Irma/Maria tweets contain hashtags. Hashtags are a unique marker of Twitter discourse (Ritter et al., 2011), but in our dataset specifically, hashtags are used to tag particular entities, spread disaster-relief awareness, and create trending content. This phenomena alone makes our tweets different from those collected through distant supervision (Abdul-Mageed and Ungar, 2017). Third, roughly 80-85% of tweets contain links to third-party content. Users commonly use links to share news articles, resources for humanitarian aid, and other miscellaneous multimedia. Table 2 shows three samples from HURRICANEEMO. Unlike EMONET (Abdul-Mageed and Ungar, 2017), our dataset does not have the strong assumption that only one emotion can be expressed in a tweet. For example, the first tweet lexically points towards the expression of more than one emotion. The predicate “helped us” implies the user admires Mexico for providing aid, and the exclamation mark is indicative of JOY . In addition, our samples contain a mix of implicit and explicit emotions, which lexical information alone cannot resolve. In the third tweet, there are no particular words that point towards ANGER and ANNOYANCE , but we can infer the user is upset that the media is not prioritizing Hurricane Maria. Finally, our emotion prediction tasks cannot be solved by simply retrofitting pre-trained word embeddings (Mikolov et al., 2013; Pennington et al., 2014) or contextualized representations (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019), which we also empirically show in our experiments (§5). These methods work best for explicit emotion detection as they largely overfit to sparse lex5294 Plutchik-8 Plutchik-24 Emotion Abbrv. Emotion Abbrv. aggressiveness agrsv rage rage anger anger annoyance anyce optimism optsm vigilance vglnc anticipation antcp interest inrst love love ecstasy ecsty joy joy serenity srnty submission sbmsn admiration admrn trust trust acceptance acptn awe awe terror trror fear fear apprehension aprhn disapproval dspvl amazement amzmt surprise srpse distraction dstrn remorse rmrse grief grief sadness sadns pensiveness psvne contempt cntmp loathing lthng disgust dsgst boredom brdom Table 3: Plutchik-8 (left) and Plutchik-24 (right) abbreviations used throughout this paper. ical features. Rather, in order to capture implicit emotions, models must carry an inductive bias that appropriately reasons over the context (e.g., what event(s) occurred?) and semantic roles (e.g., what happened to whom?) while balancing the aforementioned features. 4.2 Fine-Grained Emotions We begin to analyze the fine-grained emotions present in our datasets. We ask the following questions: What is the general distribution of emotions? Are certain emotion groups highlighted more than others? How does the distribution change across hurricanes? Figure 2 shows Plutchik-24 emotion distributions for Hurricanes Harvey, Irma, and Maria. From these plots, a couple of trends emerge. First, the Plutchik-24 emotion counts are within the ballpark of each other with the notable exceptions of ADMIRATION and FEAR . This suggests that, on average, hurricane disasters evoke a similar spread of implicit and explicit emotions among most emotion categories. Second, users tend to post more optimistic content during hurricane disasters. We hyFigure 2: Per-hurricane emotion counts where each box’s Plutchik-8 emotion is broken down into its respective Plutchik-24 emotions. Plutchik-24 emotions are abbreviated using the codes in Table 3. pothesize that users use Twitter as a social platform to spread awareness of the hurricanes themselves or post-disaster relief efforts, commonly using hashtags like #prayfortexas, #floridaevacuation, and #donationdrive. It is encouraging to see that although users do express natural emotions such as fear, sadness, and anger, many seek to help others in the face of adversity. Third, sharp changes in emotion counts between Harvey and Irma may be tied to their history. In the 2017 Atlantic hurricane season, Harvey materialized as a Cat-4 hurricane, and Irma followed around two weeks later as a Cat-5 hurricane.5 Through side-by-side comparisons of both hurricanes’ tweets, we found the Irma tweets had more descriptions of destruction and its aftermath. These changes in discourse potentially explain shifts between the emotion distributions. 4.3 Emotion Co-Occurrence Thus far, we have analyzed each Plutchik-24 emotion in isolation. In this section, we ask the following questions: How do Plutchik-8 emotion groups co-occur with one another? Do co-occurrence patterns change across hurricanes? Figure 3 shows co-occurrence heatmaps for each hurricane. Intuitively, we see strong correlations between polarized emotions, that is, emo5Abbreviations for Category-x. This refers to the SaffirSimpson scale for classifying hurricanes based on sustained wind speed, which ranges from 1-5 in order of severity. 5295 Figure 3: Per-hurricane Plutchik-8 emotion cooccurrences. The matrices are symmetric across the diagonal, so we mask the upper diagonal of the matrix for clarity. Plutchik-8 emotions are abbreviated using the codes in Table 3. tions categorized as positive and negative. For example, ( LOVE , AGGRESSIVENESS ) does not appear as frequently as ( LOVE , OPTIMISM ) or ( CONTEMPT , AGGRESSIVENESS ). However, this premise does not always hold; the pairs ({ DISAPPROVAL , REMORSE }, OPTIMISM ) also co-occur across all hurricanes. Representative of this phenomenon is the tweet: “I’m raising money for Hurricane Maria Destroyed Everything. Click to Donate: <URL> via <USER>.” The user indicates disapproval towards the hurricane by evoking pathos, but also shows optimism by donating money to a relief effort. Finally, similar to our previous observations (§4.2), we notice an increase in co-occurrence frequencies from Harvey →Irma. This increase is, somewhat surprisingly, most apparent with ( AWE , OPTIMISM ), although ({ DISAPPROVAL , REMORSE }, AWE ) frequencies also exhibit a noticeable gain. Once again, we posit that users may be expressing their sadness regarding the Cat-4 →Cat-5 jump, but at the same time, offering solidarity to those affected by the hurricanes. 5 Baseline Modeling We now turn to modeling the emotions in HURRICANEEMO. Because Plutchik-24 emotion counts are heavily imbalanced, we group them into Plutchik-8 emotions and consequently create 8 binary classification tasks. The tweets are assorted into their respective label buckets; because tweets may be labeled with more than one emotion, each belongs to one or more buckets. These buckets represent positive samples (i.e., tweets labeled with that emotion). To create negative samples, we sample an equal amount from Plutchik-8 Emotion Train Valid Test Aggressiveness 4,209 526 527 Optimism 11,902 1,488 1,488 Love 2,569 321 322 Submission 6,092 762 762 Awe 7,324 916 916 Disapproval 5,931 741 742 Remorse 7,732 967 967 Contempt 3,763 470 471 Table 4: Train, validation, and test splits for each Plutchik-8 emotion. other buckets. From here, we shuffle the positive and negative samples and perform an 80/10/10 split to create the train, validation, and test sets.6 Table 4 enumerates the splits. 5.1 Experimental Setup We consider both traditional neural models and pretrained language models. We implement our models in PyTorch (Paszke et al., 2019) and perform all experiments on an NVIDIA Titan V GPU. Training and optimization hyperparameters are detailed in Appendix C. We report mean performance across 10 runs, each with a different random initialization. Below, we elaborate on our models: Traditional Neural Models. Each is equipped with 200D GloVe embeddings pre-trained on 2B tweets (Pennington et al., 2014): (1) Logistic Regression: We average the word embeddings of each token in the sequence (Iyyer et al., 2015); (2) CNN: A word-level CNN (Kim, 2014) with 100 filters of size [3, 4, 5] obtains representations. They are max-pooled and concatenated row-wise. We also experiment with a character-level CNN with filter sizes [5, 6, 7]; (3) GRU: A one-layer, unidirectional GRU (Cho et al., 2014) with a hidden dimension of 100 obtains features, which are mean pooled. For all models, penultimate representations are projected with a weight matrix W ∈Rd×2. Pre-trained Language Models. We fine-tune base versions of BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) using the HuggingFace Transformers library (Wolf et al., 2019). We 6We also experimented with keeping all negative samples as opposed to sampling an equal amount. Each binary task had around 5-7x more negative samples; this significantly hurt model performance. Even with a class imbalance penalty, the models almost never predicted positive samples. Note that although, in aggregate, the number of positive and negative samples match, they do not necessarily match in the train, validation, and test splits. 5296 AGR OPT LOV SBM AWE DSP RMR CNT AVG Logistic Reg. 49.8 74.7 50.9 50.6 48.9 49.7 48.3 46.8 52.5 Char CNN 50.2 74.3 43.0 47.2 44.7 47.1 47.4 48.8 50.3 Word CNN 43.6 74.5 44.7 45.4 44.2 47.0 46.9 43.9 48.8 GRU 48.4 74.7 54.0 50.9 50.1 49.9 48.9 49.2 53.3 BERT 67.6 75.0 54.0 67.4 68.3 55.7 58.5 66.8 64.1 RoBERTa 59.7 74.7 54.0 62.3 56.0 50.9 49.7 56.4 58.0 Table 5: Plutchik-8 binary task accuracies, including aggressiveness (agr), optimism (opt), love (lov), submission (sbm), awe (awe), disapproval (dsp), remorse (rmr), contempt (cnt). We also report an average (avg) across all binary tasks. Best results are bolded. use the sentence representations embedded in the [CLS] token, then project it with a weight matrix W ∈Rd×2. The language model and classification parameters are jointly fine-tuned. 5.2 Results Table 5 presents our classification results. We make the following observations: BERT consistently outperforms other models on most emotion tasks. BERT shows strong performance across all 8 binary tasks in comparison to traditional neural models and RoBERTa. Unlike most traditional neural models, its accuracy never falls below random chance, showing it captures at least some of the complex phenomena present in our dataset. However, our tasks remain challenging for both types of models alike. For traditional models, word embeddings alone do not provide enough representational power to model our emotional contexts. Although GRUs perform well on EMONET (Abdul-Mageed and Ungar, 2017), we suspect that they simply memorize emotion lexicons (§4.1), which is not a notable strategy for capturing implicit emotions. Nevertheless, BERT only obtains an average accuracy of about 64%. This leaves plenty of room for future work; we perform a comprehensive error analysis as a step towards this goal (§5.3). “Better” pre-trained models (e.g., RoBERTa) do not necessarily help performance. Unlike popular benchmarks such as GLUE (Wang et al., 2018) where more pre-training monotonically increases performance, rather encouragingly, we do not observe the same trend. RoBERTa’s average performance is around 5% better than GRU’s, but still around 6% worse than BERT’s. We hypothesize that this drop in performance is attributed to pre-training →fine-tuning domain discrepancies. That is, RoBERTa’s (additional) pre-training data (e.g., CC-News) may be too distant from Twitter data, which is known for its short contexts and unique vernacular (Ritter et al., 2011). We encourage practitioners to avoid applying state-of-the-art models without augmenting them with task-guided pre-training objectives, as we explore later (§6). 5.3 Error Analysis Using our BERT model, we sample 25 test errors from each of the 8 emotion tasks, yielding a total of 200 errors. We group the errors into the following categories: lexical and syntactic cues (45%), insufficient context (24%), entity mentions (15%), subjective labeling (10%), and unknown reasons (6%). The top three categories are discussed below: Lexical and Syntactic Cues. BERT often relies on surface-level lexical features to make predictions, as do most emotion prediction models. This bias also extends to certain syntactic features, such as punctuation. In “pls be safe everyone!!!!”, BERT associates the exclamation mark with a positive emotion, but here, the speaker is more concerned. Insufficient Context. Users often comment on events, public policies, or linked content that, by themselves, do not carry features for supervised learning. This type of error is not necessarily a shortcoming of BERT, but rather our dataset. For example, in “for [tracy mcgrady]1, [hall induction]2 muted by effects of [hurricane harvey]3 at home”, one use external knowledge to reason between the noun phrases and discern the latent emotions. Entity Mentions. BERT also makes erroneous predictions in the presence of certain entity mentions. For example, BERT classifies this tweet as AGGRESSIVENESS : “nytimesworld: mexico offered aid to texas after harvey. but after an earthquake and hurricane, it says all help is needed at home.” Here, the user is merely quoting a 5297 AGR OPT LOV SBM AWE DSP RMR CNT AVG NO-PRETRAIN 67.6 75.0 54.0 67.4 68.3 55.7 58.5 66.8 64.1 Supervised Transfer EMONET 73.5 75.2 55.2 68.8 67.5 53.1 60.0 71.7 65.6 SENTIMENT 72.8 75.8 62.7 71.0 65.6 53.4 57.0 67.3 65.7 Unsupervised Transfer EMONET 72.1 75.1 54.0 61.0 65.1 54.2 60.7 69.4 63.9 SENTIMENT 69.1 74.9 53.6 66.2 67.3 54.3 57.9 64.4 63.5 HURRICANEEXT 73.6 75.4 69.8 68.9 69.7 57.9 60.2 70.2 68.2 Table 6: Task-guided pre-training accuracies (abbreviations defined in Table 5). Displayed in order of supervised (middle) and unsupervised (bottom) pre-training. Results are highlighted with blue (↑) and red (↓) with respect to NO-PRETRAIN. Best viewed in color. news statement as opposed to formulating opinions regarding NY Times’ discourse. Because the sentiment towards NY Times is negative in our datasets overall (due to public backlash on its stories), BERT likely capitalizes on this mentionemotion bias. 6 Task-Guided Pre-training To improve upon our baselines, we explore pretraining as a means of implicitly incorporating an inductive bias into our BERT model. Our hope is that these pre-training tasks will not only make BERT more robust in the Twitter domain, but also provide useful (albeit abstract) knowledge for the end emotion prediction tasks. For brevity, we chiefly focus on BERT, although our methods can be generalized to other pre-trained models. Setup. We explore, in isolation, supervised and unsupervised pre-training tasks. For the supervised setting, we pre-train on a multi-class emotion task (EMONET) (Abdul-Mageed and Ungar, 2017) and binary sentiment analysis task (SENTIMENT) (Go et al., 2009). For the unsupervised setting, we pretrain on dynamic masked language modeling (Liu et al., 2019) on (unlabeled) samples from EMONET, SENTIMENT, and HURRICANEEXT (§3.1). For both types of tasks, we further pre-train BERT for a fixed number of epochs, then fine-tune it on a HURRICANEEMO task. We compare these results to NO-PRETRAIN, namely the BERT results verbatim from Table 5. We report mean performance across 10 pre-training →fine-tuning runs. Further training details, including samples sizes for the pre-training tasks, are available in Appendix D. Results. Table 6 shows the pre-training results. Supervised pre-training significantly helps with 34 emotions, but degrades overall performance on 2-4 emotions. We posit SENTIMENT aids emotions with highly predictive features. For example, “wtf” in “it’s literally the size of texas. wtf” is correlated with AGGRESSIVENESS , but no such lexical cues exist in “not all heros wear capes <3 thank you stanley - homeless #hurricane evacuee grooms lost pets,” which is an AWE sample. The unsupervised pre-training results also show a couple trends. First, EMONET largely hurts downstream performance, especially reducing SUBMISSION accuracy by -6%. Second, SENTIMENT (in its unlabeled form) yields no noticeable benefits. This implies sentiment information is much more valuable, but of course, subject to the fact that the emotion task is heavily aligned with the original sentiment task. Third, we obtain encouraging results with HURRICANEEXT pre-training. The gains are most noticeable on AGGRESSIVENESS and LOVE , but this objective adds +1-2% accuracy for tasks on which supervised pre-training suffered. 7 Fine-Grained Unsupervised Domain Adaptation When new disasters emerge, it is likely we may not have emotion annotations, as alluded to previously (§2). Nevertheless, these annotations would be valuable for organizations trying to understand the emotional profile of users during a crisis (Fraustino et al., 2012). In this section, we explore ways to leverage supervision from large-scale emotion datasets (e.g., EMONET (Abdul-Mageed and Ungar, 2017)) in providing labels for our hurricane emotion datasets. We frame this problem as unsupervised domain adaptation; EMONET is the labeled source domain and our hurricane datasets are the unlabeled target domain. Below, we elaborate 5298 AGR OPT LOV SBM AWE DSP RMR CNT AVG SRC-ONLY 53.3 42.2 43.4 47.1 54.7 49.8 62.5 56.5 51.2 PRETRAIN-SRC 54.8 43.2 45.1 47.8 54.4 50.4 63.3 57.1 52.0 PRETRAIN-TRG 55.0 44.2 46.2 48.0 55.5 49.9 63.7 60.5 52.9 PRETRAIN-JOINT 52.7 44.2 45.5 47.8 54.8 49.9 61.6 56.3 51.6 TRG-ONLY 67.6 75.0 54.0 67.4 68.3 55.7 58.5 66.8 64.1 Table 7: Unsupervised domain adaptation accuracies (abbreviations defined in Table 5). Results are highlighted with blue (↑) and red (↓) with respect to SRC-ONLY. Best viewed in color. on our methods. Framework. EMONET was conceived as a multiclass classification task for Plutchik-8 emotions (Abdul-Mageed and Ungar, 2017). In contrast, we introduce binary classification tasks, one for each Plutchik-8 emotion. We split the EMONET multiclass task into 8 binary tasks; this creates a oneto-one alignment between each source and target domain task. We separately perform unsupervised domain adaptation for each binary task. Methods. We use our BERT model (without taskguided pre-training) as the underlying classifier. Following Han and Eisenstein (2019), we chiefly focus on using strategic pre-training techniques that enable effective transfer between disparate domains. The systems for comparison are: (1) SRCONLY: BERT is trained in the source domain and evaluated in the target domain; (2) TRG-ONLY: BERT is trained and evaluated in the target domain. These results are borrowed verbatim from Table 5; (3) PRETRAIN-*: BERT undergoes dynamic masked language modeling pre-training using data from domain *, is trained in the source domain, and finally evaluated in the target domain (Han and Eisenstein, 2019). PRETRAIN-SRC only uses pre-training samples from the source domain, PRETRAIN-TRG only uses samples from the target domain, and PRETRAIN-JOINT uses samples from both the source and target domains.7 We report mean performance across 10 pre-training → fine-tuning runs. Results. Table 7 shows the unsupervised domain adaptation results. Overall, we do not find a significant increase in performance over the SRCONLY baseline. Pre-training consistently adds +1% in average accuracy, but still leaves a large gap between PRETRAIN-SRC and TRG-ONLY. Re7PRETRAIN-JOINT is conceptually similar to ADAPTABERT in Han and Eisenstein (2019), however, we dynamically generate pre-training data (Liu et al., 2019). gardless, we have a few observations. First, we do not see a (relatively) large increase in performance for SUBMISSION , AWE , DISAPPROVAL , and REMORSE . These emotions may need more explicit strategies to enable domain adaptation. This is also supported by our previous results (§6), where we also do not see a (relatively) large benefit from task-guided pre-training. Second, PRETRAINJOINT performs worse than both PRETRAIN-SRC and PRETRAIN-TRG. We posit that, for our emotion tasks, pre-training with a mixture of domains yields a noisier training signal compared to a parameter bias towards the target domain. 8 Conclusion We present HURRICANEEMO, an annotated dataset of perceived emotions spanning 15,000 tweets from multiple hurricanes. Tweets are annotated with finegrained Plutchik-24 emotions, from which we analyze implicit and explicit emotions and construct Plutchik-8 binary classification tasks. Comprehensive experiments demonstrate our dataset is a challenging benchmark, even for large-scale pre-trained language models. We release our code and datasets as a step towards facilitating research in disastercentric domains. Acknowledgements Thanks to Katrin Erk for reviewing an early version of this manuscript, Yasumasa Onoe for discussions on masked language model pre-training, and the anonymous reviewers for their helpful comments. This work was partially supported by the NSF Grants IIS-1850153, IIS-1912887, and IIS1903963. References Muhammad Abdul-Mageed and Lyle Ungar. 2017. EmoNet: Fine-Grained Emotion Detection with Gated Recurrent Neural Networks. In Proceedings 5299 of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 718–728, Vancouver, Canada. Association for Computational Linguistics. Firoj Alam, Shafiq Joty, and Muhammad Imran. 2018. Domain Adaptation with Adversarial Training and Graph Embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1077– 1087, Melbourne, Australia. Association for Computational Linguistics. Saima Aman and Stan Szpakowicz. 2007. Identifying Expressions of Emotion in Text". In Text, Speech and Dialogue, pages 196–205, Berlin, Heidelberg. Springer Berlin Heidelberg. Ron Artstein and Massimo Poesio. 2008. Inter-coder Agreement for Computational Linguistics. Computational Linguistics, 34(4):555–596. Jonathan Belles. 2017. 2017 Atlantic Hurricane Season Recap: 17 Moments We’ll Never Forget. Weather.com. Felipe Bravo-Marquez, Marcelo Mendoza, and Barbara Poblete. 2014. Meta-level Sentiment Models for Big Social Data Analysis. Knowledge-Based Systems, 69:86–99. Cornelia Caragea, Adrian Silvescu, and Andrea H. Tapia. 2016. Identifying informative messages in disaster events using convolutional neural networks. In Proceedings of the 13th International Conference on Information Systems for Crisis Response and Management (ISCRAM). Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder– Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics. Michael Robert Dennis, Adrianne Kunkel, Gillian Woods, and Paul Schrodt. 2006. Making Sense of New Orleans Flood Trauma Recovery: Ethics, Research Design, and Policy Considerations for Future Disasters. Analyses of Social Issues and Public Policy, 6(1):191–213. Shrey Desai, Hongyuan Zhan, and Ahmed Aly. 2019. Evaluating Lottery Tickets under Distributional Shifts. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 153–162, Hong Kong, China. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Paul Ekman. 1992. An Argument for Basic Emotions. Cognition and Emotion, 6(3-4):169–200. Julia Daisy Fraustino, Brooke Fisher Liu, and Yan Xian Jin. 2012. Social Media Use During Disasters: A Review of the Knowledge Base and Gaps. National Consortium for the Study of Terrorism and Responses to Terrorism. Charles Fritz and Eli Marks. 1954. The NORC Studies of Human Behavior in Disaster. Journal of Social Issues, 10(3):26–41. Jessica Fritze, Grant Blashki, Susie Burke, and John Wiseman. 2008. Hope, Despair and Transformation: Climate Change and the Promotion of Mental Health and Wellbeing. International Journal of Mental Health Systems, 2(1):13. Maria Giatsoglou, Manolis Vozalis, Konstantinos Diamantaras, Athena Vakali, George Sarigiannidis, and Konstantinos Chatzisavvas. 2017. Sentiment Analysis Leveraging Emotions and Word Embeddings. Expert Systems with Applications, 69:214–224. Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter Sentiment Classification using Distant Supervision. Stanford University CS224N Project Report. Xiaochuang Han and Jacob Eisenstein. 2019. Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4237–4247, Hong Kong, China. Association for Computational Linguistics. Muhammad Imran, Carlos Castillo, Fernando Diaz, and Sarah Vieweg. 2015. Processing Social Media Messages in Mass Emergency: A Survey. Association for Computing Machinery (ACM) Computing Surveys, 47(4):67:1–67:38. Muhammad Imran, Shady Elbassuoni, Carlos Castillo, Fernando Diaz, and Patrick Meier. 2013. Practical Extraction of Disaster-relevant Information from Social Media. In Proceedings of the 22Nd International Conference on World Wide Web, WWW 2013 Companion, pages 1021–1024, New York, NY, USA. Association for Computing Machinery (ACM). Muhammad Imran, Prasenjit Mitra, and Jaideep Srivastava. 2016. Cross-Language Domain Adaptation for Classifying Crisis-Related Short Messages. In 13th Proceedings of the International Conference on Information Systems for Crisis Response and Management, Rio de Janeiro, Brasil, May 22-25, 2016. 5300 Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep Unordered Composition Rivals Syntactic Methods for Text Classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1681–1691, Beijing, China. Association for Computational Linguistics. Phil Katz, Matthew Singleton, and Richard Wicentowski. 2007. SWAT-MP: The SemEval-2007 Systems for Task 5 and Task 14. In 4th International Workshop on Semantic Evaluations, pages 308–313. Hamed Khanpour and Cornelia Caragea. 2018. Finegrained emotion detection in health-related online posts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1160–1166, Brussels, Belgium. Association for Computational Linguistics. Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Warren Kinston and Rachel Rosser. 1974. Disaster: Effects on Mental and Physical State. Journal of Psychosomatic Research, 18(6):437–456. Hongmin Li, Doina Caragea, and Cornelia Caragea. 2017. Towards Practical Usage of a Domain Adaptation Algorithm in the Early Hours of a Disaster. In Proceedings of the 14th International Conference on Information Systems for Crisis Response and Management (ISCRAM). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. Navonil Majumder, Soujanya Poria, Alexander Gelbukh, and Erik Cambria. 2017. Deep LearningBased Document Modeling for Personality Detection from Text. IEEE Intelligent Systems, 32(2):74– 79. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and Their Compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS 2013, pages 3111–3119, USA. Curran Associates Inc. Saif Mohammad. 2012. #Emotional Tweets. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 246–255, Montréal, Canada. Association for Computational Linguistics. Saif Mohammad and Svetlana Kiritchenko. 2015. Using Hashtags to Capture Fine Emotion Categories from Tweets. Computational Intelligence, 31(2):301–326. Saif Mohammad and Peter Turney. 2013. Crowdsourcing a Word-Emotion Association Lexicon. Computational Intelligence, 29(3):436–465. Dat Nguyen, Kamela Ali Al Mannai, Shafiq Joty, Hassan Sajjad, Muhammad Imran, and Prasenjit Mitra. 2017. Robust Classification of Crisis-Related Data on Social Networks Using Convolutional Neural Networks. In Proceedings of the International AAAI Conference on Web and Social Media (ICWSM 2017). Emily Öhman, Kaisla Kajava, Jörg Tiedemann, and Timo Honkela. 2018. Creating a Dataset for Multilingual Fine-grained Emotion-detection Using Gamification-based Annotation. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 24–30, Brussels, Belgium. Association for Computational Linguistics. Leysia Palen and Kenneth Anderson. 2016. Crisis Informatics—New Data for Extraordinary Times. Science, 353(6296):224–225. Rebecca Passonneau. 2004. Computing Reliability for Coreference Annotation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, Portugal. European Language Resources Association (ELRA). Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d Álché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke 5301 Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Robert Plutchik. 2001. The Nature of Emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American Scientist, 89(4):344– 350. Jishnu Ray Chowdhury, Cornelia Caragea, and Doina Caragea. 2019. Keyphrase Extraction from Disasterrelated Tweets. In The World Wide Web Conference, WWW 2019, pages 1555–1566, New York, NY, USA. Association for Computing Machinery (ACM). Hannah Ritchie and Max Roser. 2020. Natural Disasters. Our World in Data. Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named Entity Recognition in Tweets: An Experimental Study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1524–1534, Edinburgh, Scotland, UK. Association for Computational Linguistics. Axel Schulz, Tung Dang Thanh, Heiko Paulheim, and Immanuel Schweizer. 2013. A Fine-Grained Sentiment Analysis Approach for Detecting Crisis Related Microposts. In Information Systems for Crisis Response and Management (ISCRAM). Carlo Strapparava and Rada Mihalcea. 2007. SemEval2007 Task 14: Affective Text. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 70–74, Prague, Czech Republic. Association for Computational Linguistics. Carlo Strapparava, Rada Mihalcea, and Alberto Battocchi. 2012. A Parallel Corpus of Music and Lyrics Annotated with Emotions. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012), pages 2343– 2346, Istanbul, Turkey. European Language Resources Association (ELRA). Sudha Verma, Sarah Vieweg, William Corvey, Leysia Palen, James Martin, Martha Palmer, Aaron Schram, and Kenneth Anderson. 2011. Natural Language Processing to the Rescue? Extracting "Situational Awareness" Tweets During Mass Emergency. In Proceedings of the International AAAI Conference on Web and Social Media (ICWSM 2017). Svitlana Volkova and Yoram Bachrach. 2016. Inferring Perceived Demographics from User Emotional Tone and User-Environment Emotional Contrast. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1567–1578, Berlin, Germany. Association for Computational Linguistics. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit Sheth. 2012. Harnessing Twitter "Big Data" for Automatic Emotion Identification. In Proceedings of the 2012 ASE/IEEE International Conference on Social Computing and 2012 ASE/IEEE International Conference on Privacy, Security, Risk and Trust, SOCIALCOM-PASSAT 2012, pages 587–592, Washington, DC, USA. IEEE Computer Society. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art Natural Language Processing. arXiv preprint arXiv:1910.03771. Ian Wood, John McCrae, Vladimir Andryushechkin, and Paul Buitelaar. 2018. A Comparison Of Emotion Annotation Schemes And A New Annotated Data Set. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). 5302 Figure 4: Top 1000 (common) wordpiece densities for EMONET (left) and HURRICANEEMO (right). Densities are calculated by counting wordpiece occurrences and normalizing by the total number of occurrences. A Domain Shifts Following the methodology outlined in Desai et al. (2019), we use the Jenson-Shannon Divergence (JSD) between the vocabulary distributions in EMONET and HURRICANEEMO to quantify the domain divergence. The JSD is 0.199, approximately 1e5 larger than those reported in Desai et al. (2019). Figure 4 shows the densities of the top 1000 common wordpieces between both domains. The striking visual differences, even among common wordpieces, indicates a large discrepancy in the input distributions. B Plutchik Emotion Agreement Interpretable Scale. To assign PEA scores an interpretable scale, we compare randomly generated annotations against our obtained annotations. We detail the process to create random annotations. First, we compute the average number of emotions a worker assigns to a tweet, which evaluates to 3 for all hurricanes. Second, we sample 3 random emotions from the Plutchik-8 wheel for 5000 total annotations. Figure 5 compares the two types of annotations. The per-worker PEA scores for the random annotations collect around the mean (0.5), which is expected due to the law of large numbers. In contrast, the per-worker PEA scores for our annotations are shifted towards the right, indicating better agreement than the random baseline. Therefore, we interpret our annotations as showing “moderate agreement” under the PEA metric. Human Evaluation. Using our worker annotations across all three hurricanes, we create two annotation pairs for three workers, that is, A: (w1, w2) and B: (w1, w3), where A and B have a shared worker w1. This format lends a total of 73,418 A/B total pairs. We sample 500 A/B pairs from this pool, initialize each HIT with 10 pairs, and assign 5 total workers per HIT. C Baseline Modeling Table 8 shows the hyperparameters. For our pretrained models (e.g., BERT and RoBERTa), we use the default dropout rate (0.1) on the self-attention layers, but do not use additional dropout on the top linear layer. Furthermore, we use gradient accumulation to enable training with larger mini-batches. D Task-Guided Pre-training Masked Language Modeling. Following Devlin et al. (2019), we select 15% of inputs uniformly at random (except for [CLS] and [SEP]) as prediction targets for the masked language modeling task. From the corresponding inputs, 80% are set to [MASK], 10% are set to random tokens, and 10% are set to the original tokens. However, we follow Liu et al. (2019) in creating pre-training data dynamically, rather than statically. This merely leads to slower convergence times as it becomes more difficult to fit the data. We fine-tune on the pre-training data for 10 epochs using a batch size of 16 and learning rate of 2e-5. Once pre-training concludes, we initialize a BERT model with these weights and fine-tune it on our emotion tasks using the hyperparameters in Table 8 with a learning rate of 3e-5. Pre-training Corpus. Our pre-training corpus is created by concatenating a collection of (shuffled) tweets x1, x2, · · · , xn together, each separated by [SEP]. The corpus is split into segments of size 512 with [CLS] prepended to each one. For clarity, each batch consisting of tokens xi, · · · , xj is constructed as [CLS] xi [SEP] · · · [SEP] xj [SEP]. We elaborate on two design decisions. First, prepending [CLS] to each batch, as opposed to each tweet, leads to better results. Second, largely due to computational reasons, we pack disparate tweets together in the same batch. E Extended Pre-training Experiments E.1 EmoNet Binary Task Pre-training In Section 6, we pre-trained on a EMONET multiclass classification task. In this section, we explore a fine-grained pre-training scheme. We create Plutchik-8 binary tasks from EMONET, then fine-tune each emotion model separately on their respective HURRICANEEMO tasks. Table 9 shows 5303 Figure 5: Histograms corresponding to PEA score distributions for random annotations (top) and our annotations (bottom). Logistic Reg. Word CNN Char CNN GRU BERT RoBERTa Epochs 5 5 5 5 3 3 Batch Size 64 64 64 64 16 16 Learning Rate 1e-4 1e-3 5e-5 1e-4 2e-5 2e-5 Weight Decay 0 0 0 0 0 1e-3 Dropout 0 0.5 0.7 0.7 – – Table 8: Hyperparameters for the baseline modeling experiments (§5). the results. EMONET-BINARY performs markedly worse than EMONET-MULTI and leads to a -2% reduction in averaged accuracy. Therefore, multiclass pre-training creates better representations for downstream evaluation, although they are still not as effective as other pre-training methods (e.g., masked language modeling). E.2 Varying Amounts of Pre-training Data The SENTIMENT and HURRICANEEXT datasets contain significantly more samples than currently used. In this section, we study the effects of using varying amounts of pre-training data on downstream HURRICANEEMO performance. For both pre-training datasets, we use 1.6M samples. Table 10 shows the supervised SENTIMENT results. Tables 11 and 12 show the unsupervised SENTIMENT and HURRICANEEXT results, respectively. For both types of pre-training tasks, there is no noticeable benefit to using more pre-training data. The supervised SENTIMENT and unsupervised HURRICANEEXT results both saturate around 200K samples, which is what we report in our paper. The results for unsupervised HURRICANEEXT pre-training are especially compelling because they show that, without any labeled data, we can achieve strong downstream results. Finally, the unsupervised SENTIMENT task yields almost no gains for most emotions, showing that the type of data used for masked language modeling matters. Through side-by-side comparisons, we notice that the SENTIMENT samples are shorter in length and the HURRICANEEXT samples contain more relevant content, such as hurricane-specific hashtags. 5304 AGR OPT LOV SBM AWE DSP RMR CNT AVG NO-PRETRAIN 67.6 75.0 54.0 67.4 68.3 55.7 58.5 66.8 64.1 Multi 73.5 75.2 55.2 68.8 67.5 53.1 60.0 71.7 65.6 Binary 67.7 74.9 53.7 64.7 67.5 54.5 55.8 63.6 62.8 Table 9: Pre-training using multi-class and binary EMONET tasks. See Table 6 for styling considerations. AGR OPT LOV SBM AWE DSP RMR CNT AVG NO-PRETRAIN 67.6 75.0 54.0 67.4 68.3 55.7 58.5 66.8 64.1 50 K 73.5 75.3 60.7 69.7 67.1 51.3 55.2 66.3 64.9 100 K 72.8 75.8 62.7 71.0 65.6 53.4 57.0 67.3 65.7 200 K 73.4 75.6 69.1 69.8 66.5 53.3 57.1 69.8 66.8 400 K 73.1 75.4 67.2 70.1 65.7 53.2 57.2 67.4 66.2 800 K 73.5 75.3 56.2 69.4 65.1 54.4 57.1 68.2 64.9 1600 K 71.2 75.2 64.8 68.8 64.7 55.1 56.1 70.7 65.8 Table 10: Pre-training using 50-1600K labeled samples from SENTIMENT. See Table 6 for styling considerations. AGR OPT LOV SBM AWE DSP RMR CNT AVG NO-PRETRAIN 67.6 75.0 54.0 67.4 68.3 55.7 58.5 66.8 64.1 50 K 70.7 74.9 54.6 66.3 67.0 53.9 59.3 65.8 64.0 100 K 71.6 75.0 54.0 66.3 68.6 55.1 57.4 62.3 63.8 200 K 69.1 74.9 53.6 66.2 67.3 54.3 57.9 64.4 63.5 400 K 70.0 74.9 53.8 69.0 68.8 54.5 60.1 64.5 64.5 800 K 70.5 74.9 55.1 66.2 69.0 53.3 59.4 63.4 64.0 1600 K 69.1 74.9 55.3 66.5 67.2 54.6 59.3 65.0 64.0 Table 11: Pre-training using 50-1600K unlabeled samples from SENTIMENT. See Table 6 for styling considerations. AGR OPT LOV SBM AWE DSP RMR CNT AVG NO-PRETRAIN 67.6 75.0 54.0 67.4 68.3 55.7 58.5 66.8 64.1 50 K 72.7 75.0 60.0 67.2 69.0 56.4 60.4 72.2 66.6 100 K 71.8 75.1 57.4 69.1 70.3 55.2 62.4 65.3 65.8 200 K 73.6 75.4 69.8 68.9 69.7 57.9 60.2 70.2 68.2 400 K 71.4 75.2 59.7 69.7 68.8 55.2 60.7 63.6 65.5 800 K 71.4 75.3 58.9 69.4 69.6 54.0 60.3 71.3 66.3 1600 K 73.3 75.7 50.7 68.3 65.5 55.8 61.0 64.1 64.3 Table 12: Pre-training using 50-1600K unlabeled samples from HURRICANEEXT. See Table 6 for styling considerations. 5305 Figure 6: Visualization of BERT’s self-attention on a Hurricane Irma sample. In particular, this head captures the entities “hurricane irma,” “florida,” “everyone” and the verb phrase “crane collapses.”
2020
471
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5306–5316 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5306 Hierarchical Modeling for User Personality Prediction: The Role of Message-Level Attention Veronica E. Lynn, Niranjan Balasubramanian, and H. Andrew Schwartz Stony Brook University {velynn, niranjan, has}@cs.stonybrook.edu Abstract Not all documents are equally important. Language processing is increasingly finding use as a supplement for questionnaires to assess psychological attributes of consenting individuals, but most approaches neglect to consider whether all documents of an individual are equally informative. In this paper, we present a novel model that uses message-level attention to learn the relative weight of users’ social media posts for assessing their five factor personality traits. We demonstrate that models with message-level attention outperform those with word-level attention, and ultimately yield stateof-the-art accuracies for all five traits by using both word and message attention in combination with past approaches (an average increase in Pearson r of 2.5%). In addition, examination of the high-signal posts identified by our model provides insight into the relationship between language and personality, helping to inform future work. 1 Introduction Most language-based methods for human attribute prediction assume all documents generated by a person are equally informative. However, this is not necessarily true. Figure 1 gives examples of high and low signal messages for predicting extraversion — one’s tendency to be energized by social interaction. The high signal messages contain words relating to social interaction (hangin out, chillin), whereas the low signal messages, while still containing social-related words, have little clear relevance to extraversion. The former examples would ideally be weighted higher by a personality prediction model than the latter. This paper applies the idea of modeling document relevance to the task of personality prediction. Inferring an individual’s personality traits is a fundamental task in psychology (McCrae and Costa Jr, (a) High signal messages (b) Low signal messages Figure 1: Examples of high and low signal messages identified by our proposed model for predicting extraversion. All examples are from the same highly-extroverted user. Shading indicates strength of message-level (blue) and word-level (green) attention. 1997; Mischel et al., 2007), with social scientific applications ranging from public health (Friedman and Kern, 2014) and marketing (Matz et al., 2017) to personalized medicine (Chapman et al., 2011), mental health care (Bagby et al., 1995), and even providing useful information for downstream NLP tasks (Preot¸iuc-Pietro et al., 2015; Lynn et al., 2017). Recently, researchers from both NLP and psychology have turned toward more accurately assessing personality and other human attributes via language (Mairesse et al., 2007; Schwartz et al., 2013; Park et al., 2015; Kulkarni et al., 2018). The idea behind “language-based assessments” (Park et al., 2015) is that language use patterns can supplement and, in part, replace traditional and expensive questionnaire-based human assessments. Here, we present a hierarchical neural sequence model over both the words and messages of the user and correspondingly applies attention to each level. The document-level attention learns the relative importance of each social media post for predicting personality. Contributions. Our main contributions include: 1. A neural model for personality prediction that uses message-level attention to recover highsignal messages from noisy data. 5307 2. An empirical demonstration that shows models with message-level attention outperform those without. 3. State-of-the-art performance for languagebased assessment of personality. 4. Insight into the relationship between messagelevel language use and personality. 2 Model Architecture Our goal is to encode user messages into a representation that can be used to predict the personality of the user. We can use a two-step process to produce such a representation: First encode the sequences of words in each message to form message-level representations and then encode the message-level representations to form a user-level representation. Social media users write hundreds or even thousands of messages; while the messages, and the words within them, contain valuable clues to their personality, not all of it is equally valuable. An ideal representation of user text, therefore, should pay particular attention to personality-revealing portions of a user’s text. Hierarchical attention is a natural fit for this problem. At the message level, a word-attention model can learn to emphasize personality related words in the message representation, while at the user-level, a message attention model can learn to emphasize personality-related messages in the overall user representation. We instantiate this idea using a hierarchical sequence architecture shown in Figure 2. Given a set of n messages from a user u, the first step of the model is to produce an encoding for each message mi. Each word wi j in message mi is fed through a Gated Recurrent Unit (GRU) (Cho et al., 2014) to produce a hidden state: hi j = GRU(wi j) (1) We then apply an attention mechanism over the sequence of hidden states [hi 1, hi 2, ..., hi l]: di j = tanh(Wwordhi j + bword) (2) αi j = exp(di j ⊤dword) Σl k=0exp(di k ⊤dword) (3) si = l X k=0 αi khi k (4) where dword is a learned context vector for wordlevel attention, bword is a bias term, and αi j is a Figure 2: Diagram of our proposed model for personality prediction. (A) Each post is passed through a GRU to produce a message-level encoding. (B) A wordlevel attention mechanism learns weights for each of the words in the message. (C) All message representations are passed to a second GRU to produce a userlevel encoding. (D) A message-level attention mechanism learns weights for each of that user’s posts. (E) The user representation passes through two hidden layers and a final prediction layer. normalized attention weight for hi j. si is thus a weighted combination of the hidden states representing {wi 1, wi 2, ..., wi l}. Once we have these message representations, the next step is to encode each sequence of messages into a user representation. Each message representation si is passed through another encoder, also using Gated Recurrent Units: hi = GRU(si) (5) As before, the hidden states are then passed through another message-level attention mechanism: ei = tanh(Wmessagehi + bmessage) (6) βi = exp(e⊤ i emessage) Σn k=0exp(e⊤ k emessage) (7) u = n X k=0 βkhk (8) As before, emessage is a learned context vector for message-level attention. The representation for a user u is thus a weighted combination of the hidden states representing that person’s messages. Once the user representation has been produced, u is further passed through some fully-connected layers before being used for prediction at the final layer. 5308 In this way, important words and messages don’t get lost to noise and are instead carried through to later portions of the model, where they can have a greater impact on the final prediction. Our model is similar in structure and motivation to the Hierarchical Attention Network proposed by Yang et al. (2016). However, our work focuses on a different level of analysis: whereas Yang et al. (2016) encode words →sentences →documents, our work seeks to encode words →documents →users. This idea of applying attention at a document level when modeling user-level attributes is, to the best of our knowledge, entirely novel. We hypothesize that where attention is applied is crucial and that message-level attention is of particular importance for modeling personality. 3 Dataset We draw our data from consenting users of a Facebook application (Kosinski et al., 2013), which allowed users to take various psychological assessments and voluntarily share their data with researchers. Following the work of Schwartz et al. (2013) and Park et al. (2015), the current state of the art on this dataset, we filtered the users to those who shared their Facebook status posts, wrote at least 1,000 words across those statuses, provided their age and gender, and were less than 65 years old. All users completed psychological measures, ranging from 20 to 100 items, that assessed their Big Five personality traits (Costa and McCrae, 1992): conscientiousness, agreeableness, neuroticism, openness to experience, and extraversion. Each of the five dimensions is represented by a normalized, continuous score representing the degree to which that trait is exhibited. We refer to these as personality scores. The Big Five personality traits are described more fully in Section 4. Overall, our dataset contains Facebook statuses and personality scores for 68,687 users. To allow for direct comparisons, we use the same test set (n=1,943) as Park et al. (2015). Each of these test users completed a longer 100-item questionnaire, ensuring higher-quality scores. We sample an additional 4,998 for use as a development set, and leave the remaining 61,746 for training. On average, users in our dataset are 23 years old and 63% are female. Users had an average of 3,619 words and 165 messages, all posted to Facebook between 2009 and 2011. Ethical Research Statement. All participants consented to sharing their status updates and personality questionnaire results for research purposes, and the study has been approved by an academic institutional review board. 4 Big Five Personality Traits Discovery of the “Big Five” personality traits began nearly a century ago with some of the first datadriven, statistical latent variable modeling techniques (Thurstone, 1934). The goal in this decadeslong pursuit was not very different from that of producing latent vector embeddings of words:1 to use latent factor analysis to reveal underlying, stable dimensional vectors that distinguish people. However, rather than finding latent semantic dimensions of words, the models (run by hand at first) focused on how individuals answered questions about themselves. For example, modern questions include: “How much do you agree with these statements? (1) I am the life of the party; (2) I have difficulty understanding abstract ideas; (3) I like order; (4) I worry about things” (Goldberg et al., 2006). The idea behind this data-driven approach was that if such latent dimensions could be found to be stable across time and differing populations, that suggests they are fundamental to what makes each of us different. Such work continued for decades, documented across thousands of studies to eventually arrive at the acceptance of five such factors being fundamental and consistent across time and populations (Costa and McCrae, 1992). Those fundamental human factors, the target of our human language predictive task, are described below. The big five often goes by the acronym “OCEAN”, standing for openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. High scores for openness to experience are correlated with philosophical and free thought, as well as an interest in the arts, music, and cinema (Schwartz et al., 2013; Kern et al., 2014). Those who score low here may be more practical, realistic, or close-minded (Costa and McCrae, 1992). Individuals with high conscientiousness tend to be well organized and have a lot of self-discipline, which may be expressed through discussions of work or school-related responsibilities (Yarkoni, 2010; Kern et al., 2014). Those who score low 1In fact Thurstone referred to the latent variables as “vectors of the mind”. 5309 on this dimension may appear impulsive, disorganized, or unreliable. Those with high extraversion are likely to talk about friends, social situations, and interpersonal interaction. On the other hand, those with low extraversion may be more independent and may focus more on solo activities (e.g. watching television) (Costa and McCrae, 1992; Park et al., 2015). Agreeableness is associated with being friendly and good-natured, while those who score low may be selfish or rude. Swearing is highly correlated with low agreeableness (Yarkoni, 2010; Schwartz et al., 2013). High neuroticism is strongly linked to anxiety and depression, while low neuroticism is linked to emotional stability.2 This dimension may be expressed through feelings such as fear, sadness, or frustration (Costa and McCrae, 1992; Kern et al., 2014). 5 Evaluation In this section, we describe the method for training and evaluating our proposed model, along with the various baseline models we compared against. 5.1 Features Each user was represented as a sequence of their messages, from most to least recent, which were themselves represented as a sequence of word embeddings. To do so, we pre-trained 200dimensional word2vec embeddings (Mikolov et al., 2013) over all messages belonging to the training set users. The vocabulary was limited to words that appear in at least 50 messages. Words that occurred fewer times were replaced by an out-of-vocabulary token. The Language Detection Library (Shuyo, 2010) was used to filter out non-English texts.3 5.2 Baseline Models Ridge Regression (N-Grams/Topics). We compare against Park et al. (2015), which is the current state of the art on this dataset and, to the best of our knowledge, demonstrated the best published regression predictions over a Big Five personality factors from language alone. Their model uses a combination of n-gram features and LDA-based topics extracted from the training data. These features then undergo dimensionality reduction in the 2Some versions of the Big Five flip this dimension and call it “emotional stability”. 3Even without this step, the models tended to artificially exclude non-English texts by assigning them very low attention weights. form of univariate feature selection and randomized principal component analysis, resulting in a total of 5106 features. These features are then used to train ridge regression models, one per personality dimension, for prediction. Because we use the same test set users as Park et al. (2015), we compare directly against their reported results. Ridge Regression (Embeddings). In addition to the n-gram and topic-based ridge models of Park et al. (2015), we train ridge regression models using the word embeddings described in Section 5.1. These embeddings are averaged first per-message and then per-user, creating a 200-dimensional embedding per user to input to the model. DAN. We modify the model proposed in Section 2 to use a Deep Averaging Network (Iyyer et al., 2015), rather than a GRU, at the word and/or message level. This takes the average across all word (or message) embeddings to produce a message(or user-) level representation. DAN + Attn. Identical to the DAN variant except takes the weighted (rather than unweighted) average using learned attention weights. Sequence Network (SN). Similar to our proposed model but using the final state of each GRU, rather than word or message attention. Transformer (TN). This variant of our proposed model uses a two-layer transformer (Vaswani et al., 2017) with double-headed attention, rather than a GRU, at the message or word level. BERT. Whereas our proposed model learns message-level representations, we instead experiment with using pre-trained BERT embeddings (Devlin et al., 2019) as our message representations. These 768-dimension message embeddings are produced by averaging across all BERT token embeddings for each message (Matero et al., 2019). 5.3 Training All models were implemented using PyTorch (Paszke et al., 2017), with the exception of Ridge Regression which used scikit-learn (Pedregosa et al., 2011). One model was trained for each of the five personality dimensions. All deep learning models use two feed-forward layers with 512 hidden units each, followed by a final prediction layer. The GRU layers have a hidden size of 200 to match the number of embedding dimensions. Similarly, we learn a projection down to 200 dimensions for our BERT embeddings. All hyperparameters (dropout and learning rate 5310 word-to-message message-to-user OPE CON EXT AGR NEU DAN DAN .579 .516 .509 .474† .516 SN SN .601 .506 .512 .431 .523 DAN + Attn DAN + Attn .615† .506 .530† .499† .528† DAN + Attn SN + Attn .605 .510 .535† .501† .560† SN + Attn DAN + Attn .625 .497 .539† .519† .532† SN + Attn SN + Attn .626 .521 .552† .509† .541 TN (Attn) SN + Attn .544 .474 .513† .483† .526 Table 1: Comparison of Disattenuated Pearson R of different models for personality prediction on the test set users (n=1943), using different architectures to aggregate from word to message level and message to user level. † indicates statistically significant improves over the SN (No Attention) baseline, based on a paired t-test on the errors of each model. word-to-message message-to-user OPE CON EXT AGR NEU SN + Attn SN + Attn .626 .521 .552 .509 .541 BERT DAN .602 .512 .537 .505 .520 BERT SN .597 .511 .520 .522 .507 BERT DAN + Attn .613 .511 .570† .533† .536 BERT SN + Attn .610 .519 .544 .538† .547† BERT TN (Attn) .590 .501 .526 .523 .516 Table 2: Performance as Disattenuated Pearson R measures when using pre-trained BERT embeddings (Devlin et al., 2019) at the message level, compared to our proposed model which learns message-level representations. † indicates statistically significant improvement over the SN + Attn model based on a paired t-test on the errors of each approach. for deep models; alpha for ridge) were tuned over the development set for a single personality dimension (OPE), with the best parameters being used to train models for the remaining dimensions. The deep models were trained using a batch size of 64. Training lasted for a maximum of 20 epochs, with most models stopping after around 10 epochs due to early stopping with a patience of two epochs. To reduce memory requirements during training, each user’s post history was “chunked” into sequences of at most 500 messages each. For example, a user with 1250 messages total would be divided into three instances with 500, 500, and 250 messages. This was only done for the training set; the testing and tuning sets used all messages at once. 6 Results Our evaluation aims to answer the following: 1. How successful are attention-based models at predicting personality? 2. What is the distribution of high signal versus low signal messages? 3. What is the relative importance of messagelevel attention over word-level attention? 6.1 Attention for Personality Prediction Table 1 compares the performance of our proposed model, SN+Attn, against variations using different architectures to aggregate from the word to message level and message to user level. Model performance is given as the disattenuated Pearson correlation coefficient4 between the predicted and questionnaire-based personality scores. Overall the models with attention outperform those without. Perhaps surprisingly, the SN+Attn at the message level typically outperformed the DAN+Attn, which may be due to the messages forming a sort of personal narrative, containing repeated themes and follow-ups to previous messages. The SN+Attn also tended to outperform the DAN+Attn at the word level. Our proposed model, using SN+Attn at both word and message level, is best for three out of five dimensions. Table 2 shows the performance when using pretrained BERT embeddings (Devlin et al., 2019) as our message representations, rather than learning them as part of the model. As before, we see that message-level attention is generally beneficial, and additionally we find that the BERT-based models outperform our proposed model in 3 out of 5 cases. Table 3 compares our proposed model against the state-of-the-art. Unsurprisingly, Ridge (Embeddings) is the worst-performing model overall. Although Park et al. (2015) also used ridge 4Disattenuated Pearson correlation helps account for the error of the measurement instrument (Murphy and Davidshofer, 1988; Kosinski et al., 2013). Following Lynn et al. (2018), we use reliabilities: rxx = 0.70 and ryy = 0.77. 5311 d OPE CON EXT AGR NEU Ridge (Embeddings) 200 .538 .500 .505 .444 .505 Our Proposed Model 200 .626 .521 .552 .509 .541† Ridge with PCA (N-Grams/Topics) (Park et al., 2015) 5106 .627 .518 .558 .545 .531 Ridge with PCA (N-Grams/Topics) + Our Proposed Model 5306 .657† .538† .583† .557† .564† Table 3: Combining our best model with that of Park et al. (2015) obtains new state-of-the-art performance in terms of Disattenuated Pearson R. Number of input dimensions (d) is shown for each model. † indicates a statistically significant improvement over Park et al. (2015) based on a paired t-test on the errors of each approach. regression, their models used significantly more features (d=5106 (dimensionally reduced, supervised, from an original of over d > 50, 000) compared to our d=200). Finally, we find that by averaging the z-scored predictions of our proposed model and Ridge (N-Grams/Topics), we obtain the overall best performance, outperforming current state-of-the-art. This suggests that the models are able to learn complementary information. These results show that neural models with attention are better able to predict personality than those without. Because some messages are of more relevance than others, attention allows the model to better separate the signal from noise. In addition, combining the predictions of the best attentionbased model, SN+Attn, with those from Park et al. (2015), the previous best, advances the state-of-theart results over all 5 factors by a signficant margin (p < .05 from a paired t-test on error) and an average increase of .025, demonstrating the complementary value in these methods. 6.2 Message Attention Distribution Results suggest not all text is equally informative when it comes to personality prediction, which is why attention helps. Figure 3 shows the distribution of standardized message-level attention weights, obtained from our proposed model, for 100 randomly-sampled test set users. Sampled users had 742 messages on average. The figure shows that any single user’s messages encompass a range of relative importance. OPE skews negative, indicating that most messages of a user are of little relevance with a few being very relevant, while NEU was slightly more likely to mark messages as relevant but with less variance. By incorporating that concept of message (and word) importance via attention, we can produce better user-level representations from which to predict personality. 6.3 Effects of Word and Message Attention Thus far we have demonstrated the importance of attention for personality prediction. However, our Figure 3: Standardized distribution of message-level attention weights for 100 randomly-sampled test set users with at least 20 messages. The black dot indicates the max density per user (i.e. the most frequent attention weight for that person). OPE CON EXT AGR NEU No Attn .601 .506 .512 .431 .523 Word Only .612† .510 .516† .456† .541† Msg Only .621† .511 .535† .521† .544† Word + Msg .626 .521 .552† .509† .541 Table 4: Ablation demonstrating the importance of using word- and message-level attention. All models are sequence networks (SNs) with or without attention at the word and message levels. † indicates statistically significant improvements (p < 0.05) over the No Attention baseline based on a paired t-test on the errors of each approach. proposed model incorporates attention at two different levels of analysis: word and message level. We examine each attention mechanism’s impact on the overall performance of the model. Table 4 shows ablation results for word and message attentions. As expected, adding any attention results in improvements over the No Attn model. In addition, using only message-level attention generally outperforms using only word-level attention. This may be because message-level attention oc5312 Figure 4: Performance of our model when keeping only the top n percent highest or lowest weighted messages. curs later in the model, where its impacts are less likely to get washed out by downstream layers. While adding message attention provides the single largest boost, in 3 out of 5 cases combining it with word attention results in additional gains. This may be because the word-level attention helped the model to better encode longer messages: the average message length for the top 5% highestweighted messages were, on average, 4.4 tokens longer for Word+Msg than for Msg Only. The inclusion of message-level attention appears to have little direct impact on the word-level attention. On examination, Word+Msg and Word Only typically assigned roughly the same wordlevel attention weights to the same sentences. This suggests the strength of adding message-level attention is in learning how best to weight messages, rather than how to better represent each individual message. We further explore the impact of the learned message-level attention weights. Figure 4 shows our proposed model’s performance when evaluated over the top n percent highest or lowest weighted messages, as learned by our model. We see that performance is much better when using high-attention messages than low-attention ones in all cases but CON, which we saw in Table 4 did not benefit much from message-level attention. Another note of interest is that AGR plateaus very quickly for high attention messages, which suggests that high-signal messages are rare but extremely predictive. In conclusion, while adding any attention is helpful, message-level attention provides overall larger gains than word-level attention. 7 Qualitative Value of Identifying Informative Text The high-signal text identified by our attentionbased models potentially provides additional, qualitative value for researchers interested in the relationship between language and personality. Bag-ofwords approaches to language modeling can identify attribute-relevant words (e.g. word clouds), but this can be limiting as it lacks the context in which the words appear. By contrast, a personality researcher interested in how high extraversion, for example, manifests itself in one’s language use can use our learned attention weights to identify whole messages that may warrant further study. Table 5 shows examples of messages that received high and low attention weights from the SN+Attn model for users at the extreme ends of each personality dimension. Overall, the highattention messages are thematically relevant to the target personality dimension. For example, the messages for conscientiousness focus on work and school responsibilities, while those for extraversion discuss social interactions. The high-attention words, highlighted in green, are also consistent with each personality dimension. For example, openness to experience highlights philosophical words (weird, nothingness, trippy) while agreeableness favors swear words (shit). In contrast, the low-attention messages have little relevance. To test whether our high-signal text might be of qualitative value to researchers, we asked two experts on personality (psychologists with past research in the area) to view 100 paired messages sets (20 per dimension) and select which set was more informative of the individual’s personality. Each paired set consisted of 5 messages within the top third of message weights and 5 in the bottom third for a given user. To reduce the frequency of long messages, we only selected messages whose length was at most 20 characters above or below that user’s average message length. The users themselves were randomly sampled from those in the top or bottom 10th percentile of each dimension and who had at least 20 messages total. Note that personality psychologists, though experts in how 5313 High OPE trippy day ahead .... nothingness at last .... shutter island was good .. . they are over ... yah my phone is not working ... High CON stoked on the exam schedule ! 40 % math midterm ? thank god 3/4 count . got a co-op job interview ! woo ! just had some damn good pears . note to self : buy more ? damnit . found free bag of skittles in the vending machine , jackpot . High EXT at the beach with keira ! ! ! getting ready for brittany’s dance recital tonight ! ! had fun at nathans barmitzvah last night ! ! ! i have made 72 cupcakes in the last 3 days ! ! ! ! lol just finished my science project :) Low AGR sooo excited for new school year :) going top make it awesome grudges are so ridiculous and pointless ¿ ¿ ahh shit almost 1 ! ? i need to finish this paper ! ! ! that sure was a fun ride home O.o wants to just skip to the next weekend . High NEU can’t believe i got that done in time ....... packing to go back to school makes me sad . losing things and is getting extremely frustrated . :( is amazed at how similar cameras are to your eyes . whhhaaa ? it’s only wednesday ... Table 5: Random selection of messages that received high (top) and low (bottom) attention weights from the SN+Attn model. Blue shades indicate strength of message-level attention and green indicates word-level attention. Each set of messages is from a single user, with that user having a personality score in the top or bottom 10th percentile. For brevity, only messages with 70 or fewer characters were included. personality manifests in behaviors like language, are not trained necessarily to identify it from microblog posts. The goal here is not to simply validate the attention, but to shed some light on where message attention helps and whether it is consistent with expectations from personality theory. Table 6 shows the percentage of instances where each expert identified the high-attention set as most informative, and their inter-rater agreement. Judges showed a preference towards the high-attention messages for OPE and AGR, while CON and NEU were no better than chance. These findings are somewhat consistent with Table 4, which showed that OPE and AGR benefited from message-level attention more than CON. Not only were EXT judgements no better than chance, but there was virtually no agreement among experts. This suggests that for some personality dimensions, individual messages have more or less relevance for personality, while for other dimensions there is little difference between messages (or at least it is difficult for both experts and our approach to capture differences). In general, our proposed model seems to identify text that is informative of one’s personality, both in terms of individual words and the overarching themes of the message as a whole, though this is easier for some dimensions than others. Modeling document relevance is useful, then, not just as a means to boost performance but as a tool to aid those seeking to better understand language. 8 Related Work Personality modeling from language is becoming increasingly important for many social scientific 5314 Percent Preferred High Expert 1 Expert 2 Cohen’s κ OPE 75% 75% .60 CON 55% 55% .60 EXT 55% 45% .08 AGR 75% 75% .76 NEU 40% 55% .79 Table 6: Personality experts picked which of a pair of message sets were most informative for prediction. Each pair contained five of the highest and five of the lowest-weighted messages for a user. Table shows the percentage of instances where the expert selected the high-attention message set as most informative, as well as Cohen’s κ inter-rater agreement. applications. For example, Preot¸iuc-Pietro et al. (2015) found personality features to be highly predictive of depression and PTSD. Lynn et al. (2017) demonstrated that the performance of document classification models can be improved by adapting to a variety of human factors, including personality. Personality has also been shown to be useful for deception detection (Fornaciari et al., 2013) and recommendation systems (Roshchina et al., 2011). Most research on personality modeling focuses on the Big Five, or Five-Factor Model (Costa and McCrae, 1992). Personality is traditionally measured using questionnaires, but cost and scalability issues make computational methods preferable. Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) features are popular for personality modeling (Yarkoni, 2010; Schwartz et al., 2013; Gjurkovi´c and ˇSnajder, 2018), as they readily provide insight into the type of language that correlates with certain personality dimensions. However, using predefined lexica is limiting; Schwartz et al. (2013) and Park et al. (2015) showed significantly improved prediction when using topics and n-grams extracted from their training set. When working with a very limited amount of data, Arnoux et al. (2017) found pre-trained word embeddings to be effective. Deep learning approaches to personality prediction are limited. Majumder et al. (2017) used a convolutional neural network (CNN) with max pooling, alongside traditional document features (e.g. word count). Their best results were obtained when they filtered out sentences that did not contain strong emotion words (as determined via lexica) during preprocessing. This supports our intuition that some messages contain stronger signal than others, though our approach allows the model to identify such cases. Yu and Markov (2017) also used CNNs with max- and average-pooling to predict personality over Facebook statuses. They experimented with fully-connected neural networks and bidirectional recurrent neural networks, but ultimately CNNs performed best. Both Majumder et al. (2017) and Yu and Markov (2017) used datasets that were significantly smaller than ours (n=2467 and n=9917, respectively) and their problems were framed as binary classification rather than regression5. 9 Conclusion Language-based personality prediction is an important task with many applications in social science and natural language processing. We presented a hierarchical sequence model with message- and word-level attention that learns to differentiate highand low-signal messages. Our approach, which novelly models the idea that all messages are not equally valuable for psychological regression tasks, achieves new state-of-the-art results for personality prediction and provides insight into the relationship between language and personality. Our analysis demonstrates that the level of abstraction at which attention is applied can have a significant impact on a model’s overall performance. Finally, this work highlights the critical role of document relevance as we progress with further human-centered natural language processing. Acknowledgments This work is supported in part by the National Science Foundation under Grant IIS-1815358. Data set used in grateful collaboration with Michal Kosinski and David Stillwell. We thank Google for supporting this research through the Google Cloud Platform credits. Thanks also to social and personality psychologists Sandra Matz and David Yaden for their help with the expert evaluation task. References Pierre-Hadrien Arnoux, Anbang Xu, Neil Boyette, Jalal Mahmud, Rama Akkiraju, and Vibha Sinha. 2017. 25 tweets to know you: A new model to predict personality with social media. ICWSM. R Michael Bagby, Russell T Joffe, James DA Parker, Valery Kalemba, and Kate L Harkness. 1995. Major 5Personality theory suggests factors are better represented as continuous dimensions than discrete types (McCrae and Costa Jr, 1989). 5315 depression and the five-factor model of personality. Journal of Personality Disorders, 9(3):224–234. Benjamin P Chapman, Brent Roberts, and Paul Duberstein. 2011. Personality and longevity: knowns, unknowns, and implications for public health and personalized medicine. Journal of aging research, 2011. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. P.T. Costa and R.R. McCrae. 1992. Revised NEO Personality Inventory (Neo-PI-R) and NEO Five-Factor Inventory (NEO-FFI). Psychological Assessment Resources. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Tommaso Fornaciari, Fabio Celli, and Massimo Poesio. 2013. The effect of personality type on deceptive communication style. In Intelligence and Security Informatics Conference (EISIC), 2013 European, pages 1–6. IEEE. Howard S Friedman and Margaret L Kern. 2014. Personality, well-being, and health. Annual review of psychology, 65:719–742. Matej Gjurkovi´c and Jan ˇSnajder. 2018. Reddit: A gold mine for personality prediction. In Proceedings of the Second Workshop on Computational Modeling of Peoples Opinions, Personality, and Emotions in Social Media, pages 87–97. Lewis R Goldberg, John A Johnson, Herbert W Eber, Robert Hogan, Michael C Ashton, C Robert Cloninger, and Harrison G Gough. 2006. The international personality item pool and the future of public-domain personality measures. Journal of Research in personality, 40(1):84–96. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Association for Computational Linguistics. Margaret L Kern, Johannes C Eichstaedt, H Andrew Schwartz, Lukasz Dziurzynski, Lyle H Ungar, David J Stillwell, Michal Kosinski, Stephanie M Ramones, and Martin EP Seligman. 2014. The online social self: An open vocabulary approach to personality. Assessment, 21(2):158–169. Michal Kosinski, David Stillwell, and Thore Graepel. 2013. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15):5802–5805. Vivek Kulkarni, Margaret L Kern, David Stillwell, Michal Kosinski, Sandra Matz, Lyle Ungar, Steven Skiena, and H Andrew Schwartz. 2018. Latent human traits in the language of social media: An openvocabulary approach. PloS one, 13(11):e0201703. Veronica E. Lynn, Alissa Goodman, Kate Niederhoffer, Kate Loveys, Philip Resnik, and H. Andrew Schwartz. 2018. CLPsych 2018 shared task: Predicting current and future psychological health from childhood essays. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 37–46, New Orleans, LA. Association for Computational Linguistics. Veronica E. Lynn, Youngseo Son, Vivek Kulkarni, Niranjan Balasubramanian, and H. Andrew Schwartz. 2017. Human centered NLP with user-factor adaptation. In Empirical Methods in Natural Language Processing, pages 1146–1155. Franc¸ois Mairesse, Marilyn A Walker, Matthias R Mehl, and Roger K Moore. 2007. Using linguistic cues for the automatic recognition of personality in conversation and text. Journal of artificial intelligence research, 30:457–500. Navonil Majumder, Soujanya Poria, Alexander Gelbukh, and Erik Cambria. 2017. Deep learning-based document modeling for personality detection from text. IEEE Intelligent Systems, 32(2):74–79. Matthew Matero, Akash Idnani, Youngseo Son, Salvatore Giorgi, Huy Vu, Mohammad Zamani, Parth Limbachiya, Sharath Chandra Guntuku, and H. Andrew Schwartz. 2019. Suicide risk assessment with multi-level dual-context language and BERT. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 39–44, Minneapolis, Minnesota. Association for Computational Linguistics. Sandra C Matz, Michal Kosinski, Gideon Nave, and David J Stillwell. 2017. Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the national academy of sciences, 114(48):12714–12719. Robert R McCrae and Paul T Costa Jr. 1989. Reinterpreting the myers-briggs type indicator from the perspective of the five-factor model of personality. Journal of personality, 57(1):17–40. Robert R McCrae and Paul T Costa Jr. 1997. Personality trait structure as a human universal. American psychologist, 52(5):509. 5316 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS. Walter Mischel, Yuichi Shoda, and Ozlem Ayduk. 2007. Introduction to personality: Toward an integrative science of the person. John Wiley & Sons. Kevin R Murphy and Charles O Davidshofer. 1988. Psychological Testing: Principles and Applications. Pearson. Gregory Park, H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Michal Kosinski, David J Stillwell, Lyle H Ungar, and Martin EP Seligman. 2015. Automatic personality assessment through social media language. Journal of Personality and Social Psychology, 108. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Associates, 71(2001):2001. Daniel Preot¸iuc-Pietro, Johannes Eichstaedt, Gregory Park, Maarten Sap, Laura Smith, Victoria Tobolsky, H Andrew Schwartz, and Lyle Ungar. 2015. The role of personality, age, and gender in tweeting about mental illness. In Proceedings of the 2nd workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality, pages 21–30. Alexandra Roshchina, John Cardiff, and Paolo Rosso. 2011. A comparative evaluation of personality estimation algorithms for the twin recommender system. In Proceedings of the 3rd international workshop on Search and mining user-generated contents, pages 11–18. ACM. H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin EP Seligman, et al. 2013. Personality, gender, and age in the language of social media: The open-vocabulary approach. PloS one, 8(9):e73791. Nakatani Shuyo. 2010. Language detection library for java. Louis Leon Thurstone. 1934. The vectors of mind. Psychological review, 41(1):1. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Tal Yarkoni. 2010. Personality in 100,000 words: A large-scale analysis of personality and word use among bloggers. Journal of research in personality, 44(3):363–373. Jianguo Yu and Konstantin Markov. 2017. Deep learning based personality recognition from Facebook status updates. In 2017 IEEE 8th International Conference on Awareness Science and Technology (iCAST), pages 383–387. IEEE.
2020
472
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5317–5331 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5317 Measuring Forecasting Skill from Text Shi Zong1 Alan Ritter1 Eduard Hovy2 1Department of Computer Science and Engineering, The Ohio State University 2Language Technologies Institute, Carnegie Mellon University {zong.56, ritter.1492}@osu.edu, [email protected] Abstract People vary in their ability to make accurate predictions about the future. Prior studies have shown that some individuals can predict the outcome of future events with consistently better accuracy. This leads to a natural question: what makes some forecasters better than others? In this paper we explore connections between the language people use to describe their predictions and their forecasting skill. Datasets from two different forecasting domains are explored: (1) geopolitical forecasts from Good Judgment Open, an online prediction forum and (2) a corpus of company earnings forecasts made by financial analysts. We present a number of linguistic metrics which are computed over text associated with people’s predictions about the future including: uncertainty, readability, and emotion. By studying linguistic factors associated with predictions, we are able to shed some light on the approach taken by skilled forecasters. Furthermore, we demonstrate that it is possible to accurately predict forecasting skill using a model that is based solely on language. This could potentially be useful for identifying accurate predictions or potentially skilled forecasters earlier.1 1 Introduction People often make predictions about the future, for example meteorologists tell us what the weather might look like tomorrow, financial analysts predict which companies will report favorable earnings and intelligence analysts evaluate the likelihood of future geopolitical events. An interesting question is why some individuals are significantly better forecasters (Mellers et al., 2015b)? Previous work has analyzed to what degree various factors (intelligence, thinking style, knowledge 1We provide our code and dataset descriptions at: https://github.com/viczong/measuring_ forecasting_skill_from_text. of a specific topic, etc.) contribute to a person’s skill. These studies have used surveys or psychological tests to measure dispositional, situational and behavioral variables (Mellers et al., 2015a). Another source of information has been largely overlooked, however: the language forecasters use to justify their predictions. Recent research has demonstrated that it is possible to accurately forecast the outcome of future events by aggregating social media users’ predictions and analyzing their veridicality (Swamy et al., 2017), but to our knowledge, no prior work has investigated whether it might be possible to measure a forecaster’s ability by analyzing their language. In this paper, we present the first systematic study of the connection between language and forecasting ability. To do so, we analyze texts written by top forecasters (ranked by accuracy against ground truth) in two domains: geopolitical forecasts from an online prediction forum, and company earnings forecasts made by financial analysts. To shed light on the differences in approach employed by skilled and unskilled forecasters, we investigate a variety of linguistic metrics. These metrics are computed using natural language processing methods to analyze sentiment (Pang et al., 2002; Wilson et al., 2005), uncertainty (de Marneffe et al., 2012; Saur´ı and Pustejovsky, 2012), readability, etc. In addition we make use of word lists taken from the Linguistic Inquiry and Word Count (LIWC) software (Tausczik and Pennebaker, 2010), which is widely used in psychological research. By analyzing forecasters’ texts, we are able to provide evidence to support or refute hypotheses about factors that may influence forecasting skill. For example, we show forecasters whose justifications contain a higher proportion of uncertain statements tend to make more accurate predictions. This supports the hypothesis that more open-minded thinkers, who have a higher tolerance for ambiguity tend to make 5318 better predictions (Tetlock, 2005). Beyond analyzing linguistic factors associated with forecasting ability, we further demonstrate that it is possible to identify skilled forecasters and accurate predictions based only on relevant text. Estimating the quality of a prediction using the forecaster’s language could potentially be very beneficial. For example, this does not require access to historical predictions to evaluate past performance, so it could help to identify potentially skilled individuals sooner. Also, forecasters do not always provide an explicit estimate of their confidence, so a confidence measure derived directly from text could be very useful. 2 Linguistic Cues of Accurate Forecasting In this section, we are interested in uncovering linguistic cues in people’s writing that are predictive of forecasting skill. We start by analyzing texts written by forecasters to justify their predictions in a geopolitical forecasting forum. Linguistic differences between forecasters are explored by aggregating metrics across each forecaster’s predictions. In §3, we analyze the accuracy of individual predictions using a dataset of financial analysts’ forecasts towards companies’ (continuous) earnings per share. By controlling for differences between analysts and companies, we are able to analyze intra-analyst differences between accurate and inaccurate forecasts. 2.1 Geopolitical Forecasting Data To explore the connections between language and forecasting skill, we make use of data from Good Judgment Open,2 an online prediction forum. Users of this website share predictions in response to a number of pre-specified questions about future events with uncertain outcomes, such as: “Will North Korea fire another intercontinental ballistic missile before August 2019?” Users’ predictions consist of an estimated chance the event will occur (for example, 5%) in addition to an optional text justification that explains why the forecast was made. A sample is presented in Figure 1. Preprocessing. Not all predictions contain associated text justifications; in this work, we only consider predictions with justifications containing more than 10 tokens. We ran langid.py (Lui 2https://www.gjopen.com/ Question: Will Kim Jong Un visit Seoul before 1 October 2019? Estimated Chance: 5% Forecast Justification: No North Korean leader has stepped foot in Seoul since the partition of the Koreas at the end of the Korean War. . . . Figure 1: A sample prediction made by a user in response to a question posted by the Economist. and Baldwin, 2012) to remove forecasts with nonEnglish text, and further restrict our data to contain only users that made at least 5 predictions with text. In our pilot studies, we also notice some forecasters directly quote text from outside resources (like Wikipedia, New York Times, etc.) as part of their justifications. To avoid including justifications that are mostly copied from external sources, we remove forecasts that consist of more than 50% text enclosed in quotation marks from the data. Dataset statistics. We collected all questions with binary answers that closed before April 9, 2019, leading to a total of 441 questions. 23,530 forecasters made 426,909 predictions. During preprocessing steps, 3,873 forecasts are identified as heavily quoted and thus removed. After removing nonEnglish and heavily quoted forecasts, forecasts with no text justifications or justifications less than 10 tokens, in addition users with fewer than 5 predictions with text, 55,099 forecasts made by 2,284 forecasters are selected for the final dataset. The distribution of predictions made by each forecaster is heavily skewed. 8.0% of forecasters make over 50 forecasts.3 On average, each forecaster makes 10.3 forecasts, excluding those who made over 50 predictions. In Table 1, we also provide breakdown statistics for top and bottom forecasters. 2.2 Measuring Ground Truth In order to build a model that can accurately classify good forecasters based on features of their language, we first need a metric to measure people’s forecasting skill. For this purpose we use Brier score (Brier, 1950), a commonly used measure for evaluating probabilistic forecasts.4 For questions 3In our dataset, forecasters could even make over 1,000 forecasts with justifications. 4Other possible scoring rules exist, for example ranking forecasters by log-likelihood. For a log-likelihood scoring rule, however, we need to adjust estimates of 1.00 and 0.00, which are not uncommon in the data, to avoid zero probability events. There are many ways this adjustment could be done and it is difficult to justify one choice over another. 5319 with binary answers, it is defined as: Forecaster’s Brier Score = 1 N N X i=1 (fi −oi)2 Here fi is the forecaster’s estimated probability, oi is a binary variable indicating the final outcome of the event, and N is the total number of forecasts. Brier scores can be interpreted as the mean squared error between the forecast probability and true answer; lower scores indicate better forecasts. Ranking forecasters. Directly comparing raw Brier scores is problematic, because users are free to choose questions they prefer, and could achieve a lower Brier score simply by selecting easier questions. To address this issue, we standardized Brier scores by subtracting the mean Brier scores and dividing by the standard deviation within questions (Mellers et al., 2015a). We construct a set of balanced datasets for training and evaluating classifiers by choosing the top K and bottom K forecasters respectively. In our experiments, we vary K from 100 to 1,000; when K=1,000, the task can be interpreted roughly as classifying all ∼2k users into the top or bottom half of forecasters.5 2.3 Linguistic Analysis In §2.2, we discussed how to measure ground-truth forecasting skill by comparing a user’s predictions against ground-truth outcomes. In the following subsections, we examine a selected series of linguistic phenomenon and their connections with forecasting ability. Statistical tests are conducted using the paired bootstrap (Efron and Tibshirani, 1994). As we are performing multiple hypothesis testing, we also report results for Bonferronicorrected significance level 0.05/30. As discussed in §2.1, the distribution of forecasts per user is highly skewed. To control for this, we compute averages for each forecaster and use aggregate statistics to compare differences between the two groups at the user-level. Analyses are performed over 6,639 justifications from the top 500 forecasters and 6,040 from bottom 500. 2.3.1 Textual Factors Length. We first check the average length of justifications from different groups and report our results 5Readers may wonder if there do exist differences between top and bottom forecasters. We provide justifications for our ranking approach in Appendix A.1. in Table 1. We observe that skilled forecasters normally write significantly longer justifications with more tokens per sentence. This suggests that good forecasters tend to provide more rationale to support their predictions. Metric Top 500 Btm 500 p Forecasters statistics # users making ≥50 forecasts 20 14 Avg. forecasts (w/o above users) 9.4 9.2 Length & word counts Avg. # tokens per user 69.1 47.0 ↑↑↑ % answers ≥100 tokens per user 18.5 8.3 ↑↑↑ Avg. # tokens per sentence 20.9 19.2 ↑↑↑ Table 1: Statistics of our dataset. p-values are calculated by bootstrap test. ↑↑↑: p < 0.001. Readability. We compute two widely used metrics for readability: (1) Flesch reading ease (Flesch, 1948) and (2) Dale-Chall formula (Dale and Chall, 1948). Table 2 summarizes our results on average readability scores. We find good forecasters have lower readability compared to bad forecasters. It is interesting to compare this result with the findings reported by Ganjigunte Ashok et al. (2013), who found a negative correlation between the success of novels and their readability, and also the work of Sawyer et al. (2008) who found award winning articles in academic marketing journals had higher readability. Our finding that more accurate forecasters write justifications that have lower readability suggests that skilled forecasters tend to use more complex language. Emotion. We also analyze the sentiment reflected in forecasters’ written text. Rather than analyzing sentiment orientation (“positive”, “negative”, or “neutral”), here we focus on measuring sentiment strength. We hypothesize that skilled forecasters organize their supporting claims in a more rational way using less emotional language. Many existing sentiment analysis tools (e.g., Socher et al. (2013)) are built on corpora such as the Stanford Sentiment Treebank, which are composed of movie reviews or similar texts. However, justifications in our dataset focus on expressing opinions towards future uncertain events, rather than simply expressing preferences toward a movie or restaurant, leading to a significant domain mismatch. In pilot studies, we noticed many sentences that are marked as negative by the Stanford sentiment analyzer on our data do not in fact express a negative emotion. We thus use Semantic Orientation CALculator (SO5320 Metric p Bonferroni Textual Factors Readability Flesch reading ease ↓↓ Dale-Chall ↑↑↑ ∗ Emotion Absolute sentiment strength ↓↓↓ ∗ Parts of Speech Cardinal ↑↑↑ ∗ Noun ↑↑ Preposition ↑↑↑ ∗ Pronoun ↓↓↓ ∗ 1st personal pronoun ↑ Verb ↓↓↓ ∗ Cognitive Factors Uncertainty % uncertain statements ↑↑↑ ∗ Tentative (LIWC) ↑↑↑ ∗ Thinking style % forecasts with quoted text ↑↑↑ ∗ Temporal orientation Focus on past ↑↑ Focus on present & future ↓↓↓ ∗ Table 2: Comparison of various metrics computed over text written by the top 500 and bottom 500 forecasters. Good forecasters tend to exhibit more uncertainty, cite outside resources, and tend toward neutral sentiment; they also use more complex language resulting in lower readability and focus more on past events. p-values are calculated by bootstrap test. The number of arrows indicates the level of p-value, while the direction shows the relative relationship between top and bottom forecasters, ↑↑↑: top group is higher than bottom group with p < 0.001, ↑↑: p < 0.01, ↑: p < 0.05. Tests that pass Bonferroni correction are marked by ∗. CAL), a lexicon-based model proposed by Taboada et al. (2011) which has been demonstrated to have good performance across a variety of domains. The model generates a score for each justification by adding together semantic scores of words present in the justification, with a 0 score indicating a neutral sentiment. We then take the absolute values of scores from the model and calculate averages for each group. Results in Table 2 show that the top 500 forecasters have a significantly lower average sentiment strength compared to bottom 500 forecasters, indicating statements from skilled forecasters tend to express neutral sentiment. Parts of Speech. As shown in Table 2, we observe that top forecasters use a higher percentage of cardinal numbers and nouns, while higher numbers of verbs are associated with lower forecasting ability.6 We also note the bottom 500 use a higher percentage of pronouns when justifying their predictions. To investigate this difference, we further separate first person pronouns7 from second or third person pronouns. As presented in Table 2, first person pronouns are used more often by the top forecasters. 2.3.2 Cognitive Factors We now evaluate a number of factors that were found to be related to decision making processes based on prior psychological studies (e.g., Mellers et al. (2015a)), that can be tested using computational tools. A number of these metrics are calculated by using the Linguistic Inquiry and Word Count (LIWC) lexicon (Tausczik and Pennebaker, 2010), a widely used tool for psychological and social science research. Uncertainty. To test the hypothesis that good forecasters have a greater tolerance for uncertainty and ambiguity, we employ several metrics to evaluate the degree of uncertainty reflected in their written language. We use the model proposed by Adel and Sch¨utze (2017) to estimate the proportion of uncertain statements made by each forecaster in our dataset. It is an attention based convolutional neural network model, that achieves state-of-theart results on a Wikipedia benchmark dataset from the 2010 CoNLL shared task (Farkas et al., 2010); we use the trained parameters provided by Adel and Sch¨utze (2017). After the model assigns an uncertainty label for each sentence, we calculate the percentage of sentences marked as uncertain. Results of this analysis are reported in Table 2; we observe that the top 500 forecasters make a significantly greater number of uncertain statements compared to the bottom 500, supporting the hypothesis mentioned above. Thinking style. In §2.1, we discussed the issue that many forecasts contain quoted text. Although we removed posts consisting of mostly quoted text as a preprocessing step, we are interested in how people use outside resources during their decision making process. We thus calculate the portion of forecasts with quotes for the two groups. We notice skilled forecasters cite outside resources more frequently. This may indicate that skilled forecasters tend to account for more information taken from external sources when making predictions. 6POS tags were obtained using Stanford CoreNLP. 7“I”, “me”, “mine”, “my” and “myself”. 5321 Temporal orientation. We make use of the LIWC lexicon (Tausczik and Pennebaker, 2010) to analyze the temporal orientation of forecasters’ justifications. We notice good forecasters tend to focus more on past events (reflected by tokens like “ago” and “talked”); bad forecasters pay more attention to what is currently happening or potential future events (using tokens like “now”, “will”, and “soon”). We conjecture this is because past events can provide more reliable evidence for what is likely to happen in the future. 2.4 Predicting Forecasting Skill In §2.3, we showed there are significant linguistic differences between justifications written by skilled and unskilled forecasters. This leads to a natural question: is it possible to automatically identify skilled forecasters based on the written text associated with their predictions? We examine this question in general terms first, then present experiments using a realistic setup for early prediction of forecasting skill in §2.5. Models and features. We start with a log-linear model using bag-of-ngram features extracted from the combined answers for each forecaster. We experimented with different combinations of n-gram features from sizes 1 to 4. N-grams of size 1 and 2 have best classification accuracy. We map n-grams that occur only once to a ⟨UNK⟩token, and replace all digits with 0. Inspired by our findings in §2.3, we also incorporate textual and cognition factors as features in our log-linear model. We also experiment with convolutional neural networks (Kim, 2014) and BERT (Devlin et al., 2019). The 1D convolutional neural network consists of a convolution layer, a max-pooling layer, and a fully connected layer. We minimize cross entropy loss using Adam (Kingma and Ba, 2015); the learning rate is 0.01 with a batch size of 32. We fine-tune BERT on our dataset, using a batch size of 5 and a learning rate of 5e-6. All hyperparameters were selected using a held-out dev set. Model performance. Results are presented in Table 3. As we increase the number of forecasters K, the task becomes more difficult as more forecasters are ranked in the middle. However, we observe a stable accuracy around 70%. All models consistently outperform a random baseline (50% accuracy), suggesting that the language users use to describe their predictions does indeed contain information that is predictive of forecasting ability. The n-grams with largest weights in the logistic regression model are presented in Table 4. We find that n-grams that seem to indicate uncertainty, including: “it seems unlikely”, “seem to have” and “it is likely” are among the largest positive weights. K 100 200 300 500 1000 LR Bag-of-ngrams 69.5 74.2 72.5 69.2 64.8 Textual 66.0 60.8 62.0 59.3 57.4 Cognitive 69.0 68.0 67.3 65.5 61.0 All above 70.5 73.5 73.3 69.8 64.7 Neural CNN 71.5 75.0 72.0 69.6 64.0 BERT-base 74.5 77.3 74.3 69.7 65.1 Table 3: Accuracy (%) on classifying skilled forecasters when choosing the top K and bottom K forecasters. For logistic regression (LR), we experiment with different sets of features: bag of {1, 2}-grams, textual factors in §2.3.1, cognitive factors in §2.3.2, and combination of all above. For neural networks (Neural), we use convolutional neural network (CNN) and BERT-base. All results are based on 5-fold cross validation. Top15 (High-weight) in the next / . also , / . however , / based on the / there are no / . according to / of time . / . based on / they wo n’t / there is no / it seems unlikely / do n’t see / it is likely / more of a / seem to have Bottom15 (Low-weight) will continue to / it will be / the world . / . it ’s / there is a / is not a / the west . / to be on / to be the / . yes , / he ’s a / there will be / in the world / will still be / . he will Table 4: High and low-weight n-gram features from the logistic regression model trained to identify good forecasters (K=500 with only 3-gram features for interpretability). Positive features indicate some uncertainty (e.g., “it is likely”, “seem to have” , “it seems unlikely”), in addition to consideration of evidence from many sources (e.g., “based on the”, “. according to”). 2.5 Identifying Good Forecasters Earlier With the model developed in §2.4, we are now ready to answer the following question: using only their first written justification, can we foresee a forecaster’s future performance? Setup. Our goal is to rank forecasters by their performance. We first equally split all 2,284 forecasters into two groups (top half versus bottom half) based on their standardized Brier scores. We then partition them into 60% train, 20% validation, and 20% test splits within each group. We combine all justifications for each forecaster in the training set. For forecasters in the validation and test sets, 5322 we only use their single earliest forecast. We use forecasters’ final rank sorted by averaged standardized Brier score over all forecasts as ground truth. We then compare our text-based model to the following two baselines: (1) a random baseline (50%) and (2) the standardized Brier score of the users’ single earliest forecast. Results. We calculate the proportion of good forecasters identified in the top N, ranked by our textbased model, and report results in Table 5. We observe that our models achieve comparable or even better performance relative to the first prediction’s adjusted Brier score. Calculating Brier scores requires knowing ground-truth, while our model can evaluate the performance of a forecaster without waiting to know the outcome of a predicted event. P@10 P@50 P@100 Brier score 60 64 62 Text-based (LR) 70 70 65 Text-based (CNN) 90 68 64 Text-based (BERT-base) 80 70 67 Table 5: Precision@N of identifying skilled forecasters based on their first prediction. 3 Companies’ Earnings Forecasts In §2, we showed that linguistic differences exist between good and bad forecasters, and furthermore, these differences can be used to predict which forecasters will perform better. We now turn to the question of whether it is possible to identify which individual forecasts, made by the same person, are more likely to be correct. The Good Judgment Open data is not suitable to answer this question, because forecasts are discrete, and thus do not provide a way to rank individual predictions by accuracy beyond whether they are correct or not. Therefore, in this section, we consider numerical forecasts in the financial domain, which can be ranked by their accuracy as measured against ground truth. In this paper, we analyze forecasts of companies’ earnings per share (EPS). Earnings per share is defined as the portion of a company’s profit allocated to each share of common stock. It is an important indicator of a company’s ability to make profits. For our purposes, EPS also supports a cleaner experimental design as compared to stock prices, which constantly change in real time. Data. We analyze reports from the Center for Financial Research and Analysis (CFRA).8 These reports provide frequent updates for analysts’ estimates and are also organized in a structured way, enabling us to accurately extract numerical forecasts and corresponding text justifications. We collected CFRA’s analyst reports from the Thomson ONE database9 from 2014 to 2018. All notes making forecasts are extracted under the “Analyst Research Notes and other Company News” section. The dataset contains a total of 32,807 notes from analysts, covering 1,320 companies. 3.1 Measuring Ground Truth We use a pattern-based approach (in Appendix B.1) for extracting numerical forecasts. After removing notes without EPS estimates, 16,044 notes on 1,135 companies remain (this is after removing analysts who make fewer than 100 forecasts as discussed later in this section). We next evaluate whether the text can reflect how accurate these predictions are. Forecast error. We measure the correctness of forecasts by absolute relative error (Barefield and Comiskey, 1975; Dreman and Berry, 1995). The error is defined by the absolute difference between the analyst’s estimate e and corresponding actual EPS o, scaled by the actual EPS: Forecast Error = |e −o| |o| Low forecast errors indicate accurate forecasts.10 Ranking individual forecasts. As our goal is to study the intra-analyst differences between accurate and inaccurate forecasts, we standardize forecast errors within each analyst by subtracting the analyst’s mean forecast error and then dividing by the standard deviation. To guarantee we have a good estimate for the mean, we only include analysts who make at least 100 forecasts (19 analysts are selected). We notice most forecast errors are smaller than 1, while a few forecasts are associated with very large forecasting errors.11 Including these outliers would greatly affect our estimation for analysts’ mean error. Thus, we only use the first 90% of the sorted forecast errors in this calculation. 8https://www.cfraresearch.com/ 9https://www.thomsonone.com/ 10Other methods for measuring the forecasting error have been proposed, for example to scale the relative error by stock price. We do not take this approach as stock prices are dynamically changing. 11For example, one analyst estimated an EPS for Fiscal Year 2015 of Olin Corporation (OLN) as $1.63, while the actual EPS was $-0.01, a standardized forecast error of 164. 5323 3.2 Predicting Forecasting Error from Text Our goal is to test whether linguistic differences exist between accurate and inaccurate forecasts, independently of who made the prediction, or how difficult a specific company’s earnings might be to predict. To control for these factors, we standardize forecasting errors within analysts (as described in §3.1), and create training/dev/test splits across companies and dates. Setting. We collect the top K and bottom K predictions and split train, dev and test sets by time range and company. All company names are randomly split into 80% train and 20% evaluation sets. We use predictions for companies in the train group that were made in 2014-2016 as our training data. The dev set and test set consist of predictions for companies in evaluation group made during the years 2017 and 2018, respectively. All hyperparameters are the same as those used in §2.4. When evaluating the classifier’s performance, we balance the data for positive and negative categories. Results. Table 6 shows the performance of our classifier on the test set. We observe our classifiers consistently achieve around 60% accuracy when varying the number of top and bottom forecasts, K. K 1000 2000 3000 5000 LR Bag-of-ngrams 63.9 62.5 61.9 59.3 Linguistic 56.3 59.2 55.4 55.5 All above 64.3 64.1 61.5 59.7 Neural CNN 66.7 67.8 64.7 64.0 BERT-base 70.8 66.7 65.8 64.4 Table 6: Accuracy (%) for classifying accurate predictions when using top K and bottom K analysts’ predictions. We choose n-gram sizes to be 1 and 2. All reported results are on the test set. 3.3 Linguistic Analysis We present our linguistic analysis in Table 7. The same set of linguistic features in §2.3 is applied to top 4,000 accurate and bottom 4,000 inaccurate analysts notes, excluding readability metric and quotation measure in thinking style metric. Analysts’ notes are written in a professional manner, which makes readability metric not applicable. The notes do not contain many quoted text so we exclude quotation measure from the analysis. We also replace the emotion metric with a sentiment lexicon specifically tailored for financial domain and provide our discussions. The Bonferroni-corrected significance level is 0.05/15. We defer discussions to §4 for comparing across different domains. On average, each forecast contains 132.2 tokens with 5.5 sentences. Financial sentiment. We make use of a lexicon developed by Loughran and Mcdonald (2011), which is specifically designed for financial domain. The ratio of positive and negative sentiment terms to total number of tokens is compared. Our results show that inaccurate forecasts use significantly more negative sentiment terms. Metric p Bonferroni Parts of Speech Cardinal ↑↑ Noun ↑↑ Verb ↓↓↓ ∗ Uncertainty % uncertain statements ↓↓ ∗ Temporal orientation Focus on past ↑↑ ∗ Focus on present & future ↓↓↓ ∗ Financial sentiment Positive ↑↑ Negative ↓↓↓ ∗ Table 7: Comparison of various metrics over top 4,000 accurate and bottom 4,000 inaccurate forecasts. Only hypotheses with p < 0.05 are reported. See §3.3 for detailed justifications. We follow the same notation as in Table 2, ↑↑↑: p < 0.001, ↑↑: p < 0.01, ↑: p < 0.05. 4 Comparison of Findings Across Domains In §2 and §3, we analyze the language people use when they make forecasts in geopolitical and financial domains. Specifically, these two sections reveal how language is associated with accuracy both within and across forecasters. In this section, we compare our findings from these domains. Our studies reveal several shared characteristics of accurate forecasts from a linguistic perspective over geopolitical and financial domains (in Table 2 and Table 7). For example, we notice that skilled forecasters and accurate forecasts more frequently refer to past events. We also notice accurate predictions consistently use more nouns while unskilled forecasters use more verbs. We also note one main difference between two domains is uncertainty metric: in Good Judgment Open dataset, we observe that more skilled forecast5324 ers employ a higher level of uncertainty; while for individual forecasts, less uncertainty seems to be better. It makes us consider the following hypothesis: within each forecaster, people are more likely to be correct when they are more certain about their judgments, while in general skilled forecasters exhibit a higher level of uncertainty. To test this hypothesis, we calculate the Spearman’s ρ between the financial analysts’ mean forecasting errors and their average portion of uncertain statements. Results show that these two variables are negative correlated with ρ=-0.24, which provides some support for our hypothesis, however the sample size is very small (there are only 19 analysts in the financial dataset). Also, these mean forecasting errors are not standardized by the difficulty of companies analysts are forecasting. 5 Related Work Many recent studies have analyzed connections between users’ language and human attributes (Hovy et al., 2015; Nguyen et al., 2013; Volkova et al., 2014; Tan et al., 2016; Althoff et al., 2014). Son et al. (2018) developed a tool for discourse analysis in social media and found that older individuals and females tend to use more causal explanations. Another example is work by Schwartz et al. (2015), who developed automatic classifiers for temporal orientation and found important differences relating to age, gender in addition to Big Five personality traits. Eichstaedt et al. (2015) showed that language expressed on Twitter can be predictive of community-level psychological correlates, in addition to rates of heart disease. Demszky et al. (2019) analyzed political polarization in social media and Voigt et al. (2017) examined the connections between police officers’ politeness and race by analyzing language. A number of studies (De Choudhury et al., 2014; Eichstaedt et al., 2018; Benton et al., 2017; Park et al., 2017) have examined the connection between users’ language on social media and depression and alcohol use (Kiciman et al., 2018). Other work has analyzed users’ language to study the effect of attributes, such as gender, in online communication (Bamman et al., 2014; Wang and Jurgens, 2018; Voigt et al., 2018). In this work we study the relationship between people’s language and their forecasting skill. To the best of our knowledge, this is the first work that presents a computational way of exploring this direction. Our work is also closely related to prior research on predicting various phenomenon from users’ language. For example Tan et al. (2014) study the effect of wording on message propagation, Gillick and Bamman (2018) examine the connection between language used by politicians in campaign speeches and applause and P´erez-Rosas and Mihalcea (2015) explored linguistic differences between truthful and deceptive statements. Ganjigunte Ashok et al. (2013) show linguistic cues drawn from authors’ language are strong indicators of the success of their books and Tsur and Rappoport (2009) presented an unsupervised model to analyze the helpfulness of book reviews by analyzing their text. There have been several studies using data from Good Judgment Open or Good Judgment Project (Mellers et al., 2015b). One recent study examining the language side of this data is Schwartz et al. (2017). Their main goal is to suggest objective metrics as alternatives for subjective ratings when evaluating the quality of recommendations. To achieve this, justifications written by one group are provided as tips to another group. These justifications are then evaluated on their ability to persuade people to update their predictions, leading to real benefits that can be measured by objective metrics. Prior work has also studied persuasive language on crowdfunding platforms (Yang et al., 2019). In contrast, our work focuses on directly measuring forecasting skill based on text justifications. Finally we note that there is a long history of research on financial analysts’ forecasting ability (Crichfield et al., 1978; Chopra, 1998; Loh and Mian, 2006). Most work relies on regression models to test if pre-identified factors are correlated with forecasting skill (e.g., Loh and Mian (2006); Call et al. (2009)). Some work has also explored the use of textual information in financial domain. For example, Kogan et al. (2009) present a study of predicting companies’ risk by using financial reports. We also note a recent paper on studying financial analysts’ decision making process by using text-based features from earning calls (Keith and Stent, 2019). As far as we aware, our work is the first to evaluate analysts’ forecasting skill based on their language. 6 Limitations and Future Work Our experiments demonstrated it is possible to analyze language to estimate people’s skill at making predictions about the future. In this section we 5325 highlight several limitations of our study and ethical issues that should be considered before applying our predictive models in a real-world application. In our study, we only considered questions with binary answers; future work might explore questions with multiple-choice outcomes. Prior studies have found that people’s forecasting skills can be improved through experience and training (Mellers et al., 2014). Our study does not take this into account as we do not have detailed information on the forecasters’ prior experience. Finally, we have not investigated the differences in our model’s outputs on different demographic groups (e.g., men versus women), so our models may contain unknown biases and should not be used to make decisions that might affect people’s careers. 7 Conclusion In this work, we presented the first study of connections between people’s forecasting skill and language used to justify their predictions. We analyzed people’s forecasts in two domains: geopolitical forecasts from an online prediction forum and a corpus of company earning forecasts made by financial analysts. We investigated a number of linguistic metrics that are related to people’s cognitive processes while making predictions, including: uncertainty, readability and emotion. Our experimental results support several findings from the psychology literature. For example, we observe that skilled forecasters are more open-minded and exhibit a higher level of uncertainty about future events. We further demonstrated that it is possible to identify skilled forecasters and accurate predictions based solely on language. Acknowledgments We would like to thank the anonymous reviewers for providing valuable feedback on an earlier draft of this paper. This material is based in part on research sponsored by the NSF (IIS-1845670), ODNI and IARPA via the BETTER program (201919051600004) DARPA via the ARO (W911NF-17C-0095) in addition to an Amazon Research Award. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, ARO, IARPA, DARPA or the U.S. Government. References Heike Adel and Hinrich Sch¨utze. 2017. Exploring different dimensions of attention for uncertainty detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 22–34, Valencia, Spain. Association for Computational Linguistics. Tim Althoff, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2014. How to ask for a favor: A case study on the success of altruistic requests. David Bamman, Jacob Eisenstein, and Tyler Schnoebelen. 2014. Gender identity and lexical variation in social media. Journal of Sociolinguistics. Russell M. Barefield and Eugene E. Comiskey. 1975. The accuracy of analysts’ forecasts of earnings per share. Journal of Business Research, 3(3):241–252. Adrian Benton, Margaret Mitchell, and Dirk Hovy. 2017. Multitask learning for mental health conditions with limited social media data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Glenn W Brier. 1950. Verification of forecasts expressed in terms of probability. Monthly Weather Review. Andrew C. Call, Shuping Chen, and Yen H. Tong. 2009. Are analysts’ earnings forecasts more accurate when accompanied by cash flow forecasts? Review of Accounting Studies, 14(2):358–391. Vijay Kumar Chopra. 1998. Why so much error in analysts’ earnings forecasts? Financial Analysts Journal, 54(6):35–42. Timothy Crichfield, Thomas Dyckman, and Josef Lakonishok. 1978. An evaluation of security analysts’ forecasts. The Accounting Review, 53(3):651– 668. Edgar Dale and Jeanne S. Chall. 1948. A formula for predicting readability. Educational Research Bulletin, 27(1):11–28. Debopam Das, Tatjana Scheffler, Peter Bourgonje, and Manfred Stede. 2018. Constructing a lexicon of English discourse connectives. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 360–365, Melbourne, Australia. Association for Computational Linguistics. Munmun De Choudhury, Scott Counts, Eric J. Horvitz, and Aaron Hoff. 2014. Characterizing and predicting postpartum depression from shared facebook data. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work &#38; Social Computing, CSCW ’14. 5326 Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Matthew Gentzkow, Jesse Shapiro, and Dan Jurafsky. 2019. Analyzing polarization in social media: Method and application to tweets on 21 mass shootings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. David N. Dreman and Michael A. Berry. 1995. Analyst forecasting errors and their implications for security analysis. Financial Analysts Journal, 51(3):30–41. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Johannes C Eichstaedt, Hansen Andrew Schwartz, Margaret L Kern, Gregory Park, Darwin R Labarthe, Raina M Merchant, Sneha Jha, Megha Agrawal, Lukasz A Dziurzynski, Maarten Sap, et al. 2015. Psychological language on twitter predicts countylevel heart disease mortality. Psychological science. Johannes C. Eichstaedt, Robert J. Smith, Raina M. Merchant, Lyle H. Ungar, Patrick Crutchley, Daniel Preot¸iuc-Pietro, David A. Asch, and H. Andrew Schwartz. 2018. Facebook language predicts depression in medical records. Proceedings of the National Academy of Sciences, 115(44):11203–11208. Rich´ard Farkas, Veronika Vincze, Gy¨orgy M´ora, J´anos Csirik, and Gy¨orgy Szarvas. 2010. The CoNLL2010 shared task: Learning to detect hedges and their scope in natural language text. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 1–12, Uppsala, Sweden. Association for Computational Linguistics. Rudolph Flesch. 1948. A new readability yardstick. Journal of Applied Psychology, 32(3):221–233. Vikas Ganjigunte Ashok, Song Feng, and Yejin Choi. 2013. Success with style: Using writing style to predict the success of novels. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1753–1764, Seattle, Washington, USA. Association for Computational Linguistics. Jon Gillick and David Bamman. 2018. Please clap: Modeling applause in campaign speeches. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Dirk Hovy, Anders Johannsen, and Anders Søgaard. 2015. User review sites as a resource for largescale sociolinguistic studies. In Proceedings of the 24th international conference on World Wide Web. International World Wide Web Conferences Steering Committee. Katherine Keith and Amanda Stent. 2019. Modeling financial analysts’ decision making via the pragmatics and semantics of earnings calls. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 493–503, Florence, Italy. Association for Computational Linguistics. Emre Kiciman, Scott Counts, and Melissa Gasser. 2018. Using longitudinal social media analysis to understand the effects of early college alcohol use. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Shimon Kogan, Dimitry Levin, Bryan R. Routledge, Jacob S. Sagi, and Noah A. Smith. 2009. Predicting risk from financial reports with regression. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 272–280, Boulder, Colorado. Association for Computational Linguistics. Roger K. Loh and G. Mujtaba Mian. 2006. Do accurate earnings forecasts facilitate superior investment recommendations? Journal of Financial Economics, 80(2):455 – 483. Tim Loughran and Bill Mcdonald. 2011. When is a liability not a liability? textual analysis, dictionaries, and 10-ks. The Journal of Finance, 66(1):35–65. Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 System Demonstrations, pages 25–30, Jeju Island, Korea. Association for Computational Linguistics. Marie-Catherine de Marneffe, Christopher D Manning, and Christopher Potts. 2012. Did it happen? the pragmatic complexity of veridicality assessment. Computational linguistics. Barbara Mellers, Eric Stone, Pavel Atanasov, Nick Rohrbaugh, S Emlen Metz, Lyle Ungar, Michael M Bishop, Michael Horowitz, Ed Merkle, and Philip Tetlock. 2015a. The psychology of intelligence analysis: Drivers of prediction accuracy in world politics. Journal of Experimental Psychology: Applied, 21. 5327 Barbara Mellers, Eric Stone, Terry Murray, Angela Minster, Nick Rohrbaugh, Michael Bishop, Eva Chen, Joshua Baker, Yuan Hou, Michael Horowitz, et al. 2015b. Identifying and cultivating superforecasters as a method of improving probabilistic predictions. Perspectives on Psychological Science. Barbara Mellers, Lyle Ungar, Jonathan Baron, Jaime Ramos, Burcu Gurcay, Katrina Fincher, Sydney E. Scott, Don Moore, Pavel Atanasov, Samuel A. Swift, Terry Murray, Eric Stone, and Philip E. Tetlock. 2014. Psychological strategies for winning a geopolitical forecasting tournament. Psychological Science, 25(5):1106–1115. PMID: 24659192. Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. ” how old do you think i am?” a study of language and age in twitter. In Seventh International AAAI Conference on Weblogs and Social Media. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10. Association for Computational Linguistics. Gregory Park, H Andrew Schwartz, Maarten Sap, Margaret L Kern, Evan Weingarten, Johannes C Eichstaedt, Jonah Berger, David J Stillwell, Michal Kosinski, Lyle H Ungar, et al. 2017. Living in the past, present, and future: Measuring temporal orientation with language. Journal of personality. Ver´onica P´erez-Rosas and Rada Mihalcea. 2015. Experiments in open domain deception detection. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Roser Saur´ı and James Pustejovsky. 2012. Are you sure that this happened? assessing the factuality degree of events in text. Computational Linguistics. Alan G. Sawyer, Juliano Laran, and Jun Xu. 2008. The readability of marketing journals: Are awardwinning articles better written? Journal of Marketing, 72(1):108–117. H Andrew Schwartz, Gregory Park, Maarten Sap, Evan Weingarten, Johannes Eichstaedt, Margaret Kern, David Stillwell, Michal Kosinski, Jonah Berger, Martin Seligman, et al. 2015. Extracting human temporal orientation from facebook language. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. H. Andrew Schwartz, Masoud Rouhizadeh, Michael Bishop, Philip Tetlock, Barbara Mellers, and Lyle Ungar. 2017. Assessing objective recommendation quality through political forecasting. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2348–2357, Copenhagen, Denmark. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Youngseo Son, Nipun Bayas, and H Andrew Schwartz. 2018. Causal explanation analysis on social media. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Sandesh Swamy, Alan Ritter, and Marie-Catherine de Marneffe. 2017. “i have a feeling trump will win..................”: Forecasting winners and losers from user predictions on twitter. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1583–1592, Copenhagen, Denmark. Association for Computational Linguistics. Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2):267–307. Chenhao Tan, Lillian Lee, and Bo Pang. 2014. The effect of wording on message propagation: Topic- and author-controlled natural experiments on twitter. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 175–185, Baltimore, Maryland. Association for Computational Linguistics. Chenhao Tan, Vlad Niculae, Cristian DanescuNiculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th International Conference on World Wide Web, WWW ’16, pages 613–624, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Yla R. Tausczik and James W. Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of Language and Social Psychology. Philip Tetlock. 2005. Expert political judgment: How good is it? how can we know? Oren Tsur and Ari Rappoport. 2009. Revrank: A fully unsupervised algorithm for selecting the most helpful book reviews. In Third International AAAI Conference on Weblogs and Social Media. Rob Voigt, Nicholas P Camp, Vinodkumar Prabhakaran, William L Hamilton, Rebecca C Hetey, Camilla M Griffiths, David Jurgens, Dan Jurafsky, and Jennifer L Eberhardt. 2017. Language from police body camera footage shows racial disparities in officer respect. Proceedings of the National Academy of Sciences. 5328 Rob Voigt, David Jurgens, Vinodkumar Prabhakaran, Dan Jurafsky, and Yulia Tsvetkov. 2018. Rtgender: A corpus for studying differential responses to gender. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political preferences from streaming communications. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Zijian Wang and David Jurgens. 2018. It’s going to be okay: Measuring access to support in online communities. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 33–45, Brussels, Belgium. Association for Computational Linguistics. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, and Eduard Hovy. 2019. Lets make your request more persuasive: Modeling persuasive strategies via semisupervised neural nets on crowdfunding platforms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 5329 A Additional Experiments on Good Judgment Open Dataset A.1 Differences Between Top and Bottom Forecasters? Figure 2 presents calibration curves and averaged standardized Brier scores across years for the top and bottom 500 forecasters. We observe the differences between these two groups are persistent over time. Controlled lab experiments from psychology have also demonstrated that top forecasters ranked by Brier scores consistently have better forecasting performance than bottom forecasters (Mellers et al., 2015a). 0.5 0.6 0.7 0.8 0.9 1.0 Confidence 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy top ranked forecasters bottom ranked forecasters (a) Calibration curves by using all forecasts. 2015 2016 2017 2018 Year 0.4 0.2 0.0 0.2 0.4 0.6 0.8 Standard Brier score top ranked forecasters bottom ranked forecasters (b) Aggregated forecasting performance across years. Figure 2: Comparison of forecasting skill between the top 500 and bottom 500 forecasters ranked by averaged standardized Brier scores. (a) Calibration curves for each group calculated using all forecasts (with and without justifications). The diagonal dotted line indicates a perfect calibration. (b) Trends of average standardized Brier scores over years. Negative values indicate better forecasting skill. A.2 Additional Metrics and Examples for Linguistic Analysis Uncertainty. We present examples of sentences with uncertainty scores from our dataset in Table 9. Discourse connectives. We further investigate the portion of discourse connectives used between sentences within each group. For this purpose, we use a lexicon developed by Das et al. (2018), which collects connectives from PDTB corpus connective list, RST Signalling Corpus and RST-DT relational indicator list. The lexicon contains 149 English connectives, divided into 4 categories: comparison, contingency, expansion, and temporal.12 Our results show that skilled forecasters tend to use discourse connectives more frequently compared to unskilled forecasters, which may indicate that they tend to make more coherent arguments. Thinking style. Analytical thinking score in LIWC (Tausczik and Pennebaker, 2010) ranks the level of a person’s thinking skill. A high score correlates with formal, logical, and hierarchical thinking, while low scores are associated with informal, and narrative thinking. As shown in Table 8, good forecasters appear to demonstrate better analytical thinking skills. Metric p Bonferroni Discourse connectives Comparison ↑↑↑ ∗ Contingency ↑↑ Expansion ↑↑ ∗ Temporal ↑↑↑ ∗ Thinking style Analytical thinking ↑↑ ∗ Table 8: Comparison of various metrics computed over text written by the top 500 and bottom 500 forecasters. p-values are calculated by bootstrap hypothesis test. The number of arrows indicates the level of p-value, while the direction shows the relative relationship between top and bottom forecasters, ↑↑↑: top group is higher than bottom group with p < 0.001, ↑↑: p < 0.01, ↑: p < 0.05. Tests that pass Bonferroni correction are marked by ∗. A.3 Linguistic Cues over Time We are interested in whether our observed linguistic differences are consistent over time. To answer this question, we select the top 500 and bottom 500 forecasters based on their final ranking and evaluate aggregated metrics for the two groups in different years. Our results are shown in Figure 3. We observe the same pattern for all linguistic metrics. For example, skilled forecasters consistently exhibit a higher level of uncertainty and past temporal orientation, and a lower readability compared to unskilled forecasters. 12As some connectives are listed under more than one category, we restrict the list to those belonging to one or two categories. 5330 Sentence Uncert. Score Merkel is probably least prone to political scandals among the Western leaders and candidates . 1.00 It seems unlikely that the court would transfer the terms of that contract to Uber . 0.99 My assumptions : - Sturgeon will not set a date for indyref2 before the UK elections on June 8 . 0.05 To date , Toyota has distributed only 100 of the 300 Mirais preordered in California ... 0.02 Table 9: Examples of sentences in our dataset with uncertainty scores estimated by the model proposed by Adel and Sch¨utze (2017). A higher uncertainty score indicates a higher level of uncertainty. 2015 2016 2017 2018 Year 8.6 8.8 9.0 9.2 9.4 Readability top ranked forecasters bottom ranked forecasters (a) Readability (Dale) 2015 2016 2017 2018 Year 1.0 1.1 1.2 1.3 1.4 1.5 Absolute sentiment strength top ranked forecasters bottom ranked forecasters (b) Emotion 2015 2016 2017 2018 Year 0.170 0.175 0.180 0.185 0.190 0.195 Noun top ranked forecasters bottom ranked forecasters (c) Parts of Speech (noun) 2015 2016 2017 2018 Year 0.180 0.185 0.190 0.195 0.200 0.205 Verb top ranked forecasters bottom ranked forecasters (d) Parts of Speech (verb) 2015 2016 2017 2018 Year 0.09 0.10 0.11 0.12 0.13 0.14 0.15 0.16 0.17 0.18 Comparison top ranked forecasters bottom ranked forecasters (e) Discourse connectives (comparison) 2015 2016 2017 2018 Year 0.12 0.14 0.16 0.18 0.20 Temporal top ranked forecasters bottom ranked forecasters (f) Discourse connectives (temporal) 2015 2016 2017 2018 Year 0.44 0.46 0.48 0.50 0.52 0.54 0.56 0.58 Uncertainty top ranked forecasters bottom ranked forecasters (g) Uncertainty 2015 2016 2017 2018 Year 64 66 68 70 72 74 76 78 Analytical top ranked forecasters bottom ranked forecasters (h) Thinking style (analytical score) 2015 2016 2017 2018 Year 0.013 0.014 0.015 0.016 0.017 0.018 0.019 0.020 Focus on past top ranked forecasters bottom ranked forecasters (i) Temporal orientation (focus on past) Figure 3: Linguistic features in different years for top 500 and bottom 500 forecasters. The plots show how readability (Dale), emotion, Parts of Speech (noun and verb), discourse connectives (comparison and temporal), uncertainty, thinking style (analytical score), and temporal orientation (focus on past) change in different years. We observe nearly consistent trends for all metrics over time, which indicates that linguistic differences are stable. Error bars represent standard errors. B Experimental Details on Companies’ Earning Forecasts B.1 Extracting Numerical Forecasts from Text Not all analysts’ notes in our dataset are associated with structured earnings forecasts (in tables). Instead, the analysts’ numerical predictions for future earnings are directly reported in the text of their notes, which also contain additional language justifying their predictions. Therefore, our first goal is to extract structured representations of analysts’ EPS estimates in a ⟨TIME, VALUE⟩format. We noticed that analysts have a highly consistent style when writing this section of the report, we therefore use a set of lexico-syntactic patterns to extract the forecasts from text; as described below. We found this approach to have both high precision and high recall. We randomly sampled 60% of the notes in our dataset for developing patterns. Before generating the rules, we replaced entities indicating time 5331 Sentence We trim our 12-month target price to $20 from $23 , 10X our ’16 EPS estimate of $2.01 -LRB- trimmed today from $2.10 -RRB- . Pattern ⟨TIME⟩EPS estimate of ⟨MONEY⟩ Extracted ⟨’16, $2.01⟩ Sentence We raise ’18 and ’19 EPS estimates by $4.61 and $5.72 to $19.85 and $25.95 . Pattern ⟨TIME⟩and ⟨TIME⟩EPS estimates ⟨BY-MASK⟩to ⟨MONEY⟩and ⟨MONEY⟩ Extracted ⟨’18, $19.85⟩, ⟨’19, $25.95⟩ Sentence We raise our FY 17 EPS estimate to $3.23 from $2.96 and set FY 18 ’s at $3.43 . Pattern ⟨TIME⟩EPS estimate to ⟨MONEY⟩⟨FROM-MASK⟩and set ⟨TIME⟩at ⟨MONEY⟩ Extracted ⟨FY 17, $3.23⟩, ⟨FY 18, $3.43⟩ Table 10: Examples of earnings forecasts extracted from analysts’ notes. Only sentences mentioning the earnings forecast are shown; the notes also contain additional analysis to justify the forecast. All sentences from notes are used to classify accurate versus inaccurate forecasts as described in §3.2. and money with special ⟨TIME⟩and ⟨MONEY⟩tokens. To evaluate the generalization of our patterns, we randomly sampled 100 sentences containing 136 numerical forecasts from the remaining 40% of notes and manually checked all of them. We estimate that our pattern-based approach extracts numerical forecasts with 0.91 precision and 0.82 recall. Table 10 shows examples of numerical forecasts extracted using our approach. In a few cases we found that an analyst’s note can contain more than one forecast. For simplicity, we only consider the earliest forecast that is made within the 2014-2018 time range.
2020
473
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5332–5344 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5332 Text and Causal Inference: A Review of Using Text to Remove Confounding from Causal Estimates Katherine A. Keith, David Jensen, and Brendan O’Connor College of Information and Computer Sciences University of Massachusetts Amherst {kkeith,jensen,brenocon}@cs.umass.edu Abstract Many applications of computational social science aim to infer causal conclusions from nonexperimental data. Such observational data often contains confounders, variables that influence both potential causes and potential effects. Unmeasured or latent confounders can bias causal estimates, and this has motivated interest in measuring potential confounders from observed text. For example, an individual’s entire history of social media posts or the content of a news article could provide a rich measurement of multiple confounders. Yet, methods and applications for this problem are scattered across different communities and evaluation practices are inconsistent. This review is the first to gather and categorize these examples and provide a guide to dataprocessing and evaluation decisions. Despite increased attention on adjusting for confounding using text, there are still many open problems, which we highlight in this paper. 1 Introduction In contrast to descriptive or predictive tasks, causal inference aims to understand how intervening on one variable affects another variable (Holland, 1986; Pearl, 2000; Morgan and Winship, 2015; Imbens and Rubin, 2015; Hern´an and Robins, 2020). Specifically, many applied researchers aim to estimate the size of a specific causal effect, the effect of a single treatment variable on an outcome variable. However, a major challenge in causal inference is addressing confounders, variables that influence both treatment and outcome. For example, consider estimating the size of the causal effect of smoking (treatment) on life expectancy (outcome). Occupation is a potential confounder that may influence both the propensity to smoke and life expectancy. Estimating the effect of treatment on outcome without accounting for this confounding could result in Figure 1: Left: A causal diagram for text that encodes causal confounders, the setting that is focus of this review paper. The major assumption is that latent confounders can be measured from text and those confounder measurements can be used in causal adjustments. Right: An example application in which practitioner does not have access to the confounding variable, occupation, in structured form but can measure confounders from unstructured text (e.g. an individual’s social media posts). strongly biased estimates and thus invalid causal conclusions. To eliminate confounding bias, one approach is to perform randomized controlled trials (RCTs) in which researchers randomly assign treatment. Yet, in many research areas such as healthcare, education, or economics, randomly assigning treatment is either infeasible or unethical. For instance, in our running example, one cannot ethically randomly assign participants to smoke since this could expose them to major health risks. In such cases, researchers instead use observational data and adjust for the confounding bias statistically with methods such as matching, propensity score weighting, or regression adjustment (§5). In causal research about human behavior and society, there are potentially many latent confounding variables that can be measured from unstructured 5333 text data. Text data could either (a) serve as a surrogate for potential confounders; or (b) the language of text itself could be a confounder. Our running example is an instance of text as a surrogate: a researcher may not have a record of an individual’s occupation but could attempt to measure this variable from the individual’s entire history of social media posts (see Fig. 1). An example of text as a direct confounder: the linguistic content of social media posts could influence censorship (treatment) and future posting rates (outcome) (Roberts et al., 2020). A challenging aspect of this research design is the high-dimensional nature of text. Other work has explored general methods for adjusting for highdimensional confounders (D’Amour et al., 2017; Rassen et al., 2011; Louizos et al., 2017; Li et al., 2016; Athey et al., 2017). However, text data differ from other high-dimensional data-types because intermediate confounding adjustments can be read and evaluated by humans (§6) and designing meaningful representations of text is still an open research question.1 Even when applying simple adjustment methods, a practitioner must first transform text into a lower-dimensional representation via, for example, filtered word counts, lexicon indicators, topic models, or embeddings (§4). An additional challenge is that empirical evaluation in causal inference is still an open research area (Dorie et al., 2019; Gentzel et al., 2019) and text adds to the difficulty of this evaluation (§7). We narrow the scope of this paper to review methods and applications with text data as a causal confounder. In the broader area of text and causal inference, work has examined text as a mediator (Veitch et al., 2019), text as treatment (Fong and Grimmer, 2016; Egami et al.; Wood-Doughty et al., 2018; Tan et al., 2014), text as outcome (Egami et al.), causal discovery from text (Mani and Cooper, 2000), and predictive (Granger) causality with text (Balashankar et al., 2019; del Prado Martin and Brendel, 2016; Tabari et al., 2018). Outside of this prior work, there has been relatively little interaction between natural language processing (NLP) research and causal inference. NLP has a rich history of applied modeling and diagnostic pipelines that causal inference could draw upon. Because applications and methods for text 1For instance, there have been four workshops on representation learning at major NLP conferences in the last four years (Blunsom et al., 2016, 2017; Augenstein et al., 2018, 2019). as a confounder have been scattered across many different communities, this review paper aims to gather and unify existing approaches and to concurrently serve three different types of researchers and their respective goals: • For applied practitioners, we collect and categorize applications with text as a causal confounder (Table 1 and §2), and we provide a flowchart of analysts’ decisions for this problem setting (Fig. 2). • For causal inference researchers working with text data, we highlight recent work in representation learning in NLP (§4) and caution that this is still an open research area with questions of the sensitivity of effects to choices in representation. We also outline existing interpretable evaluation methods for adjustments of text as a causal confounder (§6). • For NLP researchers working with causal inference, we summarize some of the most-used causal estimators that condition on confounders: matching, propensity score weighting, regression adjustment, doubly-robust methods, and causally-driven representation learning (§5). We also discuss evaluation of methods with constructed observational studies and semi-synthetic data (§7). 2 Applications In Table 1, we gather and summarize applications that use text to adjust for potential confounding. This encompasses both (a) text as a surrogate for confounders, or (b) the language itself as confounders.2 As an example, consider Kiciman et al. (2018) where the goal is to estimate the size of the causal effect of alcohol use (treatment) on academic success (outcome) for college students. Since randomly assigning college students to binge drink is not feasible or ethical, the study instead uses observational data from Twitter, which also has the advantage of a large sample size of over sixty-three thousand students. They use heuristics to identify 2We acknowledge that Table 1 is by no means exhaustive. To construct Table 1, we started with three seed papers: Roberts et al. (2020), Veitch et al. (2019), and Wood-Doughty et al. (2018). We then examined papers cited by these papers, papers that cited these papers, and papers published by the papers’ authors. We repeated this approach with the additional papers we found that adjusted for confounding with text. We also examined papers matching the query “causal” or “causality” in the ACL Anthology. 5334 Paper Treatment Outcome(s) Confounder Text data Text rep. Adjustment method Johansson et al. (2016) Viewing device (mobile or desktop) Reader’s experience News content News Word counts Causal-driven rep. learning De Choudhury et al. (2016) Word use in mental health community User transitions to post in suicide community Previous text written in a forum Social media (Reddit) Word counts Stratified propensity score matching De Choudhury and Kiciman (2017) Language of comments User transitions to post in suicide community User’s previous posts and comments received Social media (Reddit) Unigrams and bigrams Stratified propensity score matching Falavarjani et al. (2017) Exercise (Foursquare checkins) Shift in topical interest on Twitter Pre-treatment topical interest shift Social media (Twitter, Foursquare) Topic models Matching Olteanu et al. (2017) Current word use Future word use Past word use Social media (Twitter) Top unigrams and bigrams Stratified propensity score matching Pham and Shen (2017) Group vs. individual loan requests Time until borrowers get funded Loan description Microloans (Kiva) Pre-trained embeddings + neural networks A-IPTW, TMLE Kiciman et al. (2018) Alcohol mentions College success (e.g. study habits, risky behaviors, emotions) Previous posts Social media (Twitter) Word counts Stratified propensity score matching Sridhar et al. (2018) Exercise Mood Mood triggers Users’ text on mood logging apps Word counts Propensity score matching Saha et al. (2019) Self-reported usage of psychiatric medication Mood, cognition, depression, anxiety, psychosis, and suicidal ideation Users’ previous posts Social media (Twitter) Word counts + lexicons + supervised classifiers Stratified propensity score matching Sridhar and Getoor (2019) Tone of replies Changes in sentiment Speaker’s political ideology Debate transcripts Topic models + lexicons Regression adjustment, IPTW, A-IPTW Veitch et al. (2019) Presence of a theorem Rate of acceptance Subject of the article Scientific articles BERT Causal-driven rep. learning + Regression adjustment, TMLE Roberts et al. (2020) Perceived gender of author Number of citations Content of article International Relations articles Topic models + propensity score Coarsened exact matching Roberts et al. (2020) Censorship Subsequent censorship and posting rate Content of posts Social media (Weibo) Topic models + propensity score Coarsened exact matching Table 1: Example applications that infer the causal effects of treatment on outcome by measuring confounders (unobserved) from text data (observed). In doing so, these applications choose a representation of text (text rep.) and a method to adjust for confounding. the Twitter accounts of college-age students and extract alcohol mentions and indicators of college success (e.g., study habits, risky behaviors, and emotions) from their Twitter posts. They condition on an individual’s previous posts (temporally previous to measurements of treatment and outcome) as confounding variables since they do not have demographic data. They represent text as word counts and use stratified propensity score matching to adjust for the confounding bias. The study finds the effects of alcohol use include decreased mentions of study habits and positive emotions and increased mentions of potentially risky behaviors. Text as a surrogate for confounders. Traditionally, causal research that uses human subjects as the unit of analysis would infer demographics via surveys. However, with the proliferation of the web and social media, social research now includes large-scale observational data that would be challenging to obtain using surveys (Salganik, 2017). This type of data typically lacks demographic information but may contain large amounts of text written by participants from which demographics can be extracted. In this space, some researchers are specific about the confounders they want to extract such as an individual’s ideology (Sridhar and Getoor, 2019) or mood (Sridhar et al., 2018). Other researchers condition on all the text they have available and assume that low-dimensional summaries capture all possible confounders. For example, researchers might assume that text encodes all possible confounders between alcohol use and college success (Kiciman et al., 2018) or psychiatric medication and anxiety (Saha et al., 2019). We dissect and comment on this assumption in Section 8. Open problems: NLP systems have been shown to be inaccurate for low-resource languages (Duong et al., 2015), and exhibit racial and gender disparity (Blodgett and O’Connor, 2017; Zhao et al., 2017). Furthermore, the ethics of predicting psychological indicators, such as mental health status, from text are questionable (Chancellor et al., 2019). It is unclear how to mitigate these disparities when trying to condition on demographics from text and how NLP errors will propagate to causal estimates. Language as confounders. There is growing interest in measuring language itself (e.g. the sentiment or topical content of text) as causal confounders. For example, Roberts et al. (2020) examine how the perceived gender of an author affects the number of citations that an article receives. However, an article’s topics (the confounders) are likely to influence the perceived gender of its author (reflecting an expectation that women write about certain topics) and the number of citations of that article (“hotter” topics will receive more 5335 Figure 2: This chart is a guide to design decisions for applied research with causal confounders from text. Step 1: Encode domain assumptions by drawing a causal diagram (§3). If the application does not use text to measure latent confounders, the causal effects are not identifiable or the application is outside the scope of this review. Step 2: Use NLP to measure confounders from text (§4). Step 3: Choose a method that adjusts for confounding in causal estimates (§5). Evaluation should include (A) sensitivity analysis (§4), (B) human evaluation of adjustments when appropriate (§6), and (C) evaluation of recovering the true causal effects (§7). citations). Other domains that analyze language as a confounder include news (Johansson et al., 2016), social media (De Choudhury et al., 2016; Olteanu et al., 2017), and loan descriptions (Pham and Shen, 2017). See Section 4 for more discussion on the challenges and open problems of inferring these latent aspects of language. 3 Estimating causal effects Two predominant causal inference frameworks are structural causal models (SCM) (Pearl, 2009b) and potential outcomes (Rubin, 1974, 2005), which are complementary and theoretically connected (Pearl, 2009b; Richardson and Robins, 2013; Morgan and Winship, 2015). While their respective goals substantially overlap, methods from structural causal models tend to emphasize conceptualizing, expressing, and reasoning about the effects of possible causal relationships among variables, while methods from potential outcomes tend to emphasize estimating the size or strength of causal effects. 3.1 Potential outcomes framework In the ideal causal experiment, for each each unit of analysis, i (e.g., a person), one would like to measure the outcome, yi (e.g., an individual’s life expectancy), in both a world in which the unit received treatment, ti = 1 (e.g., the person smoked), as well as in the counterfactual world in which the same unit did not receive treatment, ti = 0 (e.g the same person did not smoke).3 A fundamental challenge of causal inference is that one cannot simultaneously observe treatment and non-treatment for 3In this work, we only address binary treatments, but multivalue treatments are also possible (e.g., Imbens (2000)). a single individual (Holland, 1986). The most common population-level estimand of interest is the average treatment effect (ATE).4 In the absence of confounders, this is simply the difference in means between the treatment and control groups, τ = E(yi|ti = 1) −E(yi|ti = 0), and the “unadjusted” or “naive” estimator is ˆτnaive = 1 n1 X i:ti=1 yi −1 n0 X j:tj=0 yj (1) where n1 is the number of units that have received treatment and n0 is the number of units that have not received treatment. However, this equation will be biased if there are confounders, zi, that influence both treatment and outcome. 3.2 Structural causal models framework Structural causal models (SCMs) use a graphical formalism that depicts nodes as random variables and directed edges as the direct causal dependence between these variables. The typical estimand of choice for SCMs is the probability distribution of an outcome variable Y given an intervention on a treatment variable T: P(Y | do(T = t)) (2) in which the do-notation represents intervening to set variable T to the value t and thereby removing all incoming arrows to the variable T. Identification. In most cases, Equation 2 is not equal to the ordinary conditional distribution 4Other estimands include the average treatment effect on the treated (ATT) and average treatment effect on the control (ATC) (Morgan and Winship, 2015) 5336 Figure 3: A causal diagram showing common causal relationships. P(Y | T = t) since the latter is simply filtering to the sub-population and the former is changing the underlying data distribution via intervention. Thus, for observational studies that lack intervention, one needs an identification strategy in order to represent P(Y | do(T = t)) in terms of distributions of observed variables. One such identification strategy (assumed by the applications throughout this review) is the backdoor criterion which applies to a set of variables, S, if they (i) block every backdoor path between treatment and outcome, and (ii) no node in S is a descendant of treatment. Without positive identification, the causal effects cannot be estimated and measuring variables from text is a secondary concern. Drawing the causal graph. Causal graphs help clarify which variables should and should not be conditioned on. The causal graphs in Figure 3 illustrate how the direction of the arrows differentiates confounder, collider, and mediator variables. Identifying the differences in these variables is crucial since, by d-separation, conditioning on a confounder will block the treatment-confounderoutcome path, removing bias. By contrast, conditioning on a collider can create dependence between treatment-collider-outcome5 (Pearl, 2009a) potentially introducing more bias (Montgomery et al., 2018; Elwert and Winship, 2014). Mediator variables require a different set of adjustments than confounders to find the “natural direct effect” between treatment and outcome (VanderWeele, 2015; Pearl, 2014). A practitioner typically draws a causal graph by explicitly encoding theoretical and domain assumptions as well as the results of prior 5In Pearl et al. (2016)’s example of a collider, suppose scholarships at a college are only given to two types of students: those with unusual musical talents and high grade point averages. In the general population, musical and academic talent are independent. However, if one discovers a person is on a scholarship (conditioning on the collider) then knowing a person lacks musical talent tells us that they are extremely likely to have a high GPA. data analyses.6 Open Problems: When could text potentially encode confounders and colliders simultaneously? If so, is it possible to use text to adjust exclusively for confounders? 4 Measuring confounders via text After drawing the causal graph, the next step is to use available text data to recover latent confounders. Some approaches pre-specify the confounders of interest and measure them from text, P(z | x). Others learn confounders inductively and use a low-dimensional representation of text as the confounding variable z in subsequent causal adjustments. Pre-specified confounders. When a practitioner can specify confounders they want to measure from text (e.g., extracting “occupation” from text in our smoking example), they can use either (1) lexicons or (2) trained supervised classifiers as the instrument of measurement. Lexicons are word lists that can either be hand-crafted by researchers or taken off-the-shelf. For example, Saha et al. (2019) use categories of the Linguistic Inquiry and Word Count (LIWC) lexicon (Pennebaker et al., 2001) such as tentativeness, inhibition, and negative affect, and use indicators of these categories in the text as confounders. Trained supervised classifiers use annotated training examples to predict confounders. For instance, Saha et al. (2019) also build machine learning classifiers for users’ mental states (e.g., depression and anxiety) and apply these classifiers on Twitter posts that are temporally prior to treatment. If these classifiers accurately recover mental states and there are no additional latent confounders, then conditioning on the measured mental states renders treatment independent of potential outcomes. Open problems: Since NLP methods are still far from perfectly accurate, how can one mitigate error that arises from approximating confounding variables? Closely related to this question is effect restoration which addresses error from using proxy variables (e.g., a father’s occupation) in place of true confounders (e.g, socioeconomic status) (Kuroki and Pearl, 2014; Oktay et al., 2019). Wood6See Morgan and Winship (2015) pgs. 33-34 on both the necessity and difficulty of specifying a causal graph for applied social research. Time-ordering can be particularly helpful when encoding causal relationships (for instance, there cannot be an arrow pointing from variable A to variable B if B preceded A in time). 5337 Doughty et al. (2018) build upon effect restoration for causal inference with text classifiers, but there are still open problems in accounting for error arising from other text representations and issues of calibration (Nguyen and O’Connor, 2015) and prevalence estimation (Card and Smith, 2018; Keith and O’Connor, 2018) in conjunction with NLP. Ideas from the large literature on measurement error models may also be helpful (Fuller, 1987; Carroll et al., 2006; Buonaccorsi, 2010). Inductively derived confounders. Other researchers inductively learn confounders in order to condition on all aspects of text, known and unknown. For example, some applications condition on the entirety of news (Johansson et al., 2016) or scientific articles (Veitch et al., 2019; Roberts et al., 2020). This approach typically summarizes textual information with text representations common in NLP. Ideally, this would encode all aspects of language (meaning, topic, style, affect, etc.), though this is an extremely difficult, open NLP problem. Typical approaches include the following. (1) Bag-of-words representations discard word order and use word counts as representations. (2) Topic models are generative probabilistic models that learn latent topics in document collections and represent documents as distributions over topics (Blei et al., 2003; Boyd-Graber et al., 2014; Roberts et al., 2014). (3) Embeddings are continuous, vector-based representations of text. To create vector representations of longer texts, off-the-shelf word embeddings such as word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) or combined via variants of weighted averaging (Arora et al., 2017) or neural models (Iyyer et al., 2015; Bojanowski et al., 2017; Yang et al., 2016). (4) Recently, fine-tuned, large-scale neural language models such as BERT (Devlin et al., 2019) have achieved state-of-the-art performance on semantic benchmarks, and are now used as text representations. Each of these text representations is a real-valued vector that is used in place of the confounder, z, in a causal adjustment method (§5) Open problems: Estimates of causal effects are contingent on the “garden of forking paths” of data analysis, meaning any “paths” an analyst did not take could have resulted in different conclusions (Gelman and Loken, 2013). For settings with causal confounders from text, the first fork is the choice of representation (e.g., topic models or embeddings) and the second fork is the pre-processing and hyperparameter decisions for the chosen representations. We highlight that these decisions have been shown to alter results in predictive tasks. For instance, studies have shown that pre-processing decisions dramatically change topic models (Denny and Spirling, 2018; Schofield et al., 2017); embeddings are sensitive to hyperparameter tuning (Levy et al., 2015) and the construction of the training corpus (Antoniak and Mimno, 2018); and fine-tuned language model performance is sensitive to random restarts (Phang et al., 2018). Thus, reporting sensitivity analysis of the causal effects from these decisions seems crucial: how robust are the results to variations in modeling specifications? 5 Adjusting for confounding bias Given a set of variables Z that satisfy the backdoor criterion (§3.2), one can use the backdoor adjustment to estimate the causal quantity of interest, P(Y = y | do(T = t)) = Z P(Y = y | T = t, Z = z) P(Z = z) dz (3) Conditioning on all confounders is often impractical in high-dimensional settings such as those found in natural language. We provide an overview of methods used by applications in this review that approximate such conditioning, leading to unbiased estimates of treatment effect; however, we acknowledge this is not an exhaustive list of methods and direct readers to more extensive guides (Morgan and Winship, 2015; Athey et al., 2017). Open problems: Causal studies typically make an assumption of overlap, also known as common support or positivity, meaning that any individual has a non-zero probability of assignment to each treatment condition for all possible values of the covariates: ∀z, 0 < P(T = 1 | Z = z) < 1. D’Amour et al. (2017) show that as the dimensionality of covariates grows, strict overlap converges to zero. What are the implications of these results for high-dimensional text data? 5.1 Propensity scores A propensity score estimates the conditional probability of treatment given a set of possible confounders (Rosenbaum and Rubin, 1984, 1983; Caliendo and Kopeinig, 2008). The true model of treatment assignment is typically unknown so one must estimate the propensity score from data (e.g., from a logistic regression model), 5338 π ≡P(T = 1 | Z). (4) Inverse Probability of Treatment Weighting (IPTW) assigns a weight to each unit based on the propensity score (Lunceford and Davidian, 2004), wi = ti/ˆπi + (1 −ti)/(1 −ˆπi), (5) thus emphasizing, for example, treated units that were originally unlikely to be treated (ti = 1, low πi). The ATE is calculated with weighted averages between the treatment and control groups,7 ˆτIPTW = 1 n1 X i:ti=1 wiyi −1 n0 X j:tj=0 wjyj (6) 5.2 Matching and stratification Matching aims to create treatment and control groups with similar confounder assignments; for example, grouping units by observed variables (e.g., age, gender, occupation), then estimating effect size within each stratum (Stuart, 2010). Exact matching on confounders is ideal but nearly impossible to obtain with high-dimensional confounders, including those from text. A framework for matching with text data is described by Mozer et al. (2020) and requires choosing: (1) a text representation (§4); (2) a distance metric (cosine, Eucliean, absolute difference in propensity score etc.); and (3) a matching algorithm. As Stuart (2010) describes, the matching algorithm involves additional decisions about (a) greedy vs. optimal matching; (b) number of control items per treatment item; (c) using calipers (thresholds of maximum distance); and (d) matching with or without replacement. Coarsened exact matching (CEM) matches on discretized raw values of the observed confounders (Iacus et al., 2012). Instead of directly matching on observed variables, stratified propensity-score matching partitions propensity scores into intervals (strata) and then all units are compared within a single strata (Caliendo and Kopeinig, 2008). Stratification is also known as interval matching, blocking, and subclassification. Once the matching algorithm is implemented, counterfactuals (estimated potential outcomes) are obtained from the matches Mi for each unit i: ˆyi(k) = ( yi if ti = k 1 |Mi| P j∈Mi yj if ti ̸= k (7) 7Lunceford and Davidian (2004) note there are two versions of IPTW, where both the weighted sum and the raw count have been used for the n0 and n1 denominators. which is plugged into the matching estimator,8 ˆτmatch = 1 n n X i  ˆyi(1) −ˆyi(0)  . (8) Open problems: Ho et al. (2007) describe matching as a method to reduce model dependence because, unlike regression, it does not rely on a parameteric form. Yet, estimated causal effects may still be sensitive to other matching method decisions such as the number of bins in coarsened exact matching, the number of controls to match with each treatment in the matching algorithm, or the choice of caliper. Are causal estimates made using textual covariates particularly sensitive or robust to such choices? 5.3 Regression adjustment Regression adjustment fits a supervised model from observed data about the expected conditional outcomes q(t, z) ≡E(Y | T = t, Z = z) (9) Then the learned conditional outcome, ˆq, is used to predict counterfactual outcomes for each observation under treatment and control regimes, ˆτreg = 1 n n X i (ˆq(1, zi) −ˆq(0, zi)) (10) 5.4 Doubly-robust methods Unlike methods that model only treatment (IPTW) or only outcome (regression adjustment), doubly robust methods model both treatment and outcome, and have the desirable property that if either the treatment or outcome models are unbiased then the effect estimate will be unbiased as well. These methods often perform very well in practice (Dorie et al., 2019). Adjusted inverse probability of treatment weighting (A-IPTW) combines estimated propensity scores (Eqn. 4) and conditional outcomes (Eqn. 9), while the more general targeted maximum likelihood estimator (TMLE) updates the conditional outcome estimate with a regression on the propensity weights (Eqn. 5) and ˆq (Van der Laan and Rose, 2011). 5.5 Causal-driven representation learning Several research efforts design representations of text specifically for causal inference goals. These 8For alternative matching estimators see Abadie et al. (2004). This estimator is techinally the sample average treatment effect (SATE), not the population-level ATE, since we have pruned treatment and control pairs that do not have matches (Morgan and Winship, 2015). 5339 approaches still initialize their models with representations of text described in Section 4, but then the representations are updated with machine learning architectures that incorporate the observed treatment assignment and other causal information. Johansson et al. (2016) design a network with a multitask objective that aims for low prediction error for the conditional outcome estimates, q, and minimizes the discrepancy distance between q(1, zi) and q(0, zi) in order achieve balance in the confounders. Roberts et al. (2020) combine structural topic models (STM; Roberts et al. (2014)), propensity scores, and matching. They use the observed treatment assignment as the content covariate in the STM, append an estimated propensity score to the topic-proportion vector for each document, and then perform coarsened exact matching on that vector. Veitch et al. (2019) fine-tune a pre-trained BERT network with a multi-task loss objective that estimates (a) the original masked languagemodeling objective of BERT, (b) propensity scores, and (c) conditional outcomes for both treatment and control. They use the predicted conditional outcomes and propensity scores in regression adjustment and the TMLE formulas. Open problems: These methods have yet to be compared to one another on the same benchmark evaluation datasets. Also, when are the causal effects sensitive to hyperparameter and network architecture choices and what should researchers do in these settings? 6 Human evaluation of intermediate steps Text data has the advantage of being interpretable— matched pairs and some low-dimensional representations of text can be read by humans to evaluate their quality. When possible, we suggest practitioners use (1) interpretable balance metrics and/or (2) human judgements of treatment propensity to evaluate intermediate steps of the causal estimation pipeline. 6.1 Interpretable balance metrics For matching and propensity score methods, the confounder balance should be assessed, since ideally P(Z | T = 1) = P(Z | T = 0) in a matched sample (Stuart, 2010). A standard numerical balance diagnostic is the standardized difference in means (SDM), SDM(j) = 1 n1 P i:ti=1 zij −1 n0 P i:ti=0 zij σt=1 j where zij is a single confounder j for a single unit i and σt=1 j is the standard deviation of zij for all i such that ti = 1. SDM can also be used to evaluate the propensity score, in which case there would only be a single j (Rubin, 2001). For causal text applications, Roberts et al. (2020) and Sridhar and Getoor (2019) estimate the difference in means for each topic in a topic-model representation of confounders and Sridhar et al. (2018) estimate the difference in means across structured covariates but not the text itself. As an alternative to SDM, Roberts et al. (2020) use string kernels to perform similarity checks. Others use domain-specific, known structured confounders to evaluate the balance between treatment and control groups. For instance, De Choudhury and Kiciman (2017) sample treatment-control pairs across all propensity score strata and label the sampled text based on known confounders (in their case, from a previously-validated codebook of suicidal ideation risk markers). Open problems: For embeddings and causallydriven representations, each dimension in the confounder vector z is not necessarily meaningful. How can balance metrics be used in this setting? 6.2 Judgements of treatment propensity When possible, one can also improve validation by evaluating matched items (posts, sentences, documents etc.) to humans for evaluation. Humans can either (a) use a scale (e.g., a 1-5 Likert scale) to rate items individually on their propensity for treatment, or (b) assess similarity of paired items after matching. A simple first step is for analysts to do “inhouse” evaluation on a small sample (e.g., Roberts et al. (2020)), but a larger-sample experiments on crowd-working platforms can also increase the validity of these methods (e.g., Mozer et al. (2020)). Open problems: How can these human judgement experiments be improved and standardized? Future work could draw from a rich history in NLP of evaluating representations of topic models and embeddings (Wallach et al., 2009; Bojanowski et al., 2017; Schnabel et al., 2015) and evaluating semantic similarity (Cer et al., 2017; Bojanowski et al., 2017; Reimers and Gurevych, 2019). 7 Evaluation of causal methods Because the true causal effects in real-world causal inference are typically unknown, causal evaluation is a difficult and open research question. As 5340 algorithmic complexity grows, the expected performance of causal methods can be difficult to estimate theoretically (Jensen, 2019). Other causal evaluations involve synthetic data. However, as Gentzel et al. (2019) discuss, synthetic data has no “unknown unknowns” and many researcher degrees of freedom, which limits their effectiveness. Thus, we encourage researchers to evaluate with constructed observational studies or semi-synthetic datasets, although measuring latent confounders from text increases the difficulty of creating realistic datasets that can be used for empirical evaluation of causal methods. 7.1 Constructed observational studies Constructed observational studies collect data from both randomized and non-randomized experiments with similar participants and settings. Evaluations of this kind include job training programs in economics (LaLonde, 1986; Glynn and Kashin, 2013), advertisement marketing campaigns (Gordon et al., 2019), and education (Shadish et al., 2008). For instance, Shadish et al. (2008) randomly assign participants to a randomized treatment (math or vocabulary training) and non-randomized treatment (participants choose their own training). They compare causal effect estimates from the randomized study with observational estimates that condition on confounders from participant surveys (e.g., sex, age, marital status, like of mathematics, extroversion, etc.). Open problems: To extend constructed observational studies to text data, one could build upon Shadish et al. (2008) and additionally (a) ask participants to write free-form essays of their past educational and childhood experiences and/or (b) obtain participants’ public social media posts. Then causal estimates that condition on these textual representation of confounders could be compared to both those with surveys and the randomized settings. Alternatively, one could find observational studies with both real covariates and text and (1) randomize treatment conditional on the propensity score model (constructed from the covariates but not the text) and (2) estimate causal effect given only text (not the covariates). Then any estimated non-zero treatment effect is only bias. 7.2 Semi-synthetic datasets Semi-synthetic datasets use real covariates and synthetically generate treatment and outcome, as in the 2016 Atlantic Causal Inference Competition (Dorie et al., 2019). Several applications in this review use real metadata or latent aspects of text to simulate treatment and outcome: Johansson et al. (2016) simulate treatment and outcome from two centroids in topic model space from newswire text; Veitch et al. (2019) use indicators of an article’s “buzzy” keywords; Roberts et al. (2020) use “quantitative methodology” categories of articles that were hand-coded by other researchers. Open problems: Semi-synthetic datasets that use real covariates of text seem to be a better evaluation strategy than purely synthetic datasets. However, with semi-synthetic datasets, researchers could be inadvertently biased to choose metadata that they know their method will recover. A promising future direction is a competition-style evaluation like Dorie et al. (2019) in which one group of researchers generates a causal dataset with text as a confounder and other groups of researchers evaluate their causal methods without access to the data-generating process. 8 Discussion and Conclusion Computational social science is an exciting, rapidly expanding discipline. With greater availability of text data, alongside improved natural language processing models, there is enormous opportunity to conduct new and more accurate causal observational studies by controlling for latent confounders in text. While text data ought to be as useful for measurement and inference as “traditional” lowdimensional social-scientific variables, combining NLP with causal inference methods requires tackling major open research questions. Unlike predictive applications, causal applications have no ground truth and so it is difficult distinguish modeling errors and forking paths from the true causal effects. In particular, we caution against using all available text in causal adjustment methods without any human validation or supervision, since one cannot diagnose any potential errors. Solving these open problems, along with the others presented in this paper, would be a major advance for NLP as a social science methodology. Acknowledgments The authors thank Sam Witty, Jacob Eisenstein, Brandon Stewart, Zach Wood-Doughty, Andrew Halterman, Laura Balzer, and members of the University of Massachusetts Amherst NLP reading group for helpful feedback, as well as the anonymous referees for detailed peer reviews. 5341 References Alberto Abadie, David Drukker, Jane Leber Herr, and Guido W Imbens. 2004. Implementing matching estimators for average treatment effects in stata. The Stata Journal, 4(3):290–311. Maria Antoniak and David Mimno. 2018. Evaluating the stability of embedding-based word similarities. Transactions of the Association for Computational Linguistics, 6:107–119. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In ICLR. Susan Athey, Guido Imbens, Thai Pham, and Stefan Wager. 2017. Estimating average treatment effects: Supplementary analyses and remaining challenges. American Economic Review, 107(5):278–81. Isabelle Augenstein, Kris Cao, He He, Felix Hill, Spandana Gella, Jamie Kiros, Hongyuan Mei, and Dipendra Misra. 2018. Proceedings of the Third Workshop on Representation Learning for NLP. In Proceedings of The Third Workshop on Representation Learning for NLP. Isabelle Augenstein, Spandana Gella, Sebastian Ruder, Katharina Kann, Burcu Can, Johannes Welbl, Alexis Conneau, Xiang Ren, and Marek Rei. 2019. Proceedings of the 4th Workshop on Representation Learning for NLP. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019). Ananth Balashankar, Sunandan Chakraborty, Samuel Fraiberger, and Lakshminarayanan Subramanian. 2019. Identifying predictive causal factors from news streams. In Empirical Methods in Natural Langugage Processing. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3(Jan):993–1022. Su Lin Blodgett and Brendan O’Connor. 2017. Racial disparity in natural language processing: A case study of social media african-american english. In Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) Workshop, KDD. Phil Blunsom, Antoine Bordes, Kyunghyun Cho, Shay Cohen, Chris Dyer, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, and Scott Yih. 2017. Proceedings of the 2nd Workshop on Representation Learning for NLP. In Proceedings of the 2nd Workshop on Representation Learning for NLP. Phil Blunsom, Kyunghyun Cho, Shay Cohen, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, and Scott Wen-tau Yih. 2016. Proceedings of the 1st Workshop on Representation Learning for NLP. In Proceedings of the 1st Workshop on Representation Learning for NLP. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Jordan Boyd-Graber, David Mimno, and David Newman. 2014. Care and feeding of topic models: Problems, diagnostics, and improvements. Handbook of Mixed Membership Models and Their Applications, 225255. John P Buonaccorsi. 2010. Measurement Error: Models, Methods, and Applications. CRC Press. Marco Caliendo and Sabine Kopeinig. 2008. Some practical guidance for the implementation of propensity score matching. Journal of Economic Surveys, 22(1):31–72. Dallas Card and Noah A Smith. 2018. The importance of calibration for estimating proportions from annotations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics. Raymond J Carroll, David Ruppert, Leonard A Stefanski, and Ciprian M Crainiceanu. 2006. Measurement Error in Nonlinear Models: a Modern Perspective. CRC Press. Daniel Cer, Mona Diab, Eneko Agirre, I˜nigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Vancouver, Canada. Association for Computational Linguistics. Stevie Chancellor, Eric PS Baumer, and Munmun De Choudhury. 2019. Who is the human in humancentered machine learning: The case of predicting mental health from social media. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):147. Alexander D’Amour, Peng Ding, Avi Feller, Lihua Lei, and Jasjeet Sekhon. 2017. Overlap in observational studies with high-dimensional covariates. arXiv preprint arXiv:1711.02582. Munmun De Choudhury and Emre Kiciman. 2017. The language of social support in social media and its effect on suicidal ideation risk. In International AAAI Conference on Web and Social Media (ICWSM). Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Discovering shifts to suicidal ideation from mental health content in social media. In Proceedings of the 2016 CHI conference on human factors in computing systems, pages 2098–2110. ACM. Matthew J Denny and Arthur Spirling. 2018. Text preprocessing for unsupervised learning: Why it matters, when it misleads, and what to do about it. Political Analysis, 26(2):168–189. 5342 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Association of Computational Linguistics. Vincent Dorie, Jennifer Hill, Uri Shalit, Marc Scott, and Daniel Cervone. 2019. Automated versus doit-yourself methods for causal inference: Lessons learned from a data analysis competition. Statistical Science, 34(1):43–68. Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015. Low resource dependency parsing: Crosslingual parameter sharing in a neural network parser. In Association for Computational Linguistics. Naoki Egami, Christian J Fong, Justin Grimmer, Margaret E Roberts, and Brandon M Stewart. How to make causal inferences using texts. Working paper. Felix Elwert and Christopher Winship. 2014. Endogenous selection bias: The problem of conditioning on a collider variable. Annual Review of Sociology, 40:31–53. Seyed Amin Mirlohi Falavarjani, Hawre Hosseini, Zeinab Noorian, and Ebrahim Bagheri. 2017. Estimating the effect of exercising on users’ online behavior. In Eleventh International AAAI Conference on Web and Social Media. Christian Fong and Justin Grimmer. 2016. Discovery of treatments from text corpora. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1600–1609. Wayne A Fuller. 1987. Measurement Error Models. John Wiley & Sons. Andrew Gelman and Eric Loken. 2013. The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Department of Statistics, Columbia University. Amanda Gentzel, Dan Garant, and David Jensen. 2019. The case for evaluating causal models using interventional measures and empirical data. In Advances in Neural Information Processing Systems. Adam Glynn and Konstantin Kashin. 2013. Front-door versus back-door adjustment with unmeasured confounding: Bias formulas for front-door and hybrid adjustments. In 71st Annual Conference of the Midwest Political Science Association, volume 3. Brett R Gordon, Florian Zettelmeyer, Neha Bhargava, and Dan Chapsky. 2019. A comparison of approaches to advertising measurement: Evidence from big field experiments at facebook. Marketing Science, 38(2):193–225. MA Hern´an and JM Robins. 2020. Causal Inference: What If. Boca Raton: Chapman Hall/CRC. Daniel E Ho, Kosuke Imai, Gary King, and Elizabeth A Stuart. 2007. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis, 15(3):199– 236. Paul W Holland. 1986. Statistics and causal inference. Journal of the American statistical Association, 81(396):945–960. Stefano M Iacus, Gary King, and Giuseppe Porro. 2012. Causal inference without balance checking: Coarsened exact matching. Political Analysis. Guido W Imbens. 2000. The role of the propensity score in estimating dose-response functions. Biometrika, 87(3):706–710. Guido W Imbens and Donald B Rubin. 2015. Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Association for Computational Linguistics. David Jensen. 2019. Comment: Strengthening empirical evaluation of causal inference methods. Statistical Science, 34(1):77–81. Fredrik Johansson, Uri Shalit, and David Sontag. 2016. Learning representations for counterfactual inference. In ICML. Katherine Keith and Brendan O’Connor. 2018. Uncertainty-aware generative models for inferring document class prevalence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Emre Kiciman, Scott Counts, and Melissa Gasser. 2018. Using longitudinal social media analysis to understand the effects of early college alcohol use. In Twelfth International AAAI Conference on Web and Social Media. Manabu Kuroki and Judea Pearl. 2014. Measurement bias and effect restoration in causal inference. Biometrika, 101(2):423–437. Mark J Van der Laan and Sherri Rose. 2011. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer Science & Business Media. Robert J LaLonde. 1986. Evaluating the econometric evaluations of training programs with experimental data. The American Economic Review, pages 604– 620. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. 5343 Sheng Li, Nikos Vlassis, Jaya Kawale, and Yun Fu. 2016. Matching via dimensionality reduction for estimation of treatment effects in digital marketing campaigns. In IJCAI. Christos Louizos, Uri Shalit, Joris M Mooij, David Sontag, Richard Zemel, and Max Welling. 2017. Causal effect inference with deep latent-variable models. In Advances in Neural Information Processing Systems. Jared K Lunceford and Marie Davidian. 2004. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Statistics in Medicine, 23(19):2937–2960. Subramani Mani and Gregory F Cooper. 2000. Causal discovery from medical textual data. In Proceedings of the AMIA Symposium, page 542. American Medical Informatics Association. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. Jacob M Montgomery, Brendan Nyhan, and Michelle Torres. 2018. How conditioning on posttreatment variables can ruin your experiment and what to do about it. American Journal of Political Science, 62(3):760–775. Stephen L Morgan and Christopher Winship. 2015. Counterfactuals and Causal Inference. Cambridge University Press. Reagan Mozer, Luke Miratrix, Aaron Russell Kaufman, and L Jason Anastasopoulos. 2020. Matching with text data: An experimental evaluation of methods for matching documents and of measuring match quality. Political Analysis. Khanh Nguyen and Brendan O’Connor. 2015. Posterior calibration and exploratory analysis for natural language processing models. In Empirical Methods in Natural Langugage Processing. H¨useyin Oktay, Akanksha Atrey, and David Jensen. 2019. Identifying when effect restoration will improve estimates of causal effect. In Proceedings of the 2019 SIAM International Conference on Data Mining, pages 190–198. SIAM. Alexandra Olteanu, Onur Varol, and Emre Kiciman. 2017. Distilling the outcomes of personal experiences: A propensity-scored analysis of social media. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, pages 370–386. ACM. Judea Pearl. 2000. Causality: Models, Reasoning and Inference. Springer. Judea Pearl. 2009a. Causal inference in statistics: An overview. Statistics Surveys, 3:96–146. Judea Pearl. 2009b. Causality: Models, Reasoning and Inference, Second edition. Cambridge University Press. Judea Pearl. 2014. Interpretation and identification of causal mediation. Psychological Methods, 19(4):459. Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. 2016. Causal Inference in Statistics: A Primer. John Wiley & Sons. James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Associates, 71(2001):2001. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Langugage Processing. Thai T Pham and Yuanyuan Shen. 2017. A deep causal inference approach to measuring the effects of forming group loans in online non-profit microfinance platform. arXiv preprint arXiv:1706.02795. Jason Phang, Thibault F´evry, and Samuel R Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088. Fermin Moscoso del Prado Martin and Christian Brendel. 2016. Case and cause in icelandic: Reconstructing causal networks of cascaded language changes. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2421–2430. Jeremy A Rassen, Robert J Glynn, M Alan Brookhart, and Sebastian Schneeweiss. 2011. Covariate selection in high-dimensional propensity score analyses of treatment effects in small samples. American Journal of Epidemiology, 173(12):1404–1413. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using siamese BERTnetworks. In Empirical Methods in Natural Langugage Processing. Thomas S Richardson and James M Robins. 2013. Single world intervention graphs (SWIGs): A unification of the counterfactual and graphical approaches to causality. Center for the Statistics and the Social Sciences, University of Washington Series. Working Paper, (128). Margaret E Roberts, Brandon M Stewart, and Richard A Nielsen. 2020. Adjusting for confounding with text matching. American Journal of Political Science (forthcoming). Margaret E Roberts, Brandon M Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G Rand. 2014. Structural topic models for 5344 open-ended survey responses. American Journal of Political Science, 58(4):1064–1082. Paul R Rosenbaum and Donald B Rubin. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55. Paul R Rosenbaum and Donald B Rubin. 1984. Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association, 79(387):516–524. Donald B Rubin. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688. Donald B Rubin. 2001. Using propensity scores to help design observational studies: application to the tobacco litigation. Health Services and Outcomes Research Methodology, 2(3-4):169–188. Donald B Rubin. 2005. Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469):322–331. Koustuv Saha, Benjamin Sugar, John Torous, Bruno Abrahao, Emre Kıcıman, and Munmun De Choudhury. 2019. A social media study on the effects of psychiatric medication use. In Proceedings of the International AAAI Conference on Web and Social Media, volume 13, pages 440–451. Matthew Salganik. 2017. Bit By Bit: Social Research in the Digital Age. Princeton University Press. Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Empirical Methods in Natural Langugage Processing. Alexandra Schofield, M˚ans Magnusson, and David Mimno. 2017. Pulling out the stops: Rethinking stopword removal for topic models. In Association for Computational Linguistics. William R Shadish, Margaret H Clark, and Peter M Steiner. 2008. Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. Journal of the American Statistical Association, 103(484):1334–1344. Dhanya Sridhar and Lise Getoor. 2019. Estimating causal effects of tone in online debates. In IJCAI. Dhanya Sridhar, Aaron Springer, Victoria Hollis, Steve Whittaker, and Lise Getoor. 2018. Estimating causal effects of exercise from mood logging data. In IJCAI/ICML Workshop on CausalML. Elizabeth A Stuart. 2010. Matching methods for causal inference: A review and a look forward. Statistical Science, 25(1):1. Narges Tabari, Piyusha Biswas, Bhanu Praneeth, Armin Seyeditabari, Mirsad Hadzikadic, and Wlodek Zadrozny. 2018. Causality analysis of twitter sentiments and stock market returns. In Proceedings of the First Workshop on Economics and Natural Language Processing. Association for Computational Linguistics. Chenhao Tan, Lillian Lee, and Bo Pang. 2014. The effect of wording on message propagation: Topicand author-controlled natural experiments on twitter. In Association for Computational Linguistics. Tyler VanderWeele. 2015. Explanation in Causal Inference: Methods for Mediation and Interaction. Oxford University Press. Victor Veitch, Dhanya Sridhar, and David M Blei. 2019. Using text embeddings for causal inference. arXiv preprint arXiv:1905.12741. Hanna M Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. 2009. Evaluation methods for topic models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1105–1112. ACM. Zach Wood-Doughty, Ilya Shpitser, and Mark Dredze. 2018. Challenges of using text classifiers for causal inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4586–4598. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
2020
474
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5345–5357 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5345 Text-Based Ideal Points Keyon Vafa, Suresh Naidu, David M. Blei Columbia University fkeyon.vafa,sn2430,[email protected] Abstract Ideal point models analyze lawmakers’ votes to quantify their political positions, or ideal points. But votes are not the only way to express a political position. Lawmakers also give speeches, release press statements, and post tweets. In this paper, we introduce the text-based ideal point model (tbip), an unsupervised probabilistic topic model that analyzes texts to quantify the political positions of its authors. We demonstrate the tbip with two types of politicized text data: U.S. Senate speeches and senator tweets. Though the model does not analyze their votes or political affiliations, the tbip separates lawmakers by party, learns interpretable politicized topics, and infers ideal points close to the classical vote-based ideal points. One benefit of analyzing texts, as opposed to votes, is that the tbip can estimate ideal points of anyone who authors political texts, including non-voting actors. To this end, we use it to study tweets from the 2020 Democratic presidential candidates. Using only the texts of their tweets, it identifies them along an interpretable progressive-tomoderate spectrum. 1 Introduction Ideal point models are widely used to help characterize modern democracies, analyzing lawmakers’ votes to estimate their positions on a political spectrum (Poole and Rosenthal, 1985). But votes aren’t the only way that lawmakers express political preferences—press releases, tweets, and speeches all help convey their positions. Like votes, these signals are recorded and easily collected. This paper develops the text-based ideal point model (tbip), a probabilistic topic model for analyzing unstructured political texts to quantify the political preferences of their authors. While classical ideal point models analyze how different people vote on a shared set of bills, the tbip analyzes how different authors write about a shared set of latent topics. The tbip is inspired by the idea of political framing: the specific words and phrases used when discussing a topic can convey political messages (Entman, 1993). Given a corpus of political texts, the tbip estimates the latent topics under discussion, the latent political positions of the authors of texts, and how per-topic word choice changes as a function of the political position of the author. A key feature of the tbip is that it is unsupervised. It can be applied to any political text, regardless of whether the authors belong to known political parties. It can also be used to analyze non-voting actors, such as political candidates. Figure 1 shows a tbip analysis of the speeches of the 114th U.S. Senate. The model lays the senators out on the real line and accurately separates them by party. (It does not use party labels in its analysis.) Based only on speeches, it has found an interpretable spectrum—Senator Bernie Sanders is liberal, Senator Mitch McConnell is conservative, and Senator Susan Collins is moderate. For comparison, Figure 2 also shows ideal points estimated from the voting record of the same senators; their language and their votes are closely correlated. The tbip also finds latent topics, each one a vocabulary-length vector of intensities, that describe the issues discussed in the speeches. For each topic, the tbip involves both a neutral vector of intensities and a vector of ideological adjustments that describe how the intensities change as a function of the political position of the author. Illustrated in Table 1 are discovered topics about immigration, health care, and gun control. In the gun control topic, the neutral intensities focus on words like “gun” and “firearms.” As the author’s ideal point becomes more negative, terms like “gun violence” and “background checks” increase in intensity. As the author’s ideal point becomes more positive, terms like “constitutional rights” increase. The tbip is a bag-of-words model that combines ideas from ideal point models and Poisson factor5346 Bernie Sanders (I-VT) Elizabeth Warren (D-MA) Sherrod Brown (D-OH) Chuck Schumer (D-NY) Amy Klobuchar (D-MN) Susan Collins (R-ME) Mark Warner (D-VA) Jeff Sessions (R-AL) Rand Paul (R-KY) Ben Sasse (R-NE) Marco Rubio (R-FL) Mitch McConnell (R-KY) John McCain (R-AZ) Figure 1. The text-based ideal point model (tbip) separates senators by political party using only speeches. The algorithm does not have access to party information, but senators are coded by their political party for clarity (Democrats in blue circles, Republicans in red x’s). The speeches are from the 114th U.S. Senate. ization topic models (Canny, 2004; Gopalan et al., 2015). The latent variables are the ideal points of the authors, the topics discussed in the corpus, and how those topics change as a function of ideal point. To approximate the posterior, we use an efficient black box variational inference algorithm with stochastic optimization. It scales to large corpora. We develop the details of the tbip and its variational inference algorithm. We study its performance on three sessions of U.S. Senate speeches, and we compare the tbip to other methods for scaling political texts (Slapin and Proksch, 2008; Lauderdale and Herzog, 2016a). The tbip performs best, recovering ideal points closest to the vote-based ideal points. We also study its performance on tweets by U.S. senators, again finding that it closely recovers their vote-based ideal points. (In both speeches and tweets, the differences from vote-based ideal points are also qualitatively interesting.) Finally, we study the tbip on tweets by the 2020 Democratic candidates for President, for which there are no votes for comparison. It lays out the candidates along an interpretable progressiveto-moderate spectrum. 2 The text-based ideal point model We develop the text-based ideal point model (tbip), a probabilistic model that infers political ideology from political texts. We first review Bayesian ideal points and Poisson factorization topic models, two probabilistic models on which the tbip is built. 2.1 Background: Bayesian ideal points Ideal points quantify a lawmaker’s political preferences based on their roll-call votes (Poole and Rosenthal, 1985; Jackman, 2001; Clinton et al., 2004). Consider a group of lawmakers voting “yea” or “nay” on a shared set of bills. Denote the vote of lawmaker i on bill j by the binary variable vij . The Bayesian ideal point model posits scalar perlawmaker latent variables xi and scalar per-bill latent variables .˛j ; j /. It assumes the votes come from a factor model, xi  N .0; 1/ ˛j ; j  N .0; 1/ vij  Bern..˛j C xij //: (1) where .t/ D 1 1Cet . The latent variable xi is called the lawmaker’s ideal point; the latent variable j is the bill’s polarity. When xi and j have the same sign, lawmaker i is more likely to vote for bill j; when they have opposite sign, the lawmaker is more likely to vote against it. The per-bill intercept term ˛j is called the popularity. It captures that some bills are uncontroversial, where all lawmakers are likely to vote for them (or against them) regardless of their ideology. Using data of lawmakers voting on bills, political scientists approximate the posterior of the Bayesian ideal point model with an approximate inference method such as Markov Chain Monte Carlo (MCMC) (Jackman, 2001; Clinton et al., 2004) or expectation-maximization (EM) (Imai et al., 2016). Empirically, the posterior ideal points of the lawmakers accurately separate political parties and capture the spectrum of political preferences in American politics (Poole and Rosenthal, 2000). 2.2 Background: Poisson factorization Poisson factorization is a class of non-negative matrix factorization methods often employed as a topic model for bag-of-words text data (Canny, 2004; Cemgil, 2009; Gopalan et al., 2014). 5347 Poisson factorization factorizes a matrix of document/word counts into two positive matrices: a matrix  that contains per-document topic intensities, and a matrix ˇ that contains the topics. Denote the count of word v in document d by ydv. Poisson factorization posits the following probabilistic model over word counts, where a and b are hyperparameters: dk  Gamma.a; b/ ˇkv  Gamma.a; b/ ydv  Pois P k dkˇkv  : (2) Given a matrix y, practitioners approximate the posterior factorization with variational inference (Gopalan et al., 2015) or MCMC (Cemgil, 2009). Note that Poisson factorization can be interpreted as a Bayesian variant of nonnegative matrix factorization, with the so-called “KL loss function” (Lee and Seung, 1999). When the shape parameter a is less than 1, the latent vectors d and ˇk tend to be sparse. Consequently, the marginal likelihood of each count places a high mass around zero and has heavy tails (Ranganath et al., 2015). The posterior components are interpretable as topics (Gopalan et al., 2015). 2.3 The text-based ideal point model The text-based ideal point model (tbip) is a probabilistic model that is designed to infer political preferences from political texts. There are important differences between a dataset of votes and a corpus of authored political language. A vote is one of two choices, “yea” or “nay.” But political language is high dimensional—a lawmaker’s speech involves a vocabulary of thousands. A vote sends a clear signal about a lawmaker’s opinion about a bill. But political speech is noisy—the use of a word might be irrelevant to ideology, provide only a weak signal about ideology, or change signal depending on context. Finally, votes are organized in a matrix, where each one is unambiguously attached to a specific bill and nearly all lawmakers vote on all bills. But political language is unstructured and sparse. A corpus of political language can discuss any number of issues—with speeches possibly involving several issues—and the issues are unlabeled and possibly unknown in advance. The tbip is based on the concept of political framing. Framing is the idea that a communicator will emphasize certain aspects of a message – implicitly or explicitly – to promote a perspective or agenda (Entman, 1993; Chong and Druckman, 2007). In politics, an author’s word choice for a particular issue is affected by the ideological message she is trying to convey. A conservative discussing abortion is more likely to use terms such as “life” and “unborn,” while a liberal discussing abortion is more likely to use terms like “choice” and “body.” In this example, a conservative is framing the issue in terms of morality, while a liberal is framing the issue in terms of personal liberty. The tbip casts political framing in a probabilistic model of language. While the classical ideal point model infers ideology from the differences in votes on a shared set of bills, the tbip infers ideology from the differences in word choice on a shared set of topics. The tbip is a probabilistic model that builds on Poisson factorization. The observed data are word counts and authors: ydv is the word count for term v in document d, and ad is the author of the document. Some of the latent variables in the tbip are inherited from Poisson factorization: the nonnegative K-vector of per-document topic intensities is d and the topics themselves are non-negative V -vectors ˇk, where K is the number of topics and V is the vocabulary size. We refer to ˇ as the neutral topics. Two additional latent variables capture the politics: the ideal point of an author s is a real-valued scalar xs, and the ideological topic is a real-valued V -vector k. The tbip uses its latent variables in a generative model of authored political text, where the ideological topic adjusts the neutral topic—and thus the word choice—as a function of the ideal point of the author. Place sparse Gamma priors on  and ˇ, and normal priors on  and x, so for all documents d, words v, topics k, and authors s, dk  Gamma.a; b/ kv  N .0; 1/ ˇkv  Gamma.a; b/ xs  N .0; 1/: These latent variables interact to draw the count of term v in document d, ydv  Pois P k dkˇkv expfxad kvg  : (3) For a topic k and term v, a non-zero kv will increase the Poisson rate of the word count if it shares the same sign as the ideal point of the author xad , and decrease the Poisson rate if they are of opposite signs. Consider a topic about gun control and suppose kv > 0 for the term “constitution.” An author with an ideal point xs > 0, say a conservative 5348 Ideology Top Words Liberal dreamers, dream, undocumented, daca, comprehensive immigration reform, deport, young, deportation Neutral immigration, united states, homeland security, department, executive, presidents, law, country Conservative laws, homeland security, law, department, amnesty, referred, enforce, injunction Liberal affordable care act, seniors, medicare, medicaid, sick, prescription drugs, health insurance Neutral health care, obamacare, affordable care act, health insurance, insurance, americans, coverage, percent Conservative health care law, obamacare, obama, democrats, obamacares, deductibles, broken promises Liberal gun violence, gun, guns, killed, hands, loophole, background checks, close Neutral gun, guns, second, orlando, question, firearms, shooting, background checks Conservative second, constitutional rights, rights, due process, gun control, mental health, list, mental illness Table 1. The tbip learns topics from Senate speeches that vary as a function of the senator’s political positions. The neutral topics are for an ideal point of 0; the ideological topics fix ideal points at 1 and C1. We interpret one extreme as liberal and the other as conservative. Data is from the 114th U.S. Senate. author, will be more likely to use the term “constitution” when discussing gun control; an author with an ideal point xs < 0, a liberal author, will be less likely to use the term. Suppose kv < 0 for the term “violence.” Now the liberal author will be more likely than the conservative to use this term. Finally suppose kv D 0 for the term “gun.” This term will be equally likely to be used by the authors, regardless of their ideal points. To build more intuition, examine the elements of the sum in the Poisson rate of Equation (3) and rewrite slightly to dk exp.log ˇkv C xad kv/. Each of these elements mimics the classical ideal point model in Equation (1), where kv now measures the “polarity” of term v in topic k and log ˇkv is the intercept or “popularity.” When kv and xad have the same sign, term v is more likely to be used when discussing topic k. If kv is near zero, then the term is not politicized, and its count comes from a Poisson factorization. For each document d, the elements of the sum that contribute to the overall rate are those for which dk is positive; that is, those for the topics that are being discussed in the document. The posterior distribution of the latent variables provides estimates of the ideal points, neutral topics, and ideological topics. For example, we estimate this posterior distribution using a dataset of senator speeches from the 114th United States Senate session. The fitted ideal points in Figure 1 show that the tbip largely separates lawmakers by political party, despite not having access to these labels or votes. Table 1 depicts neutral topics (fixing the fitted Okv to be 0) and the corresponding ideological topics by varying the sign of Okv. The topic for immigration shows that a liberal framing emphasizes “Dreamers” and “DACA”, while the conservative frame emphasizes “laws” and “homeland security.” We provide more details and empirical studies in Section 5. 3 Related work Most ideal point models focus on legislative rollcall votes. These are typically latent-space factor models (Poole and Rosenthal, 1985; McCarty et al., 1997; Poole and Rosenthal, 2000), which relate closely to item-response models (Bock and Aitkin, 1981; Bailey, 2001). Researchers have also developed Bayesian analogues (Jackman, 2001; Clinton et al., 2004) and extensions to time series, particularly for analyzing the Supreme Court (Martin and Quinn, 2002). Some recent models combine text with votes or party information to estimate ideal points of legislators. Gerrish and Blei (2011) analyze votes and the text of bills to learn ideological language. Gerrish and Blei (2012) and Lauderdale and Clark (2014) use text and vote data to learn ideal points adjusted for topic. The models in Nguyen et al. (2015) and Kim et al. (2018) analyze votes and floor speeches together. With labeled political party affiliations, machine learning methods can also help map language to party membership. Iyyer et al. (2014) use neural networks to learn partisan phrases, while the models in Tsur et al. (2015) and Gentzkow et al. (2019) use political party labels to analyze differences in speech patterns. Since the tbip does not use votes or party information, it is applicable to all political texts, even when votes and party labels are not present. Moreover, party labels can be restrictive because they force hard membership in one of two groups (in American politics). The tbip can infer how topics change smoothly across the political spectrum, rather than simply learning topics for each political party. Annotated text data has also been used to pre5349 dict ideological positions. Wordscores (Laver et al., 2003; Lowe, 2008) uses texts that are hand-labeled by political position to measure the conveyed positions of unlabeled texts; it has been used to measure the political landscape of Ireland (Benoit and Laver, 2003; Herzog and Benoit, 2015). Ho et al. (2008) analyze hand-labeled editorials to estimate ideal points for newspapers. The ideological topics learned by the tbip are also related to political frames (Entman, 1993; Chong and Druckman, 2007). Historically, these frames have either been hand-labeled by annotators (Baumgartner et al., 2008; Card et al., 2015) or used annotated data for supervised prediction (Johnson et al., 2017; Baumer et al., 2015). In contrast to these methods, the tbip is completely unsupervised. It learns ideological topics that do not need to conform to pre-defined frames. Moreover, it does not depend on the subjectivity of coders. wordfish (Slapin and Proksch, 2008) is a model of authored political texts about a single issue, similar to a single-topic version of tbip. wordfish has been applied to party manifestos (Proksch and Slapin, 2009; Lo et al., 2016) and single-issue dialogue (Schwarz et al., 2017). wordshoal (Lauderdale and Herzog, 2016a) extends wordfish to multiple issues by analyzing a collection of labeled texts, such as Senate speeches labeled by debate topic. wordshoal fits separate wordfish models to the texts about each label, and combines the fitted models in a one-dimensional factor analysis to produce ideal points. In contrast to these models, the tbip does not require a grouping of the texts into single issues. It naturally accommodates unstructured texts, such as tweets, and learns both ideal points for the authors and ideologyadjusted topics for the (latent) issues under discussion. Furthermore, by relying on stochastic optimization, the tbip algorithm scales to large data sets. In Section 5 we empirically study how the tbip ideal points compare to both of these models. 4 Inference The tbip involves several types of latent variables: neutral topics ˇk, ideological topics k, topic intensities d, and ideal points xs. Conditional on the text, we perform inference of the latent variables through the posterior distribution p.; ˇ; ; xjy/. But calculating this distribution is intractable. We rely on approximate inference. We use mean-field variational inference to fit an approximate posterior distribution (Jordan et al., 1999; Wainwright et al., 2008; Blei et al., 2017). Variational inference frames the inference problem as an optimization problem. Set q.; ˇ; ; x/ to be a variational family of approximate posterior distributions, indexed by variational parameters . Variational inference aims to find the setting of  that minimizes the KL divergence between q and the posterior. Minimizing this KL divergence is equivalent to maximizing the evidence lower bound (elbo), EqŒlog p.; ˇ; ; x/ C log p.yj; ˇ; ; x/ log q.; ˇ; ; x/: The elbo sums the expectation of the log joint (here broken up into the log prior and log likelihood) and the entropy of the variational distribution. To approximate the tbip posterior we set the variational family to be the mean-field family. The mean-field family factorizes over the latent variables, where d indexes documents, k indexes topics, and s indexes authors: q.; ˇ; ; x/ D Y d;k;s q.d/q.ˇk/q.k/q.xs/: We use lognormal factors for the positive variables and Gaussian factors for the real variables, q.d/ D LogNormalK.d ; I2 d / q.ˇk/ D LogNormalV .ˇk; I2 ˇk/ q.k/ D NV .k; I2 k/ q.xs/ D N .xs; 2 xs/: Our goal is to optimize the elbo with respect to  D f;  2  ; ˇ;  2 ˇ ; ;  2  ; x;  2 x g. We use stochastic gradient ascent. We form noisy gradients with Monte Carlo and the “reparameterization trick” (Kingma and Welling, 2014; Rezende et al., 2014), as well as with data subsampling (Hoffman et al., 2013). To set the step size, we use Adam (Kingma and Ba, 2015). We initialize the neutral topics and topic intensities with a pre-trained model. Specifically, we pre-train a Poisson factorization topic model using the algorithm in Gopalan et al. (2015). The tbip algorithm uses the resulting factorization to initialize the variational parameters for d and ˇk. The full procedure is described in Appendix A. For the corpus of Senate speeches described in Section 2, training takes 5 hours on a single 5350 Votes Speeches Tweets Chuck Schumer (D-NY) Bernie Sanders (I-VT) Joe Manchin (D-WV) Susan Collins (R-ME) Jeff Sessions (R-AL) Deb Fischer (R-NE) Correlation to vote ideal points — 0.88 0.94 Mitch McConnell (R-KY) Figure 2. The ideal points learned by the tbip for senator speeches and tweets are highly correlated with the classical vote ideal points. Senators are coded by their political party (Democrats in blue circles, Republicans in red x’s). Although the algorithm does not have access to these labels, the tbip almost completely separates parties. NVIDIA Titan V GPU. We have released open source software for Tensorflow and PyTorch.1 5 Empirical studies We study the text-based ideal point model (tbip) on several datasets of political texts. We first use the tbip to analyze speeches and tweets (separately) from U.S. senators. For both types of texts, the tbip ideal points, which are estimated from text, are close to the classical ideal points, which are estimated from votes. We also compare the tbip to existing methods for scaling political texts (Slapin and Proksch, 2008; Lauderdale and Herzog, 2016a). The tbip performs better, finding ideal points closer to the vote-based ideal points. Finally, we use the tbip to analyze a group that does not vote: 2020 Democratic presidential candidates. Using only tweets, it estimates ideal points for the candidates on an interpretable progressive-to-moderate spectrum. 5.1 The tbip on U.S. Senate speeches We analyze Senate speeches provided by Gentzkow et al. (2018), focusing on the 114th session of Congress (2015-2017). We compare ideal points found by the tbip to the vote-based ideal point model from Equation (1). (Appendix B provides details about the comparison.) We use approximate posterior means, learned with variational inference, to estimate the latent variables. The estimated ideal points are Ox; the estimated neutral topics are Oˇ; the estimated ideological topics are O. Figure 2 compares the tbip ideal points on 1http://github.com/keyonvafa/tbip speeches to the vote-based ideal points.2 Both models largely separate Democrats and Republicans. In the tbip estimates, progressive senator Bernie Sanders (I-VT) is on one extreme, and Mitch McConnell (R-KY) is on the other. Susan Collins (R-ME), a Republican senator often described as moderate, is near the middle. The correlation between the tbip ideal points and vote ideal points is high, 0:88. Using only the text of the speeches, the tbip captures meaningful information about political preferences, separating the political parties and organizing the lawmakers on a meaningful political spectrum. We next study the topics. For selected topics, Table 1 shows neutral terms and ideological terms. To visualize the neutral topics, we list the top words based on Oˇk. To visualize the ideological topics, we calculate term intensities for two poles of the political spectrum, xs D 1 and xs D C1. For a fixed k, the ideological topics thus order the words by EŒˇkv exp.kv/ and EŒˇkv exp.kv/. Based on the separation of political parties in Figure 1, we interpret negative ideal points as liberal and positive ideal points as conservative. Table 1 shows that when discussing immigration, a senator with a neutral ideal point uses terms like “immigration” and “United States.” As the author moves left, she will use terms like “Dreamers” and “DACA.” As she moves right, she will emphasize terms like “laws” and “homeland security.” The tbip also captures that those on the left refer to health care legislation as the Affordable Care Act, while those on the right call it Obamacare. Additionally, a liberal 2Throughout our analysis, we appropriately rotate and standardize ideal points so they are visually comparable. 5351 Speeches 111 Speeches 112 Speeches 113 Tweets 114 Corr. SRC Corr. SRC Corr. SRC Corr. SRC wordfish 0.47 0.45 0.52 0.53 0.69 0.64 0.87 0.80 wordshoal 0.61 0.64 0.60 0.56 0.45 0.44 — — tbip 0.79 0.73 0.86 0.85 0.87 0.84 0.94 0.84 Table 2. The tbip learns ideal points most similar to the classical vote ideal points for U.S. senator speeches and tweets. It learns closer ideal points than wordfish and wordshoal in terms of both correlation (Corr.) and Spearman’s rank correlation (SRC). The numbers in the column titles refer to the Senate session of the corpus. wordshoal cannot be applied to tweets because there are no debate labels. senator discussing guns brings attention to gun control: “gun violence” and “background checks” are among the largest intensity terms. Meanwhile, conservative senators are likely to invoke gun rights, emphasizing “constitutional rights.” Comparison to Wordfish and Wordshoal. We next treat the vote-based ideal points as “groundtruth” labels and compare the tbip ideal points to those found by wordfish and wordshoal. wordshoal requires debate labels, so we use the labeled Senate speech data provided by Lauderdale and Herzog (2016b) on the 111th–113th Senates to train each method. Because we are interested in comparing models, we use the same variational inference procedure to train all methods. See Appendix B for more details. We use two metrics to compare text-based ideal points to vote-based ideal points: the correlation between ideal points and Spearman’s rank correlation between their orderings of the senators. With both metrics, when compared to vote ideal points from Equation (1), the tbip outperforms wordfish and wordshoal; see Table 2. Comparing to another vote-based method, dw-nominate (Poole, 2005), produces similar results; see Appendix C. 5.2 The tbip on U.S. Senate tweets We use the tbip to analyze tweets from U.S. senators during the 114th Senate session, using a corpus provided by VoxGovFEDERAL (2020). Tweet-based ideal points almost completely separate Democrats and Republicans; see Figure 2. Again, Bernie Sanders (I-VT) is the most extreme Democrat, and Mitch McConnell (R-KY) is one of the most extreme Republicans. Susan Collins (R-ME) remains near the middle; she is among the most moderate senators in vote-based, speechbased, and tweet-based models. The correlation between vote-based ideal points and tweet-based ideal points is 0:94. We also use senator tweets to compare the tbip to wordfish (we cannot apply wordshoal because tweets do not have debate labels). Again, the tbip learns closer ideal points to the classical vote ideal points; see Table 2. 5.3 Using the tbip as a descriptive tool As a descriptive tool, the tbip provides hints about the different ways senators use speeches or tweets to convey political messages. We use a likelihood ratio to help identify the texts that influenced the tbip ideal point. Consider the log likelihood of a document using a fixed ideal point Qx and fitted values for the other latent variables, `d. Qx/ D X v log p.ydvj O; Oˇ; O; Qx/: Ratios based on this likelihood can help point to why the tbip places a lawmaker as extreme or moderate. For a document d, if `d. Oxad / `d.0/ is high then that document was (statistically) influential in making Oxad more extreme. If `d. Oxad / `d.maxs. Oxs// or `d. Oxad / `d.mins. Oxs// is high then that document was influential in making Oxad less extreme. We emphasize this diagnostic does not convey any causal information, but rather helps understand the relationship between the data and the tbip inferences. Bernie Sanders (I-VT). Bernie Sanders is an Independent senator who caucuses with the Democratic party; we refer to him as a Democrat. Among Democrats, his ideal point changes the most between one estimated from speeches and one estimated from votes. Although his vote-based ideal point is the 17th most liberal, the tbip ideal point based on Senate speeches is the most extreme. We use the likelihood ratio to understand this difference in his vote-based and speech-based ideal 5352 Bernie Sanders Elizabeth Warren Tulsi Gabbard Kamala Harris Bill de Blasio Julian Castro Kirsten Gillibrand Cory Booker Beto O’Rourke Joe Biden Pete Buttigieg Tom Steyer Tim Ryan Mike Bloomberg Amy Klobuchar Michael Bennet John Hickenlooper John Delaney Steve Bullock Figure 3. Based on tweets, the tbip places 2020 Democratic presidential candidates along an interpretable progressive-to-moderate spectrum. points. His speeches with the highest likelihood ratio are about income inequality and universal health care, which are both progressive issues. The following is an excerpt from one such speech: “The United States is the only major country on Earth that does not guarantee health care to all of our people... At a time when the rich are getting richer and the middle class is getting poorer, the Republicans take from the middle class and working families to give more to the rich and large corporations.” Sanders is considered one of the most liberal senators; his extreme speech ideal point is sensible. That Sanders’ vote-based ideal point is not more extreme appears to be a limitation of the vote-based method. Applying the likelihood ratio to votes helps illustrate the issue. (Here a bill takes the place of a document.) The ratio identifies H.R. 2048 as influential. This bill is a rollback of the Patriot Act that Sanders voted against because it did not go far enough to reduce federal surveillance capabilities (RealClearPolitics, 2015). In voting “nay”, he was joined by one Democrat and 30 Republicans, almost all of whom voted against the bill because they did not want surveillance capabilities curtailed at all. Vote-based ideal points, which only model binary values, cannot capture this nuance in his opinion. As a result, Sanders’ vote-based ideal point is pulled to the right. Deb Fischer (R-NE). Turning to tweets, Deb Fischer’s tweet-based ideal point is more liberal than her vote-based ideal point; her vote ideal point is the 11th most extreme among senators, while her tweet ideal point is the 43rd most extreme. The likelihood ratio identifies the following tweets as responsible for this moderation: “I want to empower women to be their own best advocates, secure that they have the tools to negotiate the wages they deserve. #EqualPay” “FACT: 1963 Equal Pay Act enables women to sue for wage discrimination. #GetitRight #EqualPayDay” The tbip associates terms about equal pay and women’s rights with liberals. A senator with the most liberal ideal point would be expected to use the phrase “#EqualPay” 20 times as much as a senator with the most conservative ideal point and “women” 9 times as much, using the topics in Fischer’s first tweet above. Fischer’s focus on equal pay for women moderates her tweet ideal point. JeffSessions (R-AL). The likelihood ratio can also point to model limitations. JeffSessions is a conservative voter, but the tbip identifies his speeches as moderate. One of the most influential speeches for his moderate text ideal point, as identified by the likelihood ratio, criticizes Deferred Actions for Childhood Arrivals (DACA), an immigration policy established by President Obama that introduced employment opportunities for undocumented individuals who arrived as children: “The President of the United States is giving work authorizations to more than 4 million people, and for the most part they are adults. Almost all of them are adults. Even the so-called DACA proportion, many of them are in their thirties. So this is an adult job legalization program.” This is a conservative stance against DACA. So why does the tbip identify it as moderate? As depicted in Table 1, liberals bring up “DACA” when discussing immigration, while conservatives emphasize “laws” and “homeland security.” The fitted 5353 Ideology Top Words Progressive class, billionaire, billionaires, walmart, wall street, corporate, executives, government Neutral economy, pay, trump, business, tax, corporations, americans, billion Moderate trade war, trump, jobs, farmers, economy, economic, tariffs, businesses, promises, job Progressive #medicareforall, insurance companies, profit, health care, earth, medical debt, health care system, profits Neutral health care, plan, medicare, americans, care, access, housing, millions Moderate healthcare, universal healthcare, public option, plan, universal coverage, universal health care, away, choice Progressive green new deal, fossil fuel industry, fossil fuel, planet, pass, #greennewdeal, climate crisis, middle ground Neutral climate change, climate, climate crisis, plan, planet, crisis, challenges, world Moderate solutions, technology, carbon tax, climate change, challenges, climate, negative, durable Table 3. The tbip learns topics from 2020 Democratic presidential candidate tweets that vary as a function of the candidate’s political positions. The neutral topics are for an ideal point of 0; the ideological topics fix ideal points at 1 and C1. We interpret one extreme as progressive and the other as moderate. expected count of “DACA” using the most liberal ideal point for the topics in the above speech is 1:04, in contrast to 0:04 for the most conservative ideal point. Since conservatives do not focus on DACA, Sessions even bringing up the program sways his ideal point toward the center. Although Sessions refers to DACA disapprovingly, the bag-of-words model cannot capture this negative sentiment. 5.4 2020 Democratic candidates We also analyze tweets from Democratic presidential candidates for the 2020 election. Since all of the candidates running for President do not vote on a shared set of issues, their ideal points cannot be estimated using vote-based methods. Figure 3 shows tweet-based ideal points for the 2020 Democratic candidates. Elizabeth Warren and Bernie Sanders, who are often considered progressive, are on one extreme. Steve Bullock and John Delaney, often considered moderate, are on the other. The selected topics in Table 3 showcase this spectrum. Candidates with progressive ideal points focus on: billionaires and Wall Street when discussing the economy, Medicare for All when discussing health care, and the Green New Deal when discussing climate change. On the other extreme, candidates with moderate ideal points focus on: trade wars and farmers when discussing the economy, universal plans for health care, and technological solutions to climate change. 6 Summary We developed the text-based ideal point model (tbip), an ideal point model that analyzes texts to quantify the political positions of their authors. It estimates the latent topics of the texts, the ideal points of their authors, and how each author’s political position affects her choice of words within each topic. We used the tbip to analyze U.S. Senate speeches and tweets. Without analyzing the votes themselves, the tbip separates lawmakers by party, learns interpretable politicized topics, and infers ideal points close to the classical vote-based ideal points. Moreover, the tbip can estimate ideal points of anyone who authors political texts, including non-voting actors. When used to study tweets from 2020 Democratic presidential candidates, the tbip identifies them along a progressive-to-moderate spectrum. Acknowledgments This work is funded by ONR N00014-17-1-2131, ONR N00014-15-1-2209, NIH 1U01MH115727-01, NSF CCF-1740833, DARPA SD2 FA8750-18-C-0130, Amazon, NVIDIA, and the Simons Foundation. Keyon Vafa is supported by NSF grant DGE-1644869. We thank Voxgov for providing us with senator tweet data. We also thank Mark Arildsen, Naoki Egami, Aaron Schein, and anonymous reviewers for helpful comments and feedback. References Michael Bailey. 2001. Ideal point estimation with a small number of votes: A random-effects approach. Political Analysis, 9(3):192–210. Eric Baumer, Elisha Elovic, Ying Qin, Francesca Polletta, and Geri Gay. 2015. Testing and comparing computational approaches for identifying the language of framing in political news. In Proceedings of ACL. Frank R Baumgartner, Suzanna L De Boef, and Amber E Boydstun. 2008. The decline of the death penalty and the discovery of innocence. Cambridge University Press. Kenneth Benoit and Michael Laver. 2003. Estimating Irish party policy positions using computer wordscoring: The 2002 election. Irish Political Studies, 18(1):97–107. 5354 David M Blei, Alp Kucukelbir, and Jon D McAuliffe. 2017. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877. R Darrell Bock and Murray Aitkin. 1981. Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika, 46(4):443–459. John Canny. 2004. GaP: A factor model for discrete data. In ACM SIGIR Conference on Research and Development in Information Retrieval. Dallas Card, Amber Boydstun, Justin H Gross, Philip Resnik, and Noah A Smith. 2015. The media frames corpus: Annotations of frames across issues. In Proceedings of ACL. Ali Taylan Cemgil. 2009. Bayesian inference for nonnegative matrix factorisation models. Computational Intelligence and Neuroscience, 2009. Dennis Chong and James N Druckman. 2007. Framing theory. Annual Review of Political Science, 10:103– 126. Joshua Clinton, Simon Jackman, and Douglas Rivers. 2004. The statistical analysis of roll call data. American Political Science Review, 98(2):355–370. Robert M Entman. 1993. Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4):51–58. Matthew Gentzkow, Jesse M Shapiro, and Matt Taddy. 2018. Congressional record for the 43rd-114th Congresses: Parsed speeches and phrase counts. Stanford Libraries. Matthew Gentzkow, Jesse M Shapiro, and Matt Taddy. 2019. Measuring group differences in highdimensional choices: Method and application to congressional speech. Econometrica, 87(4):1307–1340. Sean Gerrish and David M Blei. 2011. Predicting legislative roll calls from text. In Proceedings of ICML. Sean Gerrish and David M Blei. 2012. How they vote: Issue-adjusted models of legislative behavior. In Proceedings of NeurIPS. Prem Gopalan, Jake M Hofman, and David M Blei. 2015. Scalable recommendation with Poisson factorization. In Proceedings of UAI. Prem K Gopalan, Laurent Charlin, and David M Blei. 2014. Content-based recommendations with Poisson factorization. In Proceedings of NeurIPS. Alexander Herzog and Kenneth Benoit. 2015. The most unkindest cuts: Speaker selection and expressed government dissent during economic crisis. The Journal of Politics, 77(4):1157–1175. Daniel E Ho, Kevin M Quinn, et al. 2008. Measuring explicit political positions of media. Quarterly Journal of Political Science, 3(4):353–377. Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. 2013. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347. Kosuke Imai, James Lo, and Jonathan Olmsted. 2016. Fast estimation of ideal points with massive data. American Political Science Review, 110(4):631–656. Mohit Iyyer, Peter Enns, Jordan Boyd-Graber, and Philip Resnik. 2014. Political ideology detection using recursive neural networks. In Proceedings of ACL. Simon Jackman. 2001. Multidimensional analysis of roll call data via Bayesian simulation: Identification, estimation, inference, and model checking. Political Analysis, 9(3):227–241. Kristen Johnson, Di Jin, and Dan Goldwasser. 2017. Leveraging behavioral and social information for weakly supervised collective classification of political discourse on Twitter. In Proceedings of ACL. Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. 1999. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233. In Song Kim, John Londregan, and Marc Ratkovic. 2018. Estimating spatial preferences from votes and text. Political Analysis, 26(2):210–229. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Diederik P Kingma and Max Welling. 2014. Autoencoding variational Bayes. In Proceedings of ICLR. Benjamin E Lauderdale and Tom S Clark. 2014. Scaling politically meaningful dimensions using texts and votes. American Journal of Political Science, 58(3):754–771. Benjamin E Lauderdale and Alexander Herzog. 2016a. Measuring political positions from legislative speech. Political Analysis, 24(3):374–394. Benjamin E Lauderdale and Alexander Herzog. 2016b. Replication data for: Measuring political positions from legislative speech. Harvard Dataverse. Michael Laver, Kenneth Benoit, and John Garry. 2003. Extracting policy positions from political texts using words as data. American Political Science Review, 97(2):311–331. Daniel D Lee and H Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788. 5355 Jeffrey B. Lewis, Keith Poole, Howard Rosenthal, Adam Boche, Aaron Rudkin, and Luke Sonnet. 2020. Voteview: Congressional roll-call votes database. James Lo, Sven-Oliver Proksch, and Jonathan B Slapin. 2016. Ideological clarity in multiparty competition: A new measure and test using election manifestos. British Journal of Political Science, 46(3):591–610. Will Lowe. 2008. Understanding wordscores. Political Analysis, 16(4):356–371. Andrew D Martin and Kevin M Quinn. 2002. Dynamic ideal point estimation via Markov Chain Monte Carlo for the US Supreme Court, 1953–1999. Political Analysis, 10(2):134–153. Nolan M McCarty, Keith T Poole, and Howard Rosenthal. 1997. Income redistribution and the realignment of American politics. American Enterprise Institute Press. Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, and Kristina Miler. 2015. Tea Party in the house: A hierarchical ideal point topic model and its application to Republican legislators in the 112th Congress. In Proceedings of ACL. Keith T Poole. 2005. Spatial models of parliamentary voting. Cambridge University Press. Keith T Poole and Howard Rosenthal. 1985. A spatial model for legislative roll call analysis. American Journal of Political Science, pages 357–384. Keith T Poole and Howard Rosenthal. 2000. Congress: A political-economic history of roll call voting. Oxford University Press on Demand. Sven-Oliver Proksch and Jonathan B Slapin. 2009. How to avoid pitfalls in statistical analysis of political texts: The case of Germany. German Politics, 18(3):323–344. Rajesh Ranganath, Linpeng Tang, Laurent Charlin, and David M Blei. 2015. Deep exponential families. In Proceedings of AISTATS. RealClearPolitics. 2015. Bernie Sanders on USA Freedom Act: ”I may well be voting for it,” does not go far enough. Online; posted 31-May-2015. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of ICML. Daniel Schwarz, Denise Traber, and Kenneth Benoit. 2017. Estimating intra-party preferences: Comparing speeches to votes. Political Science Research and Methods, 5(2):379–396. Jonathan B Slapin and Sven-Oliver Proksch. 2008. A scaling model for estimating time-series party positions from texts. American Journal of Political Science, 52(3):705–722. Oren Tsur, Dan Calacci, and David Lazer. 2015. A frame of mind: Using statistical models for detection of framing and agenda setting campaigns. In Proceedings of ACL. VoxGovFEDERAL. 2020. U.S. senators tweets from the 114th Congress. Martin J Wainwright, Michael I Jordan, et al. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305. A Algorithm We present the full procedure for training the textbased ideal point model (tbip) in Algorithm 1. We make a final modification to the model in Equation (3). If some political authors are more verbose than others (i.e. use more words per document), the learned ideal points may reflect verbosity rather than a political preference. Thus, we multiply the expected word count by a term that captures the author’s verbosity compared to all authors. Specifically, if ns is the average word count over documents for author s, we set a weight: ws D ns 1 S P s0 ns0 ; (4) for S the number of authors. We then multiply the rate in Equation (3) by wad . Empirically, we find this modification does not make much of a difference for the correlation results, but it helps us interpret the ideal points for the qualitative analysis. B Data and inference settings Senator speeches We remove senators who made less than 24 speeches. To lessen non-ideological correlations in the speaking patterns of senators from the same state, we remove cities and states in addition to stopwords and procedural terms. We include all unigrams, bigrams, and trigrams that appear in at least 0.1% of documents and at most 30%. To ensure that the inferences are not influenced by procedural terms used by a small number of senators with special appointments, we only include phrases that are spoken by 10 or more senators. This preprocessing leaves us with 19,009 documents from 99 senators, along with 14,503 terms in the vocabulary. To train the tbip, we perform stochastic gradient ascent using Adam (Kingma and Ba, 2015), with a mini-batch size of 512. To curtail extreme word 5356 Algorithm 1: The text-based ideal point model (tbip) Input: Word counts y, authors a, and number of topics K (D documents and V words) Output: Document intensities O, neutral topics Oˇ, ideological topic offsets O, ideal points Ox Pretrain: Hierarchical Poisson factorization (Gopalan et al., 2015) to obtain initial estimates O and Oˇ Initialize: Variational parameters  2  ;  2 ˇ ; ;  2  ; x;  2 x randomly,  D log. O/, ˇ D log. Oˇ/ Compute weights w as in Equation (4) while the evidence lower bound (elbo) has not converged do sample a document index d 2 f1; 2; : : : ; Dg sample z; zˇ; z; zx  N .0; I/ F Sample noise distribution Set Q D exp.z ˇ  C / and Qˇ D exp.zˇ ˇ ˇ C ˇ/ F Reparameterize Set Q D z ˇ  C  and Qx D zx ˇ x C x F Reparameterize for v 2 f1; : : : ; V g do Set dv D P k Qdk Qˇkv exp.Qkv Qxad /   wad Compute log p.ydvj Q; Qˇ; Q; Qx/ D log Pois.ydvjdv/ F Log-likelihood term end Set log p.ydj Q; Qˇ; Q; Qx/ D P v log p.ydvj Q; Qˇ; Q; Qx/ F Sum over words Compute log p. Q; Qˇ; Q; Qx/ and log q. Q; Qˇ; Q; Qx/ F Prior and entropy terms Set elbo D log p. Q; Qˇ; Q; Qx/ C N  log p.ydj Q; Qˇ; Q; Qx/ log q. Q; Qˇ; Q; Qx/ Compute gradients relbo using automatic differentiation Update parameters  end return approximate posterior means O; Oˇ; O; Ox count values from long speeches, we take the natural logarithm of the counts matrix before performing inference (appropriately adding 1 and rounding so that a word count of 1 is transformed to still be 1). We use a single Monte Carlo sample to approximate the gradient of each batch. We assume 50 latent topics and posit the following prior distributions: dk; ˇkv  Gamma.0:3; 0:3/, kv; xs  N .0; 1/. We train the vote ideal point model by removing all votes that are not cast as “yea” or “nay” and performing mean-field variational inference with Gaussian variational distributions. Since each variational family is Gaussian, we approximate gradients using the reparameterization trick (Rezende et al., 2014; Kingma and Ba, 2015). For the comparisons against wordfish and wordshoal, we preprocess speeches in the same way as Lauderdale and Herzog (2016a). We train each Senate session separately, thereby only including one timestep for wordfish. For this reason, our results on the U.S. Senate differ from those reported by Lauderdale and Herzog (2016a), who train a model jointly over all time periods. Additionally, we use variational inference with reparameterization gradients to train all methods. Specifically, we perform mean-field variational inference, positing Gaussian variational families on all real variables and lognormal variational families on all positive variables. Senator tweets Our Senate tweet preprocessing is similar to the Senate speech preprocessing, although we now include all terms that appear in at least 0.05% of documents rather than 0.01% to account for the shorter tweet lengths. We remove cities and states in addition to stopwords and the names of politicians. This preprocessing leaves us with 209,779 tweets. We use the same model and hyperparameters as for speeches, although we no longer take the natural logarithm of the counts matrix since individual tweets cannot have extreme word counts due to the character limit. We use a batch size of 1,024. 2020 Democratic candidates We scrape the Twitter feeds of 19 candidates, including all tweets between January 1, 2019 and February 27, 2020. We do not include Andrew Yang, Jay Inslee, and Marianne Williamson since it is difficult to define 5357 Speeches 111 Speeches 112 Speeches 113 Tweets 114 Corr. SRC Corr. SRC Corr. SRC Corr. SRC wordfish 0.52 0.49 0.51 0.51 0.71 0.65 0.79 0.74 wordshoal 0.62 0.66 0.58 0.51 0.46 0.46 — — tbip 0.82 0.77 0.85 0.85 0.89 0.86 0.94 0.88 Table 4. The tbip learns ideal points most similar to dw-nominate vote ideal points for U.S. senator speeches and tweets. It learns closer ideal points than wordfish and wordshoal in terms of both correlation (Corr.) and Spearman’s rank correlation (SRC). The numbers in the column titles refer to the Senate session of the corpus. wordshoal cannot be applied to tweets because there are no debate labels. the political preferences of non-traditional or singleissue candidates. We follow the same preprocessing we used for the 114th Senate, except we include tokens that are used in more than 0.05% of documents rather than 0.1%. We remove phrases used by only one candidate, along with stopwords and candidate names. This preprocessing leaves us with 45,927 tweets for the 19 candidates. We use the same model and hyperparameters as for senator tweets. C Comparison to DW-Nominate dw-nominate (Poole, 2005) is a dynamic method for learning ideal points from votes. As opposed to the vote ideal point model in Equation (1), it analyzes votes across multiple Senate sessions. It also learns two latent dimensions per legislator. We also compare text ideal points to the first dimension of DW-Nominate, which corresponds to economic/redistributive preferences (Lewis et al., 2020). We use the fitted dw-nominate ideal points available on Voteview (Lewis et al., 2020). The tbip learns ideal points closer to dw-nominate than wordfish and wordshoal; see Table 4. In Section 5, we observed that Bernie Sanders’ vote ideal point is somewhat moderate under the scalar ideal point model from Equation (1). It is worth noting that Sanders’ vote ideal point is more extreme under dw-nominate than under the scalar model: his dw-nominate ideal point is the third-most extreme among Democrats. Since dwnominate uses two dimensions to model each legislator’s latent preferences, it can more flexibly model Sanders’ voting deviations. Additionally, the dynamic nature of dw-nominate may capture salient information from other Senate sessions. However, restricting the vote ideal point to be static and a scalar, like it is for the tbip, results in the more moderate vote ideal point in Section 5.
2020
475
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5358–5368 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5358 Understanding the Language of Political Agreement and Disagreement in Legislative Texts Maryam Davoodi Purdue University [email protected] Eric Waltenburg Purdue University [email protected] Dan Goldwasser Purdue University [email protected] Abstract While national politics often receive the spotlight, the overwhelming majority of legislation proposed, discussed, and enacted is done at the state level. Despite this fact, there is little awareness of the dynamics that lead to adopting these policies. In this paper, we take the first step towards a better understanding of these processes and the underlying dynamics that shape them, using data-driven methods. We build a new large-scale dataset, from multiple data sources, connecting state bills and legislator information, geographical information about their districts, and donations and donors’ information. We suggest a novel task, predicting the legislative body’s vote breakdown for a given bill, according to different criteria of interest, such as gender, rural-urban and ideological splits. Finally, we suggest a shared relational embedding model, representing the interactions between the text of the bill and the legislative context in which it is presented. Our experiments show that providing this context helps improve the prediction over strong text-based models. 1 Introduction Despite the fact that state-level legislation is rarely discussed, it has a dramatic influence on the everyday life of residents of the respective states. The policies enacted at the state-level touch on all aspects, from mundane topics, such as trash removal and state mascots, to highly ideologically-charged topics such as education, religious liberties, and health-care access. Moreover, state-legislatures discuss and vote-on significantly more bills than their Federal counterparts, adding up to over 120,000 bills per year (King, 2019). Also, the lack of general interest, as well as the complexity of the processes that differ across states, often leads to public disengagement from local politics. This results in decisions being made with little understanding of Republican Democrat b) Competitive c) Inverse-Competitive Yea 20% Nay 80% Yea 70% Nay 30% Yea 55% Nay 45% Yea 25% Nay 75% Yea 30% Nay 70% a) Failed Figure 1: Example of failure and party cleavages. the processes that shape them and how they are likely to influence different demographics. Similarly, most effort directed at understanding political processes using data was directed at the Federal level. In the NLP community, several works looked at analyzing political texts (Iyyer et al., 2014) and the resulting behaviors of legislators (Gerrish and Blei, 2011, 2012). The only exception is recent work (Eidelman et al., 2018), predicting whether a bill would pass the preliminary stage, legislative committee, to a full-body vote. State-level demographic cleavages: Our goal in this paper is to take a first step towards understanding the processes and interests that underlie how decisions are passed using data-driven methods. Our main intuition is that the impact of bills on different demographics will be reflected in the behavior and voting patterns of their representatives. Thus, providing the ability to automatically identify bills, before they are put to a vote, that will have a positive or negative influence on a specific demographic can help inform public responses and increase engagement with local political processes. To help achieve this goal, we define two novel text classification tasks, characterizing the breakdown of votes, based on different cleavages or demographic indicators such as gender, geography (i.e., rural vs. urban districts), party membership and ideological splits. With respect to each one of these splits, we define two aggregate-level proper5359 ties of a vote, competitive and inverse-competitive cleavages. Both of these measures capture the lack of consensus in the legislature body around a specific bill, but in different ways. We say that a bill is competitive in a vote (Fig. 1b) if the majority of legislators from a logical group (e.g., democrats, women, urban districts, liberals) vote differently from the majority of legislators from the opposite group (e.g., republican, men, rural districts, conservatives). A bill is inverse-competitive (Fig. 1c) if there is a partial or complete tie within the legislators from the same group (e.g., women). To help explain these concepts, consider a bill restricting access to abortion clinics. This bill is likely to results in a competitive vote, based on ideology. On the other hand, a bill granting tax breaks for farmers might result in a inverse-competitive vote, based on ideology. In that case, a competitive vote, based on geography is more likely. In Table 1, we provide examples of the different splits associated with real bills that were brought to a vote. Unsurprisingly, a “benign” bill, such as #1 is widely accepted and does not result in any contention. A contentious bill, such as #2, touching on the way religion is taught is split ideologically (i.e., the vote is almost unanimous inside each ideological group), but mixed based on economic and gender splits. Bill #4 addressing nepotism issues and regulating public contracts is contentious across all splits. Alerting the public when such bills are brought to a vote can help ensure that legislators take into account the opinions and voiced raised in their constituencies. Technical Contributions Although a text classification scheme is a reasonable starting point to determine demographic cleavages of bills only based on their content, it is not sufficient. Our key insight in this paper is that the context or relations through which specific information is propagated among different players in the legislative process (e.g., money donors and legislators), can be leveraged to further improve the performance. Thus, we build a shared relational architecture that models the text of a bill and its context into a graph; Our model captures the behavior of individual legislators, language of bills, and influence of contributions on the decision to identify demographic cleavages. While there are different ways to realize our relational model, we chose to build on recent advances in the NLP space, Relational Graph Convolutional Network (RGCN) (Schlichtkrull et al., 2018) and pretrained BERT transformers (Devlin et al., 2018). RGCN allows us to define multiple relations between each pair of entities (e.g., a legislator sponsorship and casting a vote on a bill) and BERT enables us to represent the textual information more efficiently. With the help of the attention-based architecture, BERT has been shown to outperform LSTM models. To operationalize our relational settings, we collected information from different sources and introduced a new dataset combining information about legislators, bills, donations, and donors as well as demographic information about the legislators and their districts. In our experiments, we analyze the implication of different relations on the performance and show that our shared architecture outperforms existing text and graph models. Table 1: Competitive and inverse-competitive bills. # Bill Title Gen. Geo. Ideo. Party 1 A CONCURRENT RESOLUTION congratulating the Pioneer Junior-Senior High School football team on winning the Indiana High School Athletic Association None None None None 2 Teaching of the origin of life Inver. Inver. Comp. Comp. 3 Beer dealer permits None None None None 4 Officeholder qualifications, nepotism, and public contracts Both Inver. Both Both 2 Related Work Bill analysis at the state level has received little attention and our work, while conducting a new in-depth modeling and analysis, is inspired by the following works: Classification of congress roll calls. (Eidelman et al., 2018) combines the text of the bill with partisan identity of the bill’s sponsor(s) in a model predicting the likelihood of a member of the U.S. Congress voting in support of a non-unanimous congress bill or resolution. They find that the models that combine text with sponsorship data significantly outperform several alternative models. Similarly, (Gerrish and Blei, 2011) uses topics associated with congress bills to infer its location in ideological space and then uses ideal point models to predict the likelihood of a U.S. Senator or House member voting in support of a bill. They find that their model increases predictive accuracy by about 4% over a naïve baseline model. (Patil et al., 2019; Kraft et al., 2016; Karimi et al., 2019; Kornilova et al., 2018; Peng et al., 2016) extend this congress model to learn embeddings for legislators and congress bills using other sources of data 5360 (e.g., Twitter, knowledge graphs). More recently, (Budhwar et al., 2018) evaluates different models for predicting roll-call votes based on verbal statements that legislators make during questioning. Predicting progress of bills Rather than using bill text in models to explain the roll-call behavior of individual legislators, (Yano et al., 2012) include the legislation’s text in a model that predicts whether a bill emerges from a standing committee, a point in the legislative process that most bills do not pass. In particular, they use features based on the urgency and importance of the issue being addressed by the bill as well as a set of features extracted from co-sponsors of the bill. Examining the fate of bills between the 103rd and 111th congresses, they find that including features of the bill drawn from the text improves the model’s predictive accuracy over their baseline model. (Eidelman et al., 2018) repeat a similar analysis for the states and they show “that combining contextual information about the legislators and the legislatures with bill text consistently provides the best predictions”. (Nay, 2017) examines the text of congressional to identify the text structure most associated with a congress bill’s enactment and then embeds it using Word2Vec for the classification based on Random Forests; Nay concludes that the full text of a congress bill enables better prediction efficiency. Demographic bill cleavages. Demographic bill cleavages is a well-studied topic in the political science space. Research has properly differentiated between the multiple ways demographic background of legislators can influence roll-call voting. (Pinney and Serra, 1999) finds that Congressional Black Caucus members vote more consistently with the caucus than they do with fellow partisans or with representatives from their state. (Jenkins, 2012) discusses gender moderates the effect of party and ideology in roll-call voting. Similarly, (Frederick, 2010) discusses gender influences the roll-call vote in the Senate by moderating the effect of partisanship for GOP women. (Broach, 1972) demonstrates that urban-rural cleavages structure vote in less partisan states and on bills that clearly divide urban and rural interests. NLP applications of GCN. Recently, GCNs have been explored in different NLP tasks. Semantic role labeling (SRL) (Marcheggiani and Titov, 2017), relation classification in clinical narratives (Li et al., 2018), and machine translations (Bastings et al., 2017) are a few instances. In such tasks, GCN is used to encode syntactic structure of sentences. In a similar context, some works explored the idea of graph neural networks (GNNs) (Peng et al., 2018; Henaff et al., 2015; Defferrard et al., 2016), where each part of a document (e.g., sentences) is collapsed into a graph of words or the citation relations (Kipf and Welling, 2016) creates a network among different documents. 3 Modeling We model the legislative process as a graph that consists of bills, legislators, and money donors in all states. Building a global graph captures contextual information and relationships that interconnect different states. For instance, money donation by a contributor to two legislators from different states could indicate they have a similar roll call behaviors on abortion bills. Given this intuition, after a brief overview of the legislative process in US states, we describe how we collapse it into a graph structure. Bill Introduced First Reading Referred to Committee Second and Third Reading Conference Committee Governor Law Other Chamber Origin Chamber Second and Third Reading Referred to Committee First Reading Figure 2: Bill-to-Law stages 3.1 Primer on State-Level legislative Process Although there are some specific differences across state legislatures, a common process, shown in Figure 2, prevails. This process starts with one or more legislators (Representatives or Senators) who sponsor and file a bill. The idea of a bill could be original or come from a constituent, public official, or an interest group. Each state consists of two “chambers”: the House of Representatives (“House") and the Senate. To become law, the bill goes through a reviewing process in the origin chamber, where it can “die” at different stages. If the bill gets a pass vote, it is sent to the other chamber and the same process repeats. Finally, the bill is reviewed by the state Governor for signature. In parallel to these efforts, external contributors, e.g., money donors and lobbyists, play an important yet indirect role in the process. By sourcing information and money into the process, they leave an impact on legislators, which can change the progression of a bill. Within a chamber the process is as follows: if the leadership in the chamber chooses, the bill gets its 5361 First Reading by title. Then, the chamber president may refer the bill to a committee for review. If the committee casts a vote on the bill, it can be defeated or advance to Second Reading by the full body of legislators. Next, the chamber leadership may decide to approve the bill for Third Reading, where it again comes to a vote by the full body of legislators and a majority vote can advance the bill. Contributors Inferred Negative Donation Sponsors Nay Vote Yea Vote Legislators State Bills Positive Donation Figure 3: Collapsing the legislative process into a heterogeneous multi-relational legislative graph. 3.2 Legislative Process in a Heterogeneous Multi-Relational Graph A close look reveals that the legislative process cannot be captured in a simple graph as there can be multiple relations between a pair of nodes (e.g., sponsorship and vote between legislators and bills), and the graph consists of several nodes types with different attributes and labels (e.g., bills with competitive labels). Thus, we model the process using a heterogeneous multi-relational graph, as follows: Node attributes: The nodes in our proposed legislative graph come with a rich set of features and information: (1) Bill nodes contain title, description, and full text of the house and senate state bills. (2) Legislator nodes contain diverse textual information abstracting the behavior of a legislator such as his biography, political interests, committee assignments, and demographic attributes (gender, party, and ideology and the district information). (3) Contributors nodes come with different information (in the textual format) on money donors such as their specific and general business interests, party, and their type (individual vs non-individual). Relations: Based on the legislative process, we identify that legislator and bill nodes participate in three main relations: sponsorship (R1), negative (“Nay”) vote (R2), and positive (“Yea”) vote (R3). Similarly, we establish two types of relations between contributors and legislators: positive donation edges (R4), which are realized based on the real data, and negative or lack of donation edges (R5), inferred when a contributor shows lack of interest in specific legislators (e.g., always donates to Democrats). In this case, we randomly sample such legislators and link them to the contributor. Based on our data analysis, more than 62% of unique contributors always contribute to one party in our dataset. We also conducted an ablation study, not included due to space constraints, and the donor information contributed between 2 to 11 F1 points. 3.3 Bill Inference Problems For a bill and one of its roll calls in the legislative graph, we seek to predict if (1) it evinces identifiable voting cleavages or (2) it can advance by getting a pass. For voting cleavages, we defined four demographic attributes (gender, party, ideology, and the urban/rural nature of the district) to divide legislators into groups. We assign nine labels to each bill as follows: (1) Competitive labels: For an attribute (e.g., party), a voting round of a bill is defined as “competitive” if the majority of legislators from one group (e.g., Democrats) votes differently from the majority of the other group (e.g., Republicans). For example, in Figure 1b, 70% of Democrats vote Yea and 80% Republicans vote Nay on a roll call, then the bill is competitive and the disagreement between the groups is 10% (=80%-70%). (2) Inverse-competitive labels: Similarly, for an attribute (e.g., party), we call a voting round as inverse-competitive if there is a partial or full cleavage among the legislators of the same group. For instance, consider a bill with 55% of Democrats voting Yea and 45% of them voting Nay (Figure 1c). In this case, the bill turns out to be inverse-competitive and the disagreement is 45% (the percentage of minority votes). (3) Survival label: Depending on the progress, a bill passes a certain voting round if it gets a majority vote (e.g., in 2nd/3rd Reading) or if two-thirds of legislators agree to it (e.g., in amendments). 4 Inference on Legislative Graph We argue for a joint graph and text embedding model to represent the nodes and their textual attributes in the legislative graph, which is used for the roll-call prediction and aggregation. Embedding models that only leverage textual information ignore important relations in the legislative graph. Graph-based models make textual information less distinguishable at the classification stage, where it matters. At a high level, our approach combines the complementary strengths of both approaches. 5362 [CLS]Sara is a liberal woman..[SEP] Attributes Biography Cmte assignment [CLS] Sara is a member of… [SEP] [CLS] Sara is served on … [SEP] Average Legislator embedding Vote aggregation Relation prediction/classifier Concatenation leg leg cont bill FFNN Cleavage/ survival Graph emb. Text emb. r3 r2 r1 Node encoder layers Bill embed Leg. embed Con. embed [CLS] Emergency medications .SEP] Title Description Body [CLS] The prescription of…[SEP] [CLS] A school nurse or… [SEP] Average Bill embedding BERT encoder (c) (a) [CLS] AT&T is a republican non-individual… [SEP] [CLS] AT&T’s specific economic category.. [SEP] [CLS] AT&T’s general economic category … [SEP] Average BERT encoder BERT encoder BERT encoder Contributor embedding General industry Specific business Attributes (d) (b) BERT encoder BERT encoder BERT encoder BERT encoder BERT encoder Figure 4: Joint text-graph architecture for predicting relations in the legislative graph and aggregating vote (rollcall) of individual legislators, by leveraging text-attributed RGCN and BERT’s pretrained embeddings. Our architecture (Figure 4a) uses BERT’s pretrained embedding to represent the textual information of nodes in the graph; and text-attributed RGCN to generate an embedding for them based on their relations. Finally, we combine them to build a representation of edges in the graph for our relation prediction and then aggregate vote relations. 4.1 Text Representation Layer The lower half of our architecture is based on BERT, which leverages transformers and acts as an efficient replacement for sequential models. In our case, we use the BERT’s pretrained embedding to form an initial representation for the textual information of the nodes in the legislative graph. Bill representation: We represent a bill by averaging three different vectors (Figure 4b) corresponding to: (1) title, (2) description, and (3) body of the bill. For each of these components, we compute the average word vector based on BERT’s pretrained word embedding. Thus, the bill representation is Xbill = Avg(etitle + edescription + ebody). Legislator representation: To represent a legislator, we compute BERT’s pretrained embedding for his textual information: (1) attributes, (2) biography, and (3) committee information. Finally, we take the average of these vectors, Xlegislator = Avg(eattributes+ebiography +ecmte−info), as illustrated in Figure 4c. Contributor representation: Similarly, We transform different pieces of textual information on a contributor, i.e., party- and type-related attributes, business information, and industry data, into separate vectors, eattributes, ebusiness, eindustry and then take their average as the final representation, Xcontributor (Figure 4d). 4.2 Relational Graph Convolutional Layers We feed the text representation of the bill, legislator, and contributor nodes, as their initial representation, into Relational Graph Convolutional Network (RGCNs) to better represent them given the legislative graph structure. In parallel, a feed-forward neural network (FFNN) processes these text representations and takes them to a concatenation layer for the joint text-graph optimization. From the message passing perspective, each (non-relational) GCN layer performs two operations: propagation and aggregation. In the propagation phase, the neighborhood nodes send their feature/hidden representation to the node that needs to be updated. In the aggregation phase, the node sums up all the messages coming from its neighborhood with its properties. The aggregated message is passed through a non-linear activation function which forms the new representation of the node. If the graph edges are not typed, the hidden representation of each node i, at (l + 1)’th layer, is computed by: hil+1 = σ X j∈Ni 1 ci W lhl j ! (1) In which the weight matrix W l is shared by all edges in layer l.Also, ci is a normalization factor, which is often set to ci = |Ni|. Relational GCN (RGCN) generalizes GCNs to handle different relations between any pair of nodes, and thus being a better fit for our problem. Unlike GCNs, RGCNs use a different weight matrix and normalization factors (e.g., cr i = |Nr i |) for each relation type and thus the hidden representation of nodes in (l+1)’th 5363 Table 2: Statistics of the legislative graphs, aggregated over the 2011-2018 period. State Nodes Relations # Cont # Bills # Leg. #Cont-Leg #Leg-Bill IN 274 4818 226 17729 217026 OR 462 4884 150 29213 102463 WI 175 1320 208 5924 88004 All 911 11022 584 52866 407493 layer is computed as: hil+1 = σ W l 0hl i + X r∈R X j∈Nr i 1 ci,r W l rhl j ! (2) By having a K-layer RGCN (stacking layers onto each other), we can capture kth-order relations from a node in the graph. However, a 2-layer RGCN turns out to be sufficient in our case as it fully realizes the 2nd order relations between contributors and bills. 4.3 Roll-Call Classification and Aggregation By combining the outputs of the RGCN and FFNN, we train a model for predicting relations in the legislative graph through FFNN+softmax. One could leverage DistMult scoring functions (Schlichtkrull et al., 2018; Yang et al., 2014) as well. Next, we post-process the roll-call relations and aggregate them to form the demographic and pass/fail vote breakdowns and determine the final class labels. In more detail, the representation of an edge or relation (s, d) is the dot product of ejoint s and ejoint d , which are the embedding of the corresponding nodes. The representation of a node comes from the concatenation of two components: (1) text embedding (hidden states) coming from the BERT layer after being fine-tuned through FFNN, and (2) the graph embedding (hidden state of the node) from the last RGCN layer. Loss function: At a high level, our loss function is L = LCls + LText + LGraph and jointly optimizes the text and graph embeddings as well as the relation prediction and roll-call aggregation. LCls is the cross-entropy loss of the relation prediction; LGraph and LText are the L2 regularizations of RGCN’s and FFNN’s weights that generate the graph and text representations, respectively. 5 Experiments In this section, we describe our comprehensive legislative dataset, combining different sources of data (e.g., money donors data, diverse information on Table 3: Legislators’ attributes across the target states aggregated over the 2011-2018 period—UR: Urban, RU: Rural, C: Conservative, M: Moderate, L: Liberal. State Gender Party Geography Ideology F M D R UR RU C M L IN 50 176 67 159 161 64 125 94 7 OR 47 103 83 67 133 17 28 61 61 WI 51 157 84 124 160 48 78 49 81 All 148 436 234 350 454 129 231 204 149 legislators). Table 2 shows the statistics of our dataset after pruning and forming the legislative graph (discussed in Section 3). Next, we focus on our joint embedding model and its great ability in outperforming existing prediction models. 5.1 Data Collection Methodology & Statistics Bills and legislator data. From the LegiScan website (LegiScan, 2019), we collected data on the text and lower chamber disposition of all bills introduced in Indiana, Oregon, and Wisconsin from the 2011 through 2018 sessions. To do so, we developed a comprehensive crawler in Python that performs multiple operations. First, it uses the LegiScan API to collect legislative information on every bill that covers: (1) bill metadata that includes the bill type, title, description, sponsors, and links to its texts; (2) vote metadata that consists of the individual legislator’s vote – “Yea,” “Nay,” “Absent,” or “NV”; and (c) legislator metadata containing party and district information. Then, our crawler accurately converts bill texts that are stored in the PDF format to text files, using open-source libraries. To identify the fine-grained progression of bills in the legislative process, our crawler downloads and processes the “History” section of each bill on the LegiScan Website, which consists of a series of events associated with a bill’s history (e.g., committee report, roll-call vote). Such information is not readily available in the LegiScan API. Overall, we collected 34443 bills introduced in the target states from 2011 to 2018. We studied 58% of the bills that had both the votes of individual legislators and full texts, which are necessary for determining vote breakdowns and cleavage labels; However, our focus in this paper is on the 2nd/3rd reading, in which all members of the chambers vote, so we selected 32% of the bills that reached this stage to build the legislative graph (Table 2). Biography, ideology and geography data. Finally, our crawler uses Ballotpedia (Ballotpedia, 2019) to collect texts on each legislator’s biography, political interests, and committee assignments. 5364 Also, it aggregates other publicly available datasets to identify each legislator’s attributes such as ideology, gender, and district nature (urban/rural). The ideology scores for legislators were taken (Shor and McCarty, 2011) and they were grouped into conservatives, moderates, and liberals. The district identifier was combined with GIS census data (Census, 2019) to categorize each legislator as representing either an urban or rural district.Table 3 shows the breakdown of legislators’ party, gender, ideology, and district information in our target states. For less than 10% of legislators, Ballotpedia profiles were missing. Thus, we used other public textual information about them (e.g., Twitter). Donors data: FollowTheMoney (FollowTheMoney, 2019) captures and keeps tracks of donations to legislators and candidates in the US states. Our crawler consumes the FollowTheMoney API to collect the information of donors for each legislator and cosponsors of our bills. This includes multiple textual attributes and information for each contributor: type that could be individual or nonindividual, general party, and economic and business information. While the contributor data can be used in more sophisticated ways, in this work, we focused on major contributors by setting a donation threshold ($10000) and removing those who contributed to a single legislator; We also separated between ideological contributors and pragmatic ones (donating to both parties) by inferring “negative” (lack of) donation relations (see Section 3); We set the fraction of negative donations to 30% of the positive ones extracted from the real data. Table 2 shows the final per-state statistics of contributors. 5.2 Experimental Setup We build different graph and textual models on top of PyTorch, DGL (Deep Graph Library), and spaCy. In our joint text-graph model (Figure 4) and other baselines, the initial embedding dimension of both BERT (“bert-large-uncased”) and the first-layer RGCN are set to 1024. The FFNN (fully connected layer) and the second-layer RGCN take the initial text and graph embeddings to a 256dimensional space. We have also experimented with different settings, which while resulting in lower overall performance, retained the same trend when comparing the other models. We used Adam to optimize our model and for each observed relation (Table 2), we sampled a negative example. Data splits. Our focus is on the bill cleavage and survival and thus we split legislative graphs based on bill nodes. To evaluate different scenarios, we have three configurations: (1) random where we select 20% of the bills for testing and keep the rest for training and validation. (2) time-based where 20% of most recent bills are considered for testing; and (3) state-based: where the test bills come from one specific state and train bills from the other states. The test bills and corresponding legislators appear in the test graph, and the difference of the original and test graphs is used for training. Note that vote relations of sponsoring legislators and a bill are known, and appear in training. Metric. Given the highly skewed data when predicting bill survival and cleavages, we pick Macro F1 as the main metric over accuracy. 5.2.1 Baselines To demonstrate the benefits of our joint text-graph embedding, we implement a series of text and graph embedding architectures as the baseline. Category 1: text embedding models: We realize our bill encoder (Figure 4b) using three text embedding models and then train a logistic regression classifier to directly predict if a bill text shows a certain cleavage or passes/fails: (a) BoW, where unigram and bigram features (top 10K highest scoring ones using scikit-learn (Pedregosa et al., 2011)) used to represent bill texts. (b) GloVe (Pennington et al., 2014) that is a popular word embedding model using the square loss; We used the GloVe-840B-300D pre-trained word vectors in our experiments. (c) BERT (Devlin et al., 2018) that is a transformer based architecture capable of capturing contextualized embedding. Category 2: featureless graph embedding models: We build a edge classifier over edge embeddings generated by models that assume nodes in the legislative graph are homogeneous and featureless, and then aggregate the roll call results: (a) DeepWalk (Perozzi et al., 2014) is an embedding model that generates node vectors by running Skip-Gram on random walks formed at different nodes in the graph. (b) GCN (Kipf and Welling, 2016) is the basic two-layer GCN model that uses a single weight matrix in each layer and begins with the random node features in the first layer. (c) RGCN (Schlichtkrull et al., 2018) is the relational version of the GCN that captures different relations in our legislative graph. Category 3: text-attributed (TA) graph embedding models: We use the same edge classifier 5365 Table 4: Macro-F1 in bill survival and cleavage prediction for the random split and known sponsors’ relations. Embedding Pass/ Fail Competitive Inverse-Competitive Party Gender Ideology Geography Party Gender Ideology Geography Naive Majority 0.47 0.44 0.46 0.44 0.46 0.48 0.47 0.45 0.47 Sponsor 0.51 0.43 0.43 0.41 0.43 0.44 0.45 0.41 0.45 Textbased BoW 0.63 0.64 0.64 0.65 0.60 0.58 0.60 0.57 0.62 GloVe 0.65 0.67 0.66 0.67 0.61 0.57 0.62 0.60 0.63 BERT 0.68 0.70 0.72 0.69 0.66 0.58 0.64 0.62 0.67 Featureless Graph DeepWalk 0.49 0.52 0.50 0.54 0.56 0.52 0.50 0.52 0.51 GCN 0.49 0.53 0.51 0.55 0.57 0.52 0.51 0.53 0.52 RGCN 0.57 0.57 0.53 0.55 0.59 0.54 0.52 0.55 0.56 Text Attributed Graph TA-DeepWalk 0.66 0.67 0.64 0.68 0.60 0.53 0.62 0.55 0.71 TA-GCN 0.67 0.67 0.65 0.66 0.61 0.52 0.62 0.54 0.72 TA-RGCN 0.72 0.69 0.65 0.71 0.63 0.56 0.64 0.57 0.72 Joint Graph+Text 0.82 0.83 0.79 0.82 0.73 0.64 0.78 0.65 0.78 but use the graph models that can consume the text-based node features generated by our BERTbased node encoders: (a) TA-DeepWalk (Yang et al., 2015) that changes the graph factorization in DeepWalk to support node features. (b) TA- GCN (Kipf and Welling, 2016) is the original GCN that takes as input an initial node features. (c) TA-RGCN (Schlichtkrull et al., 2018) is a relational GCN that captures node features initialized by our text-based node encoders. Category 4: naive baselines. We evaluate two other naive classifiers: (a) Majority: A baseline predicting the most frequent class in the training data: (b) Sponsor: A logistic regression classifier that directly predicts bill survival and cleavages based on the one-hot encoded sponsors’ info. encoded. 5.3 Results and Analysis Performance of different textual and graph models. Table 4 shows macro F1 for different bill cleavages and pass/fail. We first analyze the performance of different models in each category: (1) Among the naive models, the sponsor-based classifier improves the bill survival prediction compared to the majority model but has no positive impact on bill cleavages as expected intuitively. (2) In the textual models, we observe BERT improves the F1 performance by 2%-8% compared to GloVe and BoW. By leveraging a bidirectional operation, BERT more efficiently captures the context of each word in the bill title, summary, and body. (3) In the featureless graph models, RGCN consistently outperforms the standard GCN and DeepWalk models as it treats each of the relations in the legislative graph (e.g., donation and voting) differently and does not mix their weight matrices with each other. This benefit of RGCN is entirely enabled by our new dataset that explicitly tracks different legislative relations; (4) Unlike the second category, the text attributed graph models capture implicit relations between different nodes in the graph through their text features. By leveraging our node encoders, they begin with better initial representations of the nodes and relations (e.g., particularly votes) and thus provide an improvement by up to 15% in the performance compared to their featureless counterparts. (5) Finally, our proposed model by combining and jointly optimizing the graph and textual representations consistently provides a higher F1 score. Compared to the other models, it improves recall while maintaining high precision, e.g., in the case of the bill survival prediction, the macro precision and recall values for BERT, TA-RGCN, and our model are (0.72, 0.67), (0.92, 0.66), (0.82, 0.84), respectively. Language and implications of different cleavages. We can make a few observations: it is slightly more challenging to identify inverse-competitive bills compared to competitive ones. This happens across different graph and text models, and thus indicating the language of these bills and the dynamics of relations behind them is rather complex. To help provide an intuition, we summarized in Table 6 the top bigrams and unigrams used in competitive and inverse-competitive bills across the different cleavages. Interestingly, the top n-grams of competitive bills align better with the cleavages (e.g., “abortion” is competitive both based on ideology and gender) compared to the top inversecompetitive n-grams, which often focus on mundane issues such as taxes and services, suggesting that when non-polarizing legislation is discussed, group agreement takes a secondary role. From another angle, Figure 5 further illustrates the differences between these two categories of cleavages. Overall, there are 10%-20% more com5366 Table 5: Macro F1 for bill survival and party cleavages for the best model in each category based on the stateand time-based data splits. Embedding State-based (Test: IN) State-based (Test: OR) Time-based (Test: 20%) Pass/fail Comp. Inverse. Comp Pass/fail Comp Inverse Comp Pass/fail Comp. Inverse Comp. Naive (Majority) 0.47 0.44 0.45 0.46 0.45 0.44 0.48 0.45 0.46 Text-based (BERT) 0.63 0.64 0.53 0.61 0.64 0.54 0.67 0.67 0.57 Featureless Graph(RGCN) 0.52 0.52 0.50 0.51 0.50 0.51 0.54 0.54 0.52 Text-Attributed Graph (TA-RGCN) 0.60 0.62 0.53 0.62 0.61 0.52 0.67 0.68 0.55 Joint Graph+Text 0.70 0.72 0.58 0.70 0.70 0.58 0.73 0.76 0.61 Table 6: Most frequent unigrams and bigrams of competitive and inverse-competitive bills. Type Unigram/Bigram Comp. Party law, fund, abortion, political subdivision, providing penalty, badger-care plus, parental choice Gender abortion, child, medical, school, providing penalty, motor vehicle, minimum wage, parental choice Ideology income, abortion, insurance, drugs, local government, retirement system, natural resources, political subdivision Geography county, service, commission, district, transportation, housing, residential, state financial, criminal history, restroom facility, greenhouse gas Inv-comp. Party state, program, motor vehicle, real estate, study committee, education matters, Gender financial, emergency, permits, legislative council, economic development, criminal penalty Ideology tax, services, county, criminal, alcoholic beverages, board education, commission declaring Geography law, school corporation, property tax, unemployment insurance 0 0.1 0.2 0.3 Party Gender Geography Ideology % total bills Cleavage Competitve Inverse-competitive Figure 5: Distribution of competitive and inversecompetitive bills before split over 2011-2018. petitive bills compared to inverse-competitive ones under the party and ideology attributes, indicating cross-group disagreements (e.g., conservatives VS. moderates VS. liberals) are more likely than intragroup disagreement. This pattern is reversed for the gender and geography attributes. Implication of state- and time-based data splits. For the pass/fail and party cleavages with the best model in each category, Table 5 shows a sharp drop in the F1 score for the state-based and time-based data split, particularly for graph-based models (RGCN and TA-RGCN). By training the model with the two states and testing it with another one, the graph-based embedding models are challenged with representing many unseen legislators. While GCN-based solutions are capable of creating such representations in the test time (using the same weight matrix), they are sub-optimal particularly in featureless GCN settings. One interesting observation is that when the model is tested with the OR data, the drop is even sharper as OR tends to be a democratic state; While WI and IN are often republican states. For the time-based data split, we observe a similar but slightly better performance as the number of unseen nodes are fewer. In all these different configurations, our joint model still improves the F1 score but it is limited on how the underlying graph model behaves. 6 Summary In this paper, we take the first step towards understanding the dynamics of state-level legislative processes in the US through a data-driven approach. We proposed to collapse the legislative process into a heterogeneous multi-relational graph and suggest several tasks for capturing disagreement over several ideological and demographic cleavages, as well as predicting the outcome of the legislative process. We approach these problems by formulating them as aggregate roll-call prediction. To fully realize the potential of graph-based modeling, we created a new dataset, used to characterize the real-world context in which the legislative process takes place, consisting of bills, donors, and legislators and their behavior. We model the rich relationship between these entities and the content of the bills using a joint text and graph prediction model on top of BERT and RGCN, outperforming each one of the models in isolation. References Ballotpedia. 2019. State-level political encyclopedia data. https://ballotpedia.org/. 5367 Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. arXiv preprint arXiv:1704.04675. Glen T Broach. 1972. A comparative dimensional analysis of partisan and urban-rural voting in state legislatures. The Journal of Politics, 34(3):905–921. Aditya Budhwar, Toshihiro Kuboi, Alex Dekhtyar, and Foaad Khosmood. 2018. Predicting the vote using legislative speech. In Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age. GIS Census. 2019. Gis census data. https://www. nhgis.org/. Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Vlad Eidelman, Anastassia Kornilova, and Daniel Argyle. 2018. How predictable is your state? leveraging lexical and contextual information for predicting legislative floor action at the state level. arXiv preprint arXiv:1806.05284. FollowTheMoney. 2019. State-level contributor data. https://www.followthemoney.org/. Brian Frederick. 2010. Gender and patterns of roll call voting in the us senate. In Congress & the Presidency, volume 37, pages 103–124. Taylor & Francis. Sean Gerrish and David M Blei. 2011. Predicting legislative roll calls from text. In Proceedings of the 28th international conference on machine learning (icml-11), pages 489–496. Sean Gerrish and David M Blei. 2012. How they vote: Issue-adjusted models of legislative behavior. In Advances in Neural Information Processing Systems, pages 2753–2761. Mikael Henaff, Joan Bruna, and Yann LeCun. 2015. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163. Mohit Iyyer, Peter Enns, Jordan Boyd-Graber, and Philip Resnik. 2014. Political ideology detection using recursive neural networks. In Association for Computational Linguistics. Shannon Jenkins. 2012. How gender influences roll call voting. Social Science Quarterly, 93(2):415– 433. Hamid Karimi, Tyler Derr, Aaron Brookhouse, and Jiliang Tang. 2019. Multi-factor congressional vote prediction. Advances in Social Networks Analysis and Mining (ASONAM). Kevin King. 2019. State Legislatures Vs. Congress: Which Is More Productive? http://bit.ly/ 30YsKwT. [Online; accessed 19-July-2019]. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Anastassia Kornilova, Daniel Argyle, and Vladimir Eidelman. 2018. Party matters: Enhancing legislative embeddings with author attributes for vote prediction. In Proceedings of ACL. Peter Kraft, Hirsh Jain, and Alexander M Rush. 2016. An embedding model for predicting roll-call votes. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. LegiScan. 2019. State-level legislative data. https: //legiscan.com/. Yifu Li, Ran Jin, and Yuan Luo. 2018. Classifying relations in clinical narratives using segment graph convolutional and recurrent neural networks (SegGCRNs). Journal of the American Medical Informatics Association, 26(3):262–268. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. arXiv preprint arXiv:1703.04826. John J Nay. 2017. Predicting and understanding lawmaking with word vectors and an ensemble model. PLoS ONE, 12(5):e0176999. Pallavi Patil, Kriti Myer, Ronak Zala, Arpit Singh, Sheshera Mysore, Andrew McCallum, Adrian Benton, and Amanda Stent. 2019. Roll call vote prediction with knowledge augmented models. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 574– 581. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825–2830. Hao Peng, Jianxin Li, Yu He, Yaopeng Liu, Mengjiao Bao, Lihong Wang, Yangqiu Song, and Qiang Yang. 2018. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In Proceedings of the 2018 World Wide Web Conference, pages 1063–1072. International World Wide Web Conferences Steering Committee. 5368 Tai-Quan Peng, Mengchen Liu, Yingcai Wu, and Shixia Liu. 2016. Follower-followee network, communication networks, and vote agreement of the us members of congress. Communication Research, 43(7):996–1024. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVE: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. Neil Pinney and George Serra. 1999. The congressional black caucus and vote cohesion: Placing the caucus within house voting patterns. Political Research Quarterly, 52(3):583–607. Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593–607. Springer. Boris Shor and Nolan McCarty. 2011. The ideological mapping of american legislatures. American Political Science Review. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575. Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Chang. 2015. Network representation learning with rich text information. In TwentyFourth International Joint Conference on Artificial Intelligence. Tae Yano, Noah A Smith, and John D Wilkerson. 2012. Textual predictors of bill survival in congressional committees. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 793–802. Association for Computational Linguistics.
2020
476
Would you Rather? A New Benchmark for Learning Machine Alignment with Cultural Values and Social Preferences †Yi Tay∗, ♭Donovan Ong∗, ♯Jie Fu, †Alvin Chan, ♭Nancy F. Chen ∗φLuu Anh Tuan, ♯‡Christopher Pal †Nanyang Technological University, Singapore ♯Polytechnique Montreal, Mila, ‡Canada CIFAR AI Chair ♭A*STAR, Singapore, ∗MIT CSAIL, φVinAI Research [email protected], [email protected] Abstract Understanding human preferences, along with cultural and social nuances, lives at the heart of natural language understanding. Concretely, we present a new task and corpus for learning alignments between machine and human preferences. Our newly introduced problem is concerned with predicting the preferable options from two sentences describing scenarios that may involve social and cultural situations. Our problem is framed as a natural language inference task with crowd-sourced preference votes by human players, obtained from a gamified voting platform. We benchmark several state-of-the-art neural models, along with BERT and friends on this task. Our experimental results show that current state-ofthe-art NLP models still leave much room for improvement. 1 Introduction The ability to understanding social nuances and human preferences is central to natural language understanding. This also enables better alignment of machine learning models with human values, eventually leading to better human-compatible AI applications (Peterson et al., 2019; Leslie, 2019; Rosenfeld and Kraus, 2018; Amodei et al., 2016; Russell and Norvig, 2016). There exist a plethora of work on studying optimal decision-making under a variety of situations (Edwards, 1954; Bottom, 2004; Plonsky et al., 2019; Peterson et al., 2019). On the other hand, cognitive models of human decision-making are usually based on small datasets (Peterson et al., 2019). Furthermore, these studies tend to only consider individuals in isolation. In contrast, we investigate the influence of cultural and social nuances for choice prediction at scale. In other words, we study the social preference as a whole, ∗First two authors contributed equally not those of an individual in isolation, which is arguably more challenging and largely unexplored. In this work, we propose a new benchmark dataset with a large number of 200k data points, Machine Alignment with Cultural values and Social preferences (MACS), for learning AI alignment with humans. Our dataset is based on a popular gamified voting platform, namely the game of ‘would you rather?’. In this game, participants are given two choices and vote for the more preferable option. Examples from our dataset can be found at Table 1. To the best of our knowledge, our work is the first work to incorporate voting-based language games as a language understanding benchmark. In many ways, our benchmark dataset is reminiscent of the natural language inference problem (MacCartney, 2009; Bowman et al., 2015), social commonsense reasoning (Sap et al., 2019) or other natural language understanding problems (Wang et al., 2018; Zellers et al., 2018). To this end, our problem is framed in a way that enables convenient benchmarking of existing state-of-the-art NLU models such as BERT (Devlin et al., 2018) or RoBERTa (Liu et al., 2019). That said, unlike many NLU datasets that rely on few annotators, the key differentiator lies in the fact that our dataset aggregates across hundreds or thousands and beyond for each data point. Options are also crowd-sourced and gamified which may encourage less monotonic samples, ie., encouraging players to come up with questionss that are difficult for other players. Additionally, our dataset comprises of country-level statistics, which enable us to perform cultural-level prediction of preferences. Our Contributions All in all, the prime contribution of this work is as follows: • We propose a new NLU benchmark based on an online gamified voting platform. • We propose several ways to formulate the problem, including absolute and relative preference prediction. We also introduce a cultural-level NLU problem formulation. • We investigate state-of-the-art NLU models such as BERT (Devlin et al., 2018), RobERTA (Liu et al., 2019) and XLNET (Yang et al., 2019) on this dataset. Empirical results suggests that our benchmark is reasonably difficult and there is a huge room for improvement. 2 Learning Alignment with Human Preferences This section describes the proposed dataset and problem formulation. 2.1 Dataset We look to crowdsourcing platforms to construct our dataset. Our dataset is constructed from https://www.rrrather.com/, an online platform1 for gamified voting. The platform is modeled after the famous internet game - would you rather?, which pits two supposedly comparable choices together. Whenever a player votes, their vote is recorded in the system. Players generally vote to see how well their vote aligns with the majority and consensus with everyone else. We provide samples of the problem space in Table 1. We crawled data from the said platform and filtered away posts with less than 500 total votes. In total, we amassed 194,525 data points, which we split into train/dev/test splits in an 80/10/10 fashion. Dataset statistics are provided in Table 2. Train Dev Test Total Data 155,621 19,452 19,452 194,525 ℓmax 678 351 298 ℓmean 8 8 8 ℓmin 1 2 2 Table 2: Dataset statistics of the MACS dataset. 2.2 Why is this interesting? This section outlines the benefits of our proposed dataset as a language understanding benchmark. 1The authors have obtained written permission from the owner of the platform to crawl and use their data for academic research. The questions, answers or discussions do not represent opinions of the authors in this paper. (1) Understanding before Interaction. In our dataset and problem formulation, complex understanding of each option text is often required first before modeling the relative preference between two options. This is unlike NLI or questionanswering based NLU benchmarks, where matching signals can be used to predict the outcome easily. In our dataset and task, it is imperative that any form of word overlap can be hardly used to determine the outcome. (2) A good coverage of social preferences. Upon closer inspection of our proposed benchmark, we find there is a good representation of samples which cover social and cultural themes. Social preferences (such as the preference of brands) are captured in samples such as example (6). (3) Completely natural. Our MACS dataset completely exists in the wild naturally. This is unlike datasets that have to be annotated by mechanical turkers or paid raters. In general, there is a lack of incentives for turkers to provide highquality ratings, which often results in problems such as annotation artifacts. Unlike these datasets, our MACS dataset completely exists in the wild naturally. The choices are often created by other human players. Hence, in the spirit of competitiveness, this means that the data is meant to be deliberately challenging. Moreover, there are at least 500 annotators for each sample, which makes the assigned label less susceptible to noisy raters. 2.3 Problem Formulation Given Q (prompt), two sentences S1 and S2 and V (.) which computes the absolute votes to each option, we explore different sub-tasks (or variant problem formulation). Predicting Preference This task is primarily concerned with predicting if V (S1) > V (S2) or otherwise. Intuitively, if a model is able to solve this task (perform equivalent to a human player), we consider it to have some fundamental understanding of human values and social preferences. We frame this task in two ways. The first is a straightforward binary classification problem, i.e., V (S1) > V (S2). The second task is a three-way classification problem with a third class predicting if the difference |V (S1) −V (S2)| is less than 5% of the total votes. In short, this means that two options are almost in a draw. Prompt Option A Option B (1) Would you rather fit into any group but never be popular only fit into the popular group (2) Would you rather have no one attend your funeral wedding (3) Would you rather have free starbucks for an entire year free itunes forever (4) Would you rather Look unhealthy and unattractive, but be in perfect health. Be absolutely beautiful and look healthy, but be in extremely bad health. (5) Would you rather Win the lottery Live twice as long (6) Would you rather have a Mac a PC (7) Would you rather spend the day Surfing on the ocean Surfing the Internet Table 1: Samples from our MACS dataset. Standard Cultural Binary Three-way Binary Three-way Model Dev Test Dev Test Dev Test Dev Test BERT 61.02 60.38 56.71 55.85 62.42 62.88 57.42 58.21 XLNEt 56.12 56.84 55.72 56.34 51.77 51.42 57.08 57.39 RoBERTa 64.75 64.15 61.04 61.19 64.39 64.71 59.28 61.22 Table 3: Experimental results on predicting preference (standard and cultural) with BERT (Devlin et al., 2018), XLNEt (Yang et al., 2019) and RoBERTa (Liu et al., 2019) on MACS dataset. Predicting Cultural Preferences We consider a variant of the preference prediction problem. Our MACS dataset has culture-level preference votes which are the voting scores with respect to a particular cultural demographic. We extend the same setting as Task 1 by requiring the model to produce culture-level predictions. In order to do this, we prepend the input sentence with a culture embedding token. For example, Input = [Culture] + [Choice A] + [Sep] + [Choice B]. The task is identical, predicting the greater of Choice A OR Choice B, with respect to the cultural ground truth. The dataset is augmented at the culture level and the same example is duplicated for each culture, e.g., we duplicate the sample for countries ’USA’ and ’Europe’. We consider only culturelevel votes with a threshold above 25 votes in the dataset for train/dev/test sets. Predicting Relative Preference The third variant is a fine-grained regression task where we want to identify if our model is able to learn the extent of preference given by human players. This problem is framed as a regression problem that is normalized from [0, 1] with respect to the total number of votes in the data point 3 Experiments This section outlines our experimental setup and results. 3.1 Experimental Setup We implement and run several models on this dataset. (1) BERT (Devlin et al., 2018) - Deep Bidirectional Transformers is the state-of-the-art pretrained transformer model for a wide range of NLP tasks. (2) XLNet (Yang et al., 2019) is a large pretrained model based on Transformer-XL. (3) RoBertA (Liu et al., 2019) is a robustly optimized improvement over the vanilla BERT model. All models were run using the finetune methodology using the standard Pytorch Huggingface2 repository. We train (finetune) all models for 3 epochs using the default hyperparameters.. Metrics The evaluation metrics for classification tasks is the standard accuracy score. For regression tasks, we use the correlation, Pearson, and Spearman metrics. 3.2 Experimental Results Table 3 reports our results on binary and three-way classification on the MACS dataset. In general, we find that RoBERTa performs the best. However, in most cases, the performance of all three models still leaves a lot to be desired. An accuracy of 60%+ shows that state-of-the-art models still struggle at this task. On the other hand, results on regression task are also similarly lacklustre, and 2https://github.com/huggingface/ transformers Dev Test Model Correlation Pearson Spearman Correlation Pearson Spearman BERT 0.234 0.256 0.214 0.229 0.250 0.208 XLNEt 0.225 0.243 0.206 0.228 0.250 0.206 RoBERTa 0.258 0.279 0.236 0.256 0.278 0.235 Table 4: Experimental results on predicting relative preference on MACS dataset. Prompt Option A Option B Vote A Vote B Pred (1) Would you rather be happy and with friends popular and without friends 95.39% 4.61%  (2) Would you rather.... Own a self refilling fridge. Have a self cleaning bedroom. 74.10% 25.9%  (3) Which art style do you prefer Photography Poetry 69.62% 30.38%  (4) Would you rather Be A Millionare Be the kindest, loveing most talented human being living and will ever live 47.32% 52.68%  (5) Would you rather Be the first to invent an Invisibility cloak Be the first to invent a Teleportation device 47.32% 52.68%  Table 5: Model predictions from MACS dataset using finetuned BERT. show that models like BERT and RoBERTa are unable to perform well on this task. On a whole, it is good to note that RoBERTa performs the best out of the three compared models. Overall, this encourages further research on cultural and social commonsense reasoning in the current state-of-the-art in natural language understanding. All in all, we hope our benchmark serves as a useful tool for understanding the social capabilities of these models. 3.3 Qualitative Evaluation Table 5 reports some sample of our model outputs, shedding light on examples in which our model does well and otherwise. We observe that the model often gets the answer wrong even when the ground truth is overwhelmingly swayed towards one side. On the other hand, occasionally, we also observe that the model can get questionable questions such as (4) and (5) correctly even despite the tight draw between human voters. 4 Conclusion We propose MACS (Machine Alignment with Cultural and Social Preferences), a new benchmark dataset for learning machine alignment with human cultural and social preferences. MACS encompasses and requires social and cultural reasoning to solve and an overall holistic understanding of humanity. It is designed to be challenging where state-of-the-art NLP models still struggle at ≈60%. Broader Impact In this paper, we are not promoting the use of https://www.rrrather.com/ as the training source, but rather the study of the alignment of machine learning models with social preference of a large population. Unfortunately, there might be some issues of bias, fairness and representation due to the curation of the training data from Internet, which might lead models to give prejudiced or stereotyped outputs. Evaluating bias, fairness and representation in language models and the training data is an important research area (Nadeem et al., 2020; Huang et al., 2019). As for future works, it is important to characterize and intervene biases when designing such tasks. References Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Man´e. 2016. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565. William P Bottom. 2004. Heuristics and biases: The psychology of intuitive judgment. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Ward Edwards. 1954. The theory of decision making. Psychological bulletin, 51(4):380. Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. 2019. Reducing sentiment bias in language models via counterfactual evaluation. arXiv preprint arXiv:1911.03064. David Leslie. 2019. Human compatible: Artificial intelligence and the problem of control. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Bill MacCartney. 2009. Natural language inference. Citeseer. Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. Stereoset: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456. Joshua Peterson, David Bourgin, Daniel Reichman, Thomas Griffiths, and Stuart Russell. 2019. Cognitive model priors for predicting human decisions. In International Conference on Machine Learning, pages 5133–5141. Ori Plonsky, Reut Apel, Eyal Ert, Moshe Tennenholtz, David Bourgin, Joshua C Peterson, Daniel Reichman, Thomas L Griffiths, Stuart J Russell, Evan C Carter, et al. 2019. Predicting human decisions with behavioral theories and machine learning. arXiv preprint arXiv:1904.06866. Ariel Rosenfeld and Sarit Kraus. 2018. Predicting human decision-making: From prediction to action. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(1):1–150. Stuart J Russell and Peter Norvig. 2016. Artificial intelligence: a modern approach. Malaysia; Pearson Education Limited,. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326.
2020
477
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5374–5386 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5374 Discourse as a Function of Event: Profiling Discourse Structure in News Articles around the Main Event Prafulla Kumar Choubey1 Aaron Lee1 Ruihong Huang1 Lu Wang2 1 Department of Computer Science and Engineering, Texas A&M University 2 Khoury College of Computer Sciences, Northeastern University (prafulla.choubey, aaronlee, huangrh)@tamu.edu (luwang)@ccs.neu.edu Abstract Understanding discourse structures of news articles is vital to effectively contextualize the occurrence of a news event. To enable computational modeling of news structures, we apply an existing theory of functional discourse structure for news articles that revolves around the main event and create a human-annotated corpus of 802 documents spanning over four domains and three media sources. Next, we propose several documentlevel neural-network models to automatically construct news content structures. Finally, we demonstrate that incorporating system predicted news structures yields new state-of-theart performance for event coreference resolution. The news documents we annotated are openly available and the annotations are publicly released for future research1. 1 Introduction Detecting and incorporating discourse structures is important for achieving text-level language understanding. Several well-studied discourse analysis tasks, such as RST (Mann and Thompson, 1988) and PDTB style (Prasad et al., 2008) discourse parsing and text segmentation (Hearst, 1994), generate rhetorical and content structures that have been shown useful for many NLP applications. But these widely applicable discourse structures overlook genre specialties. In this paper, we focus on studying content structures specific to news articles, a broadly studied text genre for many NLP tasks and applications. We believe that genre-specific discourse structures can effectively complement genre independent discourse structures and are essential for achieving deep story-level text understanding. What is in a news article? Normally, we expect a news article to describe well verified facts of newly 1Dataset can be found at https://github.com/ prafulla77/Discourse_Profiling happened events, aka the main events. However, almost no news article limits itself to reporting only the main events. Most news articles also report context-informing contents, including recent precursor events and current general circumstances, that are meant to directly explain the cause or the context of main events. In addition, they often contain sentences providing further supportive information that is arguably less relevant to main events, comprising of unverifiable or hypothetical anecdotal facts, opinionated statements, future projections and historical backgrounds. Apparently, the relevance order of sentences is not always aligned with their textual order, considering that sentences in a news article are ordered based on their vague importance that is generally determined by multiple factors, including content relevance as well as other factors such as the focus of an article, the author’s preferences and writing strategies. While a number of theoretical studies for news discourse exist, little prior effort has been put on computational modeling and automatic construction of news content structures. We introduce a new task and a new annotated text corpus for profiling news discourse structure that categorizes contents of news articles around the main event. The NewsDiscourse corpus consists of 802 news articles (containing 18,155 sentences), sampled from three news sources (NYT, Xinhua and Reuters), and covering four domains (business, crime, disaster and politics). In this corpus, we label each sentence with one of eight content types reflecting common discourse roles of a sentence in telling a news story, following the news content schemata proposed by Van Dijk (Teun A, 1986; Van Dijk, 1988a,b) with several minor modifications. Next, we present several baselines for automatically identifying the content type of sentences. The experimental results show that a decent performance can be obtained using a basic neural 5375 network-based multi-way classification approach. The sentence classification performance can be further improved by modeling interactions between sentences in a document and identifying sentence types in reference to the main event of a document. We envision that the news discourse profiling dataset as well as the learnt computational systems are useful to many discourse level NLP tasks and applications. As an example, we analyze correlations between content structures and event coreference structures in news articles, and conduct experiments to incorporate system predicted sentence content types into an event coreference resolution system. Specifically, we analyze the lifespan and spread of event coreference chains over different content types, and design constraints to capture several prominent observations for event coreference resolution. Experimental results show that news discourse profiling enables consistent performance gains across all the evaluation metrics on two benchmark datasets, improving the previous best performance for the challenging task of event coreference resolution. 2 Related Work Several well-studied discourse analysis tasks have been shown useful for many NLP applications. The RST (Mann and Thompson, 1988; Soricut and Marcu, 2003; Feng and Hirst, 2012; Ji and Eisenstein, 2014; Li et al., 2014a; Liu et al., 2019) and PDTB style (Prasad et al., 2008; Pitler and Nenkova, 2009; Lin et al., 2014; Rutherford and Xue, 2016; Qin et al., 2016; Xu et al., 2018) discourse parsing tasks identify discourse units that are logically connected with a predefined set of rhetorical relations, and have been shown useful for a range of NLP applications such as text quality assessment (Lin et al., 2011), sentiment analysis (Bhatia et al., 2015), text summarization (Louis et al., 2010), machine translation (Li et al., 2014b) and text categorization (Ji and Smith, 2017). Text segmentation (Hearst, 1994; Choi, 2000; Eisenstein and Barzilay, 2008; Koshorek et al., 2018) is another well studied discourse analysis task that aims to divide a text into a sequence of topically coherent segments and has been shown useful for text summarization (Barzilay and Lee, 2004), sentiment analysis (Sauper et al., 2010) and dialogue systems (Shi et al., 2019). The news discourse profiling task is complementary to the well-established discourse analysis tasks and is likely to further benefit many NLP applications. First, it studies genre-specific discourse structures, while the aforementioned discourse analysis tasks study genre independent general discourse structures and thus fail to incorporate domain knowledge. Second, it focuses on understanding global content organization structures with the main event at the center, while the existing tasks focus on either understanding rhetorical aspects of discourse structures (RST and PDTB discourse parsing) or detecting shallow topic transition structures (text segmentation). Genre-specific functional structures have been studied based on different attributes, but mostly for genres other than news articles. Liddy (1991), Kircz (1991) and Teufel et al. (1999) used rhetorical status and argumentation type to both define functional theories and create corpora for scientific articles. Mizuta et al. (2006), Wilbur et al. (2006), Waard et al. (2009) and Liakata et al. (2012) extensively studied functional structures in biological domain with multiple new annotation schemata. Past studies on functional structures of news articles have been mainly theoretical. Apart from Van Dijk’s theory of news discourse (Teun A, 1986; Van Dijk, 1988b), Pan and Kosicki (1993) proposed framing-based approach along four structural dimensions: syntactic, script, thematic and rhetorical, of which syntactic structure is similar to the Dijk’s theory. Owing to the high specificity of the Dijk’s theory, Yarlott et al. (2018) performed a pilot study for its computational feasibility and annotated a small dataset of 50 documents taken from the ACE Phase 2 corpus (Doddington et al., 2004). However, as mentioned in the paper, their annotators were given minimal training prior to annotations, consequently, the kappa inter-agreement (55%) between two annotators was not satisfactory. In addition, coverage of their annotated dataset on broad event domains and media sources was unclear. The only studies on functional structure of news article with sizable dataset include Baiamonte et al. (2016) that coarsely separates narration from descriptive contents and Friedrich and Palmer (2014) that classify clauses based on their aspectual property. 3 Elements of Discourse Profiling We consider sentences to be units of discourseand define eight schematic categories to study their roles within the context of the underlying topic. The original Van Dijk’s theory was designed for 5376 Main Content Fine-grained type (1) U.S. President Donald Trump tried on Tuesday to calm a storm over his failure to hold Russian President Vladimir Putin accountable for meddling in the 2016 U.S. election, saying he misspoke in a joint news conference in Helsinki. Main Event (2) The rouble fell 1.2 percent on Tuesday following Trump’s statement. Consequence Context-informing Content Fine-grained type (3) Trump praised the Russian leader for his “strong and powerful” denial of the conclusions of U.S. intelligence agencies that the Russian state meddled in the election. Previous Event (4) Special Counsel Robert Mueller is investigating that allegation and any possible collusion by Trump’s campaign. Current Context Additional Supportive Content Fine-grained type (5) Congress passed a sanctions law last year targeting Moscow for election meddling. Historical Event (6) “The threat of wider sanctions has grown,” a businessman told Reuters, declining to be named because of the subject’s sensitivity. Anecdotal Event (7) Republicans and Democrats accused him of siding with an adversary rather than his own country. Evaluation (8) McConnell and House Speaker Paul Ryan, who called Russia’s government “menacing,” said their chambers could consider additional sanctions on Russia. Expectation Table 1: Examples for eight Fine-grained content types. analyzing discourse functions of individual paragraphs w.r.t the main event, and the pilot study done by Yarlott et al. (2018) also considered paragraphs as units of annotations. Observing that some paragraphs contain more than one type of contents, we decided to conduct sentence-level annotations instead to minimize disagreements between annotators. and allow consistent annotations2. Table 1 contains an example for each content type. Consistent with the theory presented by Van Dijk, the categories are theoretical and some of them may not occur in every news article. 3.1 Main Contents Main content describes what the text is about, the most relevant information of the news article. It describes the most prominent event and its consequences that render the highest level topic of the news report. Main Event (M1) introduces the most important event and relates to the major subjects in a news report. It follows strict constraints of being the most recent and relevant event, and directly monitors the processing of remaining document. Categories of all other sentences in the document are interpreted with respect to the main event. Consequence (M2) informs about the events that are triggered by the main news event. They are either temporally overlapped with the main event or happens immediately after the main event. 2Our two annotators agreed that the majority of sentences describe one type of content. For a small number of sentences that contain a mixture of contents, we ask our annotators to assign the label that reflects the main discourse role of a sentence in the bigger context. 3.2 Context-informing Contents Context-informing sentences provide information related to the actual situation in which main event occurred. It includes the previous events and other contextual facts that directly explain the circumstances that led to the main event. Previous Event (C1) describes the real events that preceded the main event and now act as possible causes or preconditions for the main event. They are restricted to events that have occurred very recently, within last few weeks. Current Context (C2) covers all the information that provides context for the main event. They are mainly used to activate the situation model of current events and states that help to understand the main event in the current social or political construct. They have temporal co-occurrence with the main event or describe the ongoing situation. 3.3 Additional Supportive Contents Finally, sentences containing the least relevant information, comprising of unverifiable or hypothetical facts, opinionated statements, future projections and historical backgrounds, are classified as distantly-related content. Historical Event (D1) temporally precedes the main event in months or years. It constitutes the past events that may have led to the current situation, or indirectly relates to the main event or subjects of the news article. Anecdotal Event (D2) includes events with specific participants that are difficult to verify. It may include fictional situations or personal account of incidents of an unknown person especially aimed to exaggerate the situation. Evaluation (D3) introduces reactions from immediate participants, ex5377 perts or known personalities that are opinionated and may also include explicit opinions of the author or those of the news source. They are often meant to describe the social or political implications of the main event or evaluation of the current situation. Typically, it uses statements from influential people to selectively emphasize on their viewpoints. Expectation (D4) speculates on the possible consequences of the main or contextual events. They are essentially opinions, but with far stronger implications where the author tries to evaluate the current situation by projecting possible future events. 3.4 Speech vs. Not Speech In parallel with discourse profiling annotations, we also identify sentences that contain direct quotes or paraphrased comments stated directly by a human and label them as Speech. We assign a binary label, Speech vs. Not Speech, to each sentence independently from the annotations of the above eight schematic discourse roles. Note that Speech sentences may perfectly be annotated with any of the eight news discourse roles based on their contents, although we expect Speech sentences to serve certain discourse roles more often, such as evaluation and expectation. 3.5 Modifications to the Van Dijk Theory The Van Dijk’s theory was originally based on case studies of specific news reports. To accommodate wider settings covering different news domains and sources, we made several minor modifications to the original theory. First, we label both comments made by external sources (labeled as “verbal reactions” in the original theory) and comments made by journalistic entities as speech, and label speech with content types as well. Second, we added a new category, anecdotal event (D2), to distinguish unverifiable anecdotal facts from other contents. Anecdotal facts are quite prevalent in the print media. Third, we do not distinguish news lead sentences that summarize the main story from other Main Event (M1) sentences, considering that lead sentences pertain to the main event and major subjects of a news. 4 Dataset Creation and Statistics The NewsDiscourse corpus consists of 802 openly accessible news articles containing 18,155 sentences3 annotated with one of the eight content 3Note that only sentences within the body of the news article are considered for annotation and headlines are considered types or N/A (sentences that do not contribute to the discourse structure such as photo captions, text links for images, etc.) as well as Speech labels.The documents span across the domains of business, crime, disaster and politics from three major news sources that report global news and are widely used: NYT (USA), Reuters (Europe) and Xinhua (China). We include 300 articles each (75 per domain) from Reuters and Xinhua that are collected by crawling the web and cover news events between 2018-‘19. NYT documents are taken from existing corpora, including 102 documents from KBP 20154 (Ellis et al., 2015) and 100 documents (25 per domain) from the annotated NYT corpus (Evan, 2008). We trained two annotators for multiple iterations before we started the official annotations. In the beginning, each annotator completed 100 common documents (Eight from each of the domains and sources and four from the KBP) within the corpus to measure annotator’s agreement. The two annotators achieved Cohen’s κ score (Cohen, 1968) of 0.69144,0.72389 and 0.87525 for the eight finegrained, three coarse-grained and Speech label annotations respectively. Then, the remaining documents from each domain and news source were split evenly between the two annotators. Detailed distributions of the created corpus, including distributions of different content types across domains and media sources are reported in Tables 2 and 3 respectively. We find that distributions of content types vary depending on either domains or media sources. For instance, disaster documents report more consequences (M2) and anecdotal events (D2), crime documents contain more previous events (C1) and historical events (D1), while politics documents have the most opinionated contents (sentences in categories D3 and D4) immediately followed by business documents. Furthermore, among different sources, NYT articles are the most opinionated and describe historical events most often, followed by Reuters. In contrast, Xinhua articles has relatively more sentences describing the main event. Speech labels and content type labels are separately annotated and each sentence has both a content type label and a speech label (binary, speech as independent content. We used NLTK (Bird et al., 2009) to identify sentence boundaries in the body text. Occasionally, one sentence is wrongly split into multiple sentences, the annotators were instructed to assign them with the same label. 4KBP documents are not filtered for different domains due to the small size of corpus. 5378 M1 M2 C1 C2 D1 D2 D3 D4 N/A Business 336(8.5) 40(1.0) 225(5.8) 1,041(26.6) 238(6.1) 70(1.8) 1,049(26.8) 545(13.9) 368(9.4) Crime 374(10.4) 78(2.2) 271(7.5) 941(26.1) 510(14.2) 77(2.1) 816(22.7) 204(5.7) 328(9.1) Disaster 407(10.6) 206(5.3) 223(5.8) 1,032(26.8) 139(3.6) 330(8.6) 741(19.2) 405(10.5) 368(9.5) Politics 475 (10.4) 21(0.4) 218(4.8) 954(20.9) 228(5.0) 85(1.9) 1,492(32.7) 679(14.9) 414(9.1) Table 2: Distribution of Content type labels across domains, with percentages shown within parentheses. M1 M2 C1 C2 D1 D2 D3 D4 N/A NYT 492(8.4) 97(1.7) 342(5.8) 1401(24.0) 714(12.2) 197(3.4) 1876(32.1) 532(9.1) 197(3.3) Xinhua 667(13.6) 95(1.9) 361(7.4) 1249(25.5) 214(4.4) 96(2.0) 953(19.5) 525(10.7) 736(15.0) Reuters 624(8.4) 195(2.6) 391(5.1) 1837(24.8) 571(7.7) 316(4.3) 1867(25.2) 924(12.5) 686(9.3) NYT KBP 191(8.6) 42(1.9) 157(7.0) 519(23.3) 384(17.3) 47(2.1) 598(26.9) 148(6.7) 141(6.3) Table 3: Distribution of Content type labels across media sources, with percentages shown within parentheses. vs. not speech). In the created corpus, 5535 out of 18,155 sentences are labeled as speech. 5 Document-level Neural Network Model for Discourse Profiling A wide range of computational models has been applied for extracting different forms of discourse structures. However, across several tasks, neural network methods (Ji and Eisenstein, 2015; Becker et al., 2017) are found the most effective, with relatively superior performance obtained by modeling discourse-level context (Dai and Huang, 2018a,b). As an initial attempt, we use a hierarchical neural network to derive sentence representations and a document encoding, and model associations between each sentence and the main topic of the document when determining content types for sentences. Shown in Figure 1, it first uses a wordlevel bi-LSTM layer (Hochreiter and Schmidhuber, 1997) with soft-attention over word representations to generate intermediate sentence representations which are further enriched with the context information using another sentence-level bi-LSTM. Enriched sentence representations are then averaged with their soft-attention weights to generate document encoding. The final prediction layers model associations between the document encoding and each sentence encoding to predict sentence types. Context-aware sentence encoding: Let a document be a sequence of sentences {s1, s2..sn}, which in turn are sequences of words {(w11, w12..) .. (wn1, wn2, ..)}. We first transform a sequence of words in each sentence to contextualized word representations using ELMo (Peters et al., 2018) followed by a word-level biLSTM layer to obtain their hidden state representations Hs. Then, we take weighted sums of hidden representations using soft-attention scores to obtain intermediate senFigure 1: Neural-Network Architecture Incorporating Document Encoding for Content Type Classification tence encodings (Si) that are uninformed of the contextual information. Therefore, we apply another sentence-level biLSTM over the sequence of sentence encodings to model interactions among sentences and smoothen context flow from the headline until the last sentence in a document. The hidden states (Ht) of the sentence-level bi-LSTM are used as sentence encodings. Document Encoding: We generate a reference document encoding, as a weighted sum over sentence encodings using their soft-attention weights. Modeling associations with the main topic: Sentence types are interpreted with respect to the main event. However, while the sentence-level biLSTM augments sentence representations with the local context, they may be still unaware of the main topic. Therefore, we compute element-wise products and differences between the document encoding and a sentence encoding to measure their correlations, and further concatenate the products and differ5379 Models M1 M2 C1 C2 D1 D2 D3 D4 Macro Micro F1 P R F1 F1 Feature-based (SVM) 34.0 8.0 18.0 44.0 45.0 14.0 52.0 44.0 39.1 37.9 38.3 45.7 Basic Classifier 42.5 24.7 18.2 55.4 59.6 28.5 66.1 52.5 52.6 47.9 48.8(±0.8) 57.5(±0.6) Document LSTM 49.3 27.3 20.2 57.0 63.6 45.8 67.4 55.6 56.6 52.6 53.2(±0.7) 60.2(±1.0) +Headline 49.8 30.0 21.8 56.7 63.2 42.7 66.8 58.7 57.3 52.9 53.8(±0.7) 60.4(±1.0) +Document encoding 49.6 27.9 22.5 58.1 64.1 48.1 67.4 57.6 56.9 53.7 54.4(±0.8) 60.9(±0.7) CRF Fine-grained 47.7 26.4 22.2 56.0 63.3 45.2 66.4 55.2 55.4 52.9 52.9(±1.4) 59.4(±1.1) CRF Coarse-grained 48.4 29.3 21.6 55.9 62.9 47.2 66.7 54.2 55.6 53.4 53.5(±0.9) 59.6(±0.7) Table 4: Performance of different systems on fine-grained discourse content type classification task. All results correspond to average of 10 training runs with random seeds. In addition, we report standard deviation for both macro and micro F1 scores. ences with the sentence encoding to obtain the final sentence representation that is used for predicting its sentence type. Predicting Sentence Types: First, we use a two layer feed forward neural network as a regular classifier to make local decisions for each sentence based on the final sentence representations. In addition, news articles are known to follow inverted pyramid (Bell, 1998) or other commonly used styles where the output labels are not independent. Therefore, we also use a linear chain CRF (Lafferty et al., 2001) layer on the output scores of the local classifier to model dependence among discourse labels. 6 Evaluation We split 802 documents into training/dev/test sets of 502/100/200 documents. The training set includes 50 documents from each domain in Reuters and Xinhua, 9 documents from each domain in NYT and 66 documents from KBP; the dev set includes 8 documents from each domain and source and 4 documents from KBP; and the test set includes 17 documents from each domain in Reuters and Xinhua, 8 documents from each domain in NYT and 32 documents from KBP. The dataset is released with the standard split we used in our experiments. For evaluation, we calculate F1 score for each content type as well as micro and macro F1 scores. 6.1 Baseline Models Feature-based (SVM) uses linear SVM classifier (Pedregosa et al., 2011) over features used by Yarlott et al. (2018), including bag of words, tf-idf and 100-dimensional paragraph vectors obtained through Doc2Vec (Le and Mikolov, 2014) implementation in Gensim ( ˇReh˚uˇrek and Sojka, 2010). Following Yarlott et al. (2018), we set minimum α to 0.01, minimum word count to 5 for Doc2Vec model and train it for 50 epochs. All three features are built on the entire training corpus and the value of C in SVM classifier is set to 10. Basic Classifier uses only the word-level bi-LSTM with soft-attention to learn sentence representations followed by the local feed forward neural network classifier to make content type predictions. 6.2 Proposed Document-level Models Document LSTM adds the sentence-level BiLSTM over sentence representations obtained from the word-level BiLSTM to enrich sentence representations with local contextual information. +Document Encoding uses document encoding for modeling associations with the main topic and obtains the final sentence representations as described previously. +Headline replaces document encoding with headline sentence encoding generated from the wordlevel biLSTM. Headline is known to be a strong predictor for the main event (Choubey et al., 2018). CRF Fine-grained and CRF Coarse-grained adds a CRF layer to make content type predictions for sentences which models dependencies among fine-grained (eight content types) and coarse-grained (main vs. context-informing vs. supportive contents) content types respectively. 6.3 Implementation Details We set hidden states dimension to 512 for both word-level and sentence-level biLSTMs in all our models. Similarly, we use two-layered feed forward networks with 1024-512-1 units to calculate attention weights for both the BiLSTMs. The final classifier uses two-layer feed forward networks with 3072-1024-9 units for predicting sentence types. All models are trained using Adam (Kingma and Ba, 2014) optimizer with the learning rate of 5e-5. For regularization, we use dropout (Srivastava et al., 2014) of 0.5 on the output activations 5380 Systems P R F1 Feature-based (SVM) 61.0 71.0 69.0 Basic Classifier 81.6 80.7 81.2(±0.4) Document LSTM 80.7 83.6 82.2(±0.7) Table 5: Performance of different systems on Speech label classification task. of both BiLSTMs and all neural layers. Word embeddings are kept fixed during the training. All the neural model are trained for 15 epochs and we use the epoch yielding the best validation performance. To alleviate the influence of randomness in neural model training and obtain stable experimental results, we run each neural model ten times with random seeds and report the average performance. 6.4 Results and Analysis Tables 4 and 5 show the results from our experiments for content-type and speech label classification tasks. We see that a simple word-level biLSTM based basic classifier outperforms features-based SVM classifier (Yarlott et al., 2018) by 10.5% and 11.8% on macro and micro F1 scores respectively for content-type classification. Adding a sentencelevel BiLSTM helps in modeling contextual continuum and improves performance by additional 4.4% on macro and 2.7% on micro F1 scores. Also, as content types are interpreted with respect to the main event, modeling associations between a sentence representation and the referred main topic representation using headline or document embeddings improves averaged macro F1 score by 0.6% and 1.2% respectively. Empirically, the model using document embedding performs better than the one with headline embedding by 0.6% implying skewed headlining based on recency which is quite prevalent in news reporting. We further aim to improve the performance by using CRF models to capture interdependencies among different content types, however, CRF models using both fine-grained and coarse-grained label transitions could not exceed a simple classifier model. The inferior performance of CRF models can be explained by variations in news content organization structures (such as inverted pyramid, narrative, etc.), further implying the need to model those variations separately in future work. Similarly, for speech label classification task, word-level biLSTM model achieves 12.2% higher F1 score compared to the feature-based SVM classifier which is further improved by 1.0% with M1 M2 C1 C2 D1 D2 D3 D4 N/A M1 88.0 2.6 9.0 38.2 14.6 0.4 123.2 28. 2.0 M2 6.4 32.4 0.0 28.4 2.0 0.0 3.4 5.4 0.0 C1 13.6 0.6 15.2 27.8 15.2 0.2 25.4 12.0 6.0 C2 39.6 19.2 22.8 483.6 53.2 5.6 134.6 37.6 14.8 D1 3.0 0.0 8.8 54.8 125.4 5.8 41.2 4.2 7.8 D2 1.6 1.6 1.8 9.4 4.0 37.8 41.2 2.8 1.8 D3 6.8 0.0 6.0 82.6 20.4 12.0 586.6 58.2 5.4 D4 4.2 1.2 0.8 29.0 0.4 1.0 63.2 111.4 1.8 NA 1.2 0.0 0.0 1.6 0.6 0.0 3.4 0.0 158.2 Table 6: Confusion matrix for content-type classification based on prediction results of the model Document LSTM+Document Encoding on the dev set, averaged over 10 runs consistent with the results reported in Table 4. document-level biLSTM. We generated confusion matrix (Table 6) for content-type classification based on prediction results of the best performing model Document LSTM + Document Encoding on the dev set. Prediction errors mainly occur between Main Event (M1) and Current Context / Evaluation (C2/D3), between Previous Event (C1) and Current Context (C2), between Evaluation (D3) and Expectation (D4), and between Current Context (C2) and Historical Event / Evaluation (D1/D3). 7 Utilizing Content Structure to Improve Event Coreference Resolution M1 M2 C1 C2 D1 D2 D3 D4 51% 91% 79% 84% 86% 95% 84% 83% Table 7: Percentages of Singleton events in sentences of each content type. We envision that news discourse profiling can be useful to many discourse level NLP tasks and applications. As an example, we investigate uses of news structures for event coreference resolution by analyzing 102 documents from the KBP 2015 corpus included in our NewsDiscourse Corpus. We analyze the lifespan and spread of event coreference chains over different content types. First, table 7 shows the percentage of events that are singletons out of all the events that appear in sentences of each content type. We can see that in contrast to main event sentences (M1), other types of sentences are more likely to contain singleton events. We further analyze characteristics of nonsingleton events, to identify positions of their coreferential mentions and the spread of coreference chains in a document. Motivated by van Dijk’s theory, we hypothesize that the main events appear in each type of sentences, but the likelihoods of 5381 M1 M2 C1 C2 D1 D2 D3 D4 58% 15% 23% 15% 10% 9% 14% 14% Table 8: Percentages of Sentences of each content type that contain a headline main event. M1 M2 C1 C2 D1 D2 D3 D4 13% 0% 33% 49% 69% 100% 49% 13% Table 9: Percentages of Intra-type events out of nonsingleton events in sentences of each content type seeing the main events in a sentence may vary depending on the sentence type. We consider events that appear in the news headline to approximate the main events of a news article. As shown in Table 8, around 58%5 of main event sentences (M1) contain at least one headline event, in addition, context-informing sentences (C1+C2), especially sentences focusing on discussing recent pre-cursor events (C1), are more likely to mention headline events as well. Other than the main events, we observe that many events have all of their coreferential mentions appear within sentences of the same content type. We call such events intra-type events. In other words, an intra-type event chain starts from a sentence of any type will die out within sentences of the same content type. Table 9 shows the percentage of intra-type event chains out of all the event chains that begin in a certain type of sentence. We can see that non-main contents (e.g., content types C2-D3) are more likely to be self-contained from introducing to finishing describing an event. In particular, historical (D1) and anecdotal (D2) contents exhibit an even stronger tendency of having intratype event repetitions compared to other non-main content types. Incorporating Content Structure for Event Coreference Resolution: We incorporate news functional structures for event coreference resolution by following the above analysis and implementing content structure informed constraints in 5While all the main event sentences are expected to mention some main event, we use headline events to approximate main events and headline events do not cover all the main events of a news article. As shown in our previous work (Choubey et al., 2018), identifying main events is a challenging task in its own right and main events do not always occur in the headline of a news article. In addition, event annotations in the KBP corpora only consider a limited set of event types, seven types specifically, therefore, if main events do not belong to those seven types, they are not annotated as events, which also contributes to the imperfect percentage of main event sentences containing a headline event. an Integer Linear Programming (ILP) inference system to better identify singleton mentions, main event mentions and intra-type event mentions. We use the Document LSTM+Document encoding classifier to predict sentence content types. In addition, we built a discourse-aware event singleton classifier, that resembles the sentence type classifier, to identify singleton event mentions in a document. Specifically, the singleton classifier combines document and sentence representations provided by the content type classifier with contextualized event word representations obtained from a separate word-level biLSTM layer with 512 hidden units. Then, the singleton classifier applies a two-layer feed forward neural network to identify event singletons, and the feed forward network has 3072-512-2 units. We implement ILP constraints based on system predicted content types of sentences and singleton scores of event mentions. Detailed descriptions of ILP constraints we implemented and their equations are included in the appendix. The ILP formulation has been used in our previous work that yields the previous best system for event coreference resolution (Choubey and Huang, 2018), which aims to capture several specific document level distributional patterns of coreferential event mentions by simply using heuristics. For direct comparisons, we adopt the same experimental settings as in Choubey and Huang (2018), using KBP 2015 documents as the training data and using both KBP 2016 and KBP 2017 corpora for evaluation6. We retrained the sentence type classifier using 102 KBP 2015 documents annotated with content types, using 15 documents as the development set and the rest as the training data. We trained the event singleton classifier using the same train/dev split. In addition, we used the same event mentions and pairwise event coreference scores produced by a local pairwise classifier the same as in Choubey and Huang (2018)7. Experimental Results: We compare the content6All the KBP corpora include documents from both discussion forum and news articles. But as the goal of this study is to leverage discourse structures specific to news articles for improving event coreference resolution performance, we only evaluate the ILP system using news articles in the KBP corpora. This evaluation setting is consistent with our previous work Choubey and Huang (2018). For direct comparisons, the results reported for all the systems and baselines are based on news articles in the test datasets as well 7The classifier can be obtained from https://git. io/JeDw3 5382 KBP 2016 KBP 2017 Model B3 CEAFe MUC BLANC AV G B3 CEAFe MUC BLANC AV G Local classifier 51.47 47.96 26.29 30.82 39.13 50.24 48.47 30.81 29.94 39.87 +Content Structure 52.78 49.7 34.62 34.49 42.9 51.68 50.57 37.8 33.39 43.36 -Singletons 51.47 47.96 31.42 32.89 40.94 51.17 49.67 38.01 32.94 42.96 -Main Events 52.65 49.35 32.56 33.69 42.06 51.4 50.05 35.13 31.92 42.12 -Intra-type Events 52.62 49.63 32.97 34.07 42.32 51.62 50.45 37.54 33.42 43.26 Lu and Ng (2017) 50.16 48.59 32.41 32.72 40.97 Choubey and Huang (2018) 51.67 49.1 34.08 34.08 42.23 50.35 48.61 37.24 31.94 42.04 Table 10: Results for event coreference resolution systems on the benchmark datasets (KBP 2016 and 2017). structure aware ILP system with a baseline system (the row Local classifier) that performs greedy merging of event mentions using local classifier predicted pairwise coreference scores as well as two most recent models for event coreference resolution, the heuristics-based ILP system (Choubey and Huang, 2018) and another recent system (Lu and Ng, 2017). We use the same evaluation method as in (Choubey and Huang, 2018) and evaluate event coreference resolution results directly without requiring event mention type match8. Table 10 shows experimental results. Event coreference resolution is a challenging task as shown by the small margins of performance gains achieved by recent systems. The ILP model constrained by system predicted content structures (the row +Content Structure) outperforms the pairwise classifier baseline system as well as the two most recent systems consistently across all the evaluation metrics over the two benchmark datasets. In particular, our ILP system outperforms the previous state-of-the-art, the heuristics-based ILP system Choubey and Huang, with average F1 gains of 0.67% and 1.32% on KBP 2016 and KBP 2017 corpora respectively. The superior performance shows that systematically identified content structures are more effective than heuristics in guiding event linking, and establishes the usefulness of the new discourse profiling task. To further evaluate the importance of ILP constraints on Singletons, Main events and Intra-type events, we perform ablation experiments by removing each constraint from the full ILP model. Based on the results in Table 10, all the three types of constraints have noticeable impacts to coreference performance, and singletons and main events constraints contribute the most. 8The official KBP 2017 event coreference resolution scorer considers two event mentions coreferent if they strictly match on their event type and subtype, which requires building a high-performing event type identification system to enable an event coreference resolver to score well. Intuitively, news content structures can help in identifying other event relations as well, such as temporal and causal relations, and thus disentangling complete event structures. For instance, events occurring in C1 (Previous Event) sentences are probable cause for the main event which in turn causes events in M2 (Consequence) sentences (the same rationale can be applied for temporal order). 8 Conclusion We have created the first broad-coverage corpus of news articles annotated with a theoretically grounded functional discourse structure. Our initial experiments using neural models ascertain the feasibility of this task. We conducted experiments and demonstrated the usefulness of news discourse profiling for event coreference resolution. In the future, we will further improve the performance of news discourse profiling by investigating subgenres of news articles, and extensively explore its usage for various other NLP tasks and applications. Acknowledgments We thank our anonymous reviewers for providing insightful review comments. We gratefully acknowledge support from National Science Foundation via the awards IIS-1942918 and IIS-1755943. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government. References Daniela Baiamonte, Tommaso Caselli, and Irina Prodanof. 2016. Annotating content zones in news articles. CLiC it, page 40. 5383 Regina Barzilay and Lillian Lee. 2004. Catching the Drift: Probabilistic Content Models, with Applications to Generation and Summarization. Maria Becker, Michael Staniek, Vivi Nastase, Alexis Palmer, and Anette Frank. 2017. Classifying semantic clause types: Modeling context and genre characteristics with recurrent neural networks and attention. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (* SEM 2017), pages 230–240. Allan Bell. 1998. The discourse structure of news stories. In Approaches to media discourse, pages 64– 104. Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better document-level sentiment analysis from RST discourse parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2212–2218, Lisbon, Portugal. Association for Computational Linguistics. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. ” O’Reilly Media, Inc.”. Freddy YY Choi. 2000. Advances in domain independent linear text segmentation. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 26–33. Association for Computational Linguistics. Prafulla Kumar Choubey and Ruihong Huang. 2018. Improving event coreference resolution by modeling correlations between event coreference chains and document topic structures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 485–495. Prafulla Kumar Choubey, Kaushik Raju, and Ruihong Huang. 2018. Identifying the most dominant event in a news article by mining event coreference relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 340– 345. Jacob Cohen. 1968. Multiple regression as a general data-analytic system. Psychological Bulletin, 70:426–443. Zeyu Dai and Ruihong Huang. 2018a. Building context-aware clause representations for situation entity type classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3305–3315. Zeyu Dai and Ruihong Huang. 2018b. Improving implicit discourse relation classification by modeling inter-dependencies of discourse units in a paragraph. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 141–151. George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation. In Lrec, volume 2, page 1. Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 334–343. Association for Computational Linguistics. Joe Ellis, Jeremy Getman, Dana Fore, Neil Kuster, Zhiyi Song, Ann Bies, and Stephanie Strassel. 2015. Overview of linguistic resources for the tac kbp 2015 evaluations: Methodologies and reults. In Proceedings of the TAC KBP 2015 Workshop, pages 16–17. Sandhaus Evan. 2008. The new york times annotated corpus. LDC2008T19. DVD. Philadelphia: Linguistic Data Consortium. Vanessa Wei Feng and Graeme Hirst. 2012. Text-level discourse parsing with rich linguistic features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 60–68. Annemarie Friedrich and Alexis Palmer. 2014. Situation entity annotation. In Proceedings of LAW VIIIThe 8th Linguistic Annotation Workshop, pages 149– 158. Marti A Hearst. 1994. Multi-paragraph segmentation of expository text. In Proceedings of the 32nd annual meeting on Association for Computational Linguistics, pages 9–16. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 13–24. Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. Transactions of the Association for Computational Linguistics, 3:329–344. Yangfeng Ji and Noah A. Smith. 2017. Neural discourse structure for text categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 996–1005, Vancouver, Canada. Association for Computational Linguistics. 5384 Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Joost G Kircz. 1991. Rhetorical structure of scientific articles: the case for argumentational analysis in information retrieval. Journal of documentation, 47(4):354–372. Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 469–473, New Orleans, Louisiana. Association for Computational Linguistics. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International conference on machine learning, pages 1188– 1196. Jiwei Li, Rumeng Li, and Eduard Hovy. 2014a. Recursive deep models for discourse parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2061–2069. Junyi Jessy Li, Marine Carpuat, and Ani Nenkova. 2014b. Assessing the discourse factors that influence the quality of machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 283–288. Maria Liakata, Shyamasree Saha, Simon Dobnik, Colin Batchelor, and Dietrich Rebholz-Schuhmann. 2012. Automatic recognition of conceptualization zones in scientific articles and two life science applications. Bioinformatics, 28(7):991–1000. Elizabeth DuRoss Liddy. 1991. The discourse-level structure of empirical abstracts: An exploratory study. Information Processing & Management, 27(1):55–81. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 997–1006. Association for Computational Linguistics. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A pdtb-styled end-to-end discourse parser. Natural Language Engineering, 20(2):151–184. Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, and Lidong Bing. 2019. Hierarchical pointer net parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1007– 1017, Hong Kong, China. Association for Computational Linguistics. Annie Louis, Aravind Joshi, and Ani Nenkova. 2010. Discourse indicators for content selection in summarization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 147–156. Association for Computational Linguistics. Jing Lu and Vincent Ng. 2017. Joint learning for event coreference resolution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 90–101. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Yoko Mizuta, Anna Korhonen, Tony Mullen, and Nigel Collier. 2006. Zone analysis in biology articles as a basis for information extraction. International journal of medical informatics, 75(6):468–487. Zhongdang Pan and Gerald M Kosicki. 1993. Framing analysis: An approach to news discourse. Political communication, 10(1):55–75. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Emily Pitler and Ani Nenkova. 2009. Using syntax to disambiguate explicit discourse connectives in text. In ACL-IJCNLP, pages 13–16. R. Prasad, N. Dinesh, Lee A., E. Miltsakaki, L. Robaldo, Joshi A., and B. Webber. 2008. The Penn Discourse Treebank 2.0. In lrec2008. Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016. A stacking gated neural architecture for implicit discourse relation classification. In EMNLP, pages 2263–2270. 5385 Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45– 50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en. Attapol T Rutherford and Nianwen Xue. 2016. Robust non-explicit neural discourse parser in english and chinese. ACL, page 55. Christina Sauper, Aria Haghighi, and Regina Barzilay. 2010. Incorporating Content Structure into Text Analysis Applications. Weiyan Shi, Tiancheng Zhao, and Zhou Yu. 2019. Unsupervised dialog structure learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1797–1807, Minneapolis, Minnesota. Association for Computational Linguistics. Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 228–235. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Simone Teufel, Jean Carletta, and Marc Moens. 1999. An annotation scheme for discourse-level argumentation in research articles. In Proceedings of the ninth conference on European chapter of the Association for Computational Linguistics, pages 110–117. Association for Computational Linguistics. Van Dijk Teun A. 1986. News schemata. Studying writing: linguistic approaches, 1:155–186. Teun A Van Dijk. 1988a. News analysis. Case Studies of International and National News in the Press. New Jersey: Lawrence. Teun A Van Dijk. 1988b. News as discourse. Hillsdale, NJ, US: Lawrence Erlbaum Associates, Inc. Anita Waard, Paul Buitelaar, and Thomas Eigner. 2009. Identifying the epistemic value of discourse segments in biology texts. Proceedings of the Eighth International Conference on Computational Semantics:, pages 351–354. W John Wilbur, Andrey Rzhetsky, and Hagit Shatkay. 2006. New directions in biomedical text annotation: definitions, guidelines and corpus construction. BMC bioinformatics, 7(1):356. Yang Xu, Yu Hong, Huibin Ruan, Jianmin Yao, Min Zhang, and Guodong Zhou. 2018. Using active learning to expand training data for implicit discourse relation recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 725–731, Brussels, Belgium. Association for Computational Linguistics. W Victor Yarlott, Cristina Cornelio, Tian Gao, and Mark Finlayson. 2018. Identifying the discourse function of news article paragraphs. In Proceedings of the Workshop Events and Stories in the News 2018, pages 25–33. A ILP for Event Coreference Resolution Let λ refers the set of all event mentions in a document and pij equals the score from the local pairwise classifier denoting event mentions ‘i’ and ‘j’ are coreferential. We formulate the baseline objective function that minimizes equation 1. ΘB = X i∈λ,j∈λ −log(pij)xij −log(1 −pij)(¬xij) s.t. xij ∈{0, 1} (1) We then add constituent objective functions (equation 2) and new constraints to the baseline objective to incorporate document-level content structure, including repetitions of headline events in main content (ΘM) as well as in consequence, previous event and current context (ΘC), intra-type coreference chains in non-main contents (ΘL) and exclusion of singletons from event coreferential chains (ΘS) while reinforcing non-singletons to have more coreferential mentions (ΘN). Θ = ΘB +KMΘM +KCΘC +KLΘL +KSΘS +KNΘN (2) The weighting parameters for all the constituent objective functions were obtained through grid search. We first preset all the values to 0.5 and then searched each parameter in the multiples of 0.5 over the range from 0.5 to 5. We found that the best performance was obtained for KM=3.0, KC=1.0, KS=2.5 and KN=0.5. Also, the best values for KL are 0.5 for content types M2-C1 and 1.0 for content types C2-D8. A.1 Infusing Singletons Score in the ILP Forumlation Intuitively, coreferential event mentions and singletons are exclusive to each other. However, enforcing such mutual exclusion would be extremely unstable when both system predicted singletons 5386 and event coreference scores are imperfect. Therefore, we simply discourage singletons from being included in any coreference chains and encourage non-singletons to form more coreferential links in our model by adding two constituent objective functions ΘS and ΘN (equation 3). ΘS = X i∈λ,j∈λ,i∨j∈S xij ; ΘN = − X i∈λ,j∈λ,i∧j∈N xij (3) Where S and N are predicted singletons and nonsingletons from content-structure aware singleton classifier. The relaxed ΘS and ΘN based implementation allows violations for predicted singletons when its pairwise coreference score with an event mention is high. A.2 Incorporating Content Types in the ILP Forumlation As evident from the analysis, main, consequence, previous event and current context content types favor coreferential event mentions with headline event. Furthermore, if an event chain starts in one of the C1-D4 content types, it tend to have coreferential event mentions within the same content type or sometimes in the main content. We model above correlations between main and non-main content types and event coreference chains through their respective objective functions and constraints. Main Events: for the event pairs with the first event mention from headline and the second one from main content sentences, we define a simple objective function (equation 4) that add the negative sum of their indicator variables to the main objective function. ΘM = − X i∈ξH,j∈ξM xij (4) Here, ξH and ξM indicate event mentions in headline and main content sentences respectively. By minimizing ΘM in global objective function, our model encourages coreferential mentions between the headline and main content sentences. Similarly, we define ΘC that encourages coreferential mentions between the headline and sentences from consequence, previous event and current context content types (equation 5). ΘC = − X i∈ξH,j∈ξR xij (5) Here, ξR indicate event mentions in one of the consequence, previous event or current context content types. Intra-type Events: for each non-main content type T, we define the objective function ΘL and corresponding constraint (equation 6) to penalize event chains that start in that non-main content type sentence but include event mentions from other non-main type sentences. ΘL = X i∈ξT Yi s.t. Γi −Yi ≤Mγi Γi = X i∈ξT ,j /∈(ξM ∪ξT ) xij ; γi = X k /∈ξT ,i∈ξT xki (6) First, we define an ILP variable Yi for each event i in ξT , where ξT represents events in a non-main content type T ∈C1-D4, and add that to the objective function ΘL. Then, through the constraint in equation 6, we set the value of Yi to Γi when λi is 0. Γi equals the number of subsequent coreferential event mentions of event i in sentences of other nonmain types. γi equals the number of antecedent coreferential even mentions of event i in sentences of main or other non-main types. By minimizing Yi in ΘL, we discourage an event chain starting in a C1-D4 content type-sentence from forming coreferential links with subsequent event mentions in other non-main types.
2020
478
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5387–5403 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5387 Harnessing the linguistic signal to predict scalar inferences Sebastian Schuster∗ Stanford University [email protected] Yuxing Chen∗ Stanford University [email protected] Judith Degen Stanford University [email protected] Abstract Pragmatic inferences often subtly depend on the presence or absence of linguistic features. For example, the presence of a partitive construction (of the) increases the strength of a so-called scalar inference: listeners perceive the inference that Chris did not eat all of the cookies to be stronger after hearing “Chris ate some of the cookies” than after hearing the same utterance without a partitive, “Chris ate some cookies”. In this work, we explore to what extent neural network sentence encoders can learn to predict the strength of scalar inferences. We first show that an LSTM-based sentence encoder trained on an English dataset of human inference strength ratings is able to predict ratings with high accuracy (r = 0.78). We then probe the model’s behavior using manually constructed minimal sentence pairs and corpus data. We find that the model inferred previously established associations between linguistic features and inference strength, suggesting that the model learns to use linguistic features to predict pragmatic inferences. 1 Introduction An important property of human communication is that listeners can infer information beyond the literal meaning of an utterance. One well-studied type of inference is scalar inference (Grice, 1975; Horn, 1984), whereby a listener who hears an utterance with a scalar item like some infers the negation of a stronger alternative with all: (1) a. Chris ate some of the cookies. b. ⇝Chris ate some, but not all, of the cookies. Early accounts of scalar inferences (e.g., Gazdar 1979; Horn 1984; Levinson 2000) considered them to arise by default unless explicitly contradicted in context. However, in a recent corpus study, Degen (2015) showed that there is much more variability ∗Equal contribution. in scalar inferences from some to not all than previously assumed. Degen (2015) further showed that this variability is not random and that several lexical, syntactic, and semantic/pragmatic features of context explain much of the variance in inference strength.1 Recent Bayesian game-theoretic models of pragmatic reasoning (Goodman and Frank, 2016; Franke and J¨ager, 2016) are able to integrate speaker expectations with world knowledge to predict listeners’ pragmatic inferences in many cases (e.g., Goodman and Stuhlm¨uller 2013; Degen et al. 2015). However, to compute speaker expectations, these models require manual specification of features as well as specification of a finite set of possible utterances. Further, inference becomes intractable when scaling up beyond toy domains to make predictions for arbitrary utterances.2 Neural network (NN) models, on the other hand, do not suffer from these limitations: they are capable of making predictions for arbitrary utterances and do not require manual specification of features. Unlike Bayesian game-theoretic models, however, NN models have no explicit pragmatic reasoning mechanisms. In this work, we investigate to what extent NN models can learn to predict subtle differences in scalar inferences and to what extent these models infer associations between linguistic features and 1See Section 2 for the operationalization of inference strength that we use throughout this paper and for a description of these features. 2Recent models of generating pragmatic image descriptions (Andreas and Klein, 2016; Cohn-Gordon et al., 2018) and color descriptions (Monroe et al., 2017) have overcome this issue by approximating the distributions of utterances given a set of potential referents. However, these models require a finite set of world states (e.g., several referents to choose from) and a corresponding generative model of utterances (e.g., an image captioning model) and are therefore also limited to scenarios with pre-specified world states and a corresponding generative model. 5388 inference strength. In this enterprise we follow the recent successes of NN models in predicting a range of linguistic phenomena such as long distance syntactic dependencies (e.g., Elman 1990; Linzen et al. 2016; Gulordava et al. 2018; Futrell et al. 2019; Wilcox et al. 2019), semantic entailments (e.g., Bowman et al. 2015; Conneau et al. 2018), acceptability judgements (Warstadt et al., 2019b), factuality (Rudinger et al., 2018), negative polarity item licensing environments (Warstadt et al., 2019a), and, to some extent, speaker commitment (Jiang and de Marneffe, 2019a). In particular, we ask: 1. How well can a neural network sentence encoder learn to predict human inference strength judgments for utterances with some? 2. To what extent does such a model capture the qualitative effects of hand-mined contextual features previously identified as influencing inference strength? To address the first question, we compare the performance of several NN models that differ in the underlying word embedding model (GloVe, ELMo, or BERT). To address the second question, we probe the best model’s behavior through an analysis of predictions on manually constructed minimal sentence pairs, a regression analysis, and an analysis of attention weights. We find that the best model is able to predict inference strength ratings on a heldout test set with high accuracy (r = 0.78). The three analyses consistently suggest that the model learned associations between inference strength and linguistic features established by previous work (Degen, 2015). We release data and code at https://github. com/yuxingch/Implicature-Strength-Some. 2 The dataset We use the annotated dataset collected by Degen (2015), a dataset of the utterances from the Switchboard corpus of English telephone dialogues (Godfrey et al., 1992) with a noun phrase (NP) with some. The dataset consists of 1,362 unique utterances. For each example with a some-NP, Degen (2015) collected inference strength ratings from at least 10 participants recruited on Amazon’s Mechanical Turk. Participants saw both the target utterance and ten utterances from the preceding discourse context. They then rated the similarity between the original utterance like (2a) and an utterance in which some was replaced with some, but not all like (2b), on a 7-point Likert scale with endpoints labeled “very different meaning” (1) and “same meaning” (7). Low similarity ratings thus indicate low inference strength, and high similarity ratings indicate high inference strength. (2) a. I like – I like to read some of the philosophy stuff. b. I like – I like to read some, but not all, of the philosophy stuff. Using this corpus, Degen (2015) found that several linguistic and contextual factors influenced inference strength ratings, including the partitive form of, subjecthood, the previous mention of the NP referent, determiner strength, and modification of the head noun, which we describe in turn. Partitive: (3a-b) are example utterances from the corpus with and without partitive some-NPs, respectively. Values in parentheses indicate the mean inference strength rating for that item. On average, utterances with partitives yielded stronger inference ratings than ones without. (3) a. We [...] buy some of our own equipment. (5.3) b. You sound like you have some small ones in the background. (1.5) Subjecthood: Utterances in which the some-NP appears in subject position, as in (4a), yielded stronger inference ratings than utterances in which the some-NP appears in a different grammatical position, e.g., as a direct object as in (4b). (4) a. Some kids are really having it. (5.9) b. That would take some planning. (1.4) Previous mention: Discourse properties also have an effect on inference strength. A some-NP with a previously mentioned embedded NP referent yields stronger inferences than a some-NP whose embedded NP referent has not been previously mentioned. For example, (5a) contains a some-NP in which them refers to previously mentioned Mission Impossible tape recordings, whereas problems in the some-NP in (5b) has not been previously mentioned. (5) a. I’ve seen some of them on repeats. (5.8) b. What do you feel are some of the main problems? (3.4) Modification: Degen (2015) also found a small effect of whether or not the head noun of the someNP was modified: some-NPs with unmodified head nouns yielded slightly stronger inferences than those with modified head nouns. Determiner strength: Finally, it has been argued that there are two types of some: a weak some and a strong some (Milsark, 1974; Barwise and Cooper, 5389 1981). This weak/strong distinction has been notoriously hard to pin down (Horn, 1997) and Degen (2015) used empirical strength norms elicited independently for each item. To this end, she exploited the fact that removing weak some from an utterance has little effect on its meaning whereas removing strong some changes the meaning. Determiner strength ratings were thus elicited by asking participants to rate the similarity between the original utterance and an utterance without some (of) on a 7-point Likert scale from ‘different meaning’ to ‘same meaning’. Items with stronger some – e.g., (6a), determiner strength 3.3 – yielded stronger inference ratings than items with weaker some – e.g., (6b), determiner strength 6.7. (6) a. And some people don’t vote. (5.2) b. Well, we could use some rain up here. (2.1) The quantitative findings from Degen (2015) are summarized in Figure 4, which shows in blue the regression coefficients for all predictors she considered (see the original paper for more detailed descriptions). For our experiments, we randomly split the dataset into a 70% training and 30% test set, resulting in 954 training items and 408 test items. 3 Model The objective of the model is to predict mean inference strength rating i given an utterance (a sequence of words) U = {w1, w2, ..., wN}. We rescale the 1-to-7 Likert scale ratings to the interval [0, 1]. Figure 1 shows the overall model architecture. The model is a sentence classification model akin to the model proposed by Lin et al. (2017). It first embeds the utterance tokens using pre-trained embedding models, and then forms a sentence representation by passing the embedded tokens through a 2-layer bidirectional LSTM network (biLSTM) (Hochreiter and Schmidhuber, 1997) with dropout (Srivastava et al., 2014) followed by a self-attention mechanism that provides a weighted average of the hidden states of the topmost biLSTM layer. This sentence representation is then passed through a transformation layer with a sigmoid activation function, which outputs the predicted score in the interval [0, 1]. 4 Experiments 4.1 Training We used 5-fold cross-validation on the training data to optimize the following hyperparameters. Figure 1: Model architecture. Word embedding model: 100d GloVe (Pennington et al., 2014), 1024d ELMo (Peters et al., 2018; Gardner et al., 2018), 768d BERT-base, 1024d BERT-large (Devlin et al., 2019; Wolf et al., 2019). Output layer of word embedding models: [1, 3] for ELMo, [1, 12] for BERT-base, and [1, 24] for BERT-large. Dimension of LSTM hidden states: {100, 200, 400, 800}. Dropout rate in LSTM: {0.1, 0.2, 0.3, 0.4}. We first optimized the output layer parameter for each contextual word embedding model while keeping all other parameters fixed. We then optimized the other parameters for each embedding model by computing the average correlation between the model predictions and the human ratings across the five cross-validation folds. Architectural variants. We also evaluated all combinations of two architectural variants: First, we evaluated models in which we included the attention layer (LSTM+ATTENTION) or simply used the final hidden state of the LSTM (LSTM) as a sentence representation. Second, since participants providing inference strength ratings also had access to 10 utterances from the preceding conversational context, we also compared models that make predictions based only the target utterance with the some-NP and models that make predictions based on target utterances and the preceding conversational context. For the models using GloVe and ELMo, we prepended the conversational context to the target utterance to obtain a joint context and utterance embedding. For models using BERT, we made use of the fact that BERT had been trained to jointly embed two sentences or documents, and we obtained embeddings for the tokens in the target 5390 utterance by feeding the target utterance as the first document and the preceding context as the second document into the BERT encoder. We discarded the hidden states of the preceding context and only used the output of BERT for the tokens in the target utterance. Implementation details. We implemented the model in PyTorch (Paszke et al., 2017). We trained the model using the Adam optimizer (Kingma and Ba, 2015) with default parameters and a learning rate of 0.001, minimizing the mean squared error of the predicted ratings. In the no-context experiments, we truncated target utterances longer than 30 tokens, and in the experiments with context, we truncated the beginning of the preceding context such that the number of tokens did not exceed 150. Evaluation. We evaluated the model predictions in terms of their correlation r with the human inference strength ratings. As mentioned above, we optimized the hyperparameters using cross validation. We then took the best set of parameters and trained a model on all the available training data and evaluated that model on the held-out data. 4.2 Tuning results Not surprisngly, we find that the attention layer improves predictions and that contextual word embeddings lead to better results than the static GloVe embeddings. We also find that including the conversational context does not improve predictions (see Appendix A, for learning curves of all models, and Section 6, for a discussion of the role of conversational context). Otherwise, the model is quite insensitive to hyperparameter settings: neither the dimension of the hidden LSTM states nor the dropout rate had considerable effects on the prediction accuracy. We do find, however, that there are differences depending on the BERT and ELMo layer that we use as word representations. We find that higher layers work better than lower layers, suggesting that word representations that are influenced by other utterance tokens are helpful for this task. Based on these optimization runs, we chose the model with attention that uses the BERT-large embeddings but no conversational context for the subsequent experiments and analyses. 4.3 Test results Figure 2 shows the correlation between the best model according to the tuning runs (now trained on all training data) and the empirical ratings on Figure 2: Correlation between empirical ratings and predictions of the BERT-LARGE LSTM+ATTENTION model on held-out test items. the 408 held-out test items. As this plot shows, the model predictions fall within a close range of the empirical ratings for most of the items (r = 0.78).3 Further, similarly as in the empirical data, there seem to be two clusters in the model predictions: one that includes lower ratings and one that includes higher ratings, corresponding to strong and weak scalar inferences, respectively. The only systematic deviation appears to be that the model does not predict any extreme ratings – almost all predictions are greater than 2 or less than 6, whereas the empirical ratings include some cases outside of this range. Overall, these results suggest that the model can learn to closely predict the strength of scalar inferences. However, this result by itself does not provide evidence that the model learned associations between linguistic features and inference strength, since it could also be that, given the large number of parameters, the model learned spurious correlations independent of the empirically established feature-strength associations. To investigate whether the model learned the expected associations, we probed the model’s behavior in multiple ways, which we discuss next. 5 Model behavior analyses Minimal pair analysis. As a first analysis, we constructed artificial minimal pairs that differed along several factors of interest and compared the model predictions. Such methods have been recently used to probe, for example, what kind of 3For comparison, we estimated how well the human ratings correlated through a bootstrapping analysis: We re-sampled the human ratings for each item and computed the average correlation coefficient between the original and the re-sampled datasets, which we found to be approximately 0.93. 5391 syntactic dependencies different types of recurrent neural network language models are capable of encoding or to what extent sentence vector representations capture compositional meanings (e.g., Linzen et al. 2016; Gulordava et al. 2018; Chowdhury and Zamparelli 2018; Ettinger et al. 2018; Marvin and Linzen 2018; Futrell et al. 2019; Wilcox et al. 2019), and also allow us to probe whether the model is sensitive to controlled changes in the input. We constructed a set of 25 initial sentences with some-NPs. For each sentence, we created 32 variants that differed in the following four properties of the some-NP: subjecthood, partitive, pre-nominal modification, and post-nominal modification. For the latter three features, we either included or excluded of the or the modifier, respectively. For example, the version in (7a) includes of the whereas the version in (7b) lacks the partitive feature. To manipulate subjecthood of the some-NP, we created variants in which some was either the determiner in the subject NP as in (7) or in the object-NP as in (8). We also created passive versions of each of these variants (9-10). Each set of sentences included a unique main verb, a unique pair of NPs, and unique modifiers. The full list of sentences can be found in Appendix C. (7) a. Some of the (organic) farmers (in the mountains) milked the brown goats who graze on the meadows. b. Some (organic) farmers (in the mountains) milked the brown goats who graze on the meadows. (8) The organic farmers in the mountains milked some (of the) (brown) goats (who graze on the meadows). (9) The brown goats who graze on the meadows were milked by some (of the) (organic) farmers (in the mountains). (10) Some (of the) (brown) goats (who graze on the meadows) were milked by the organic farmers in the mountains. Figure 3 shows the model predictions for the manually constructed sentences grouped by the presence of a partitive construction, the grammatical function of the some-NP, and the presence of a modifier. As in the natural dataset from Degen (2015), sentences with a partitive received higher predicted ratings than sentences without a partitive; sentences with subject some-NPs received higher predicted ratings than sentences with nonsubject some-NPs; and sentences with a modified head noun in the some-NP received lower predictions than sentences with an unmodified some-NP. ● ● ● ● ● ● Modification Partitive Subjecthood modified unmodified partitive non−partitive subject other 3 4 5 6 Prediction Figure 3: Average model predictions on manually constructed sentences, grouped by presence of partitives, by grammatical function of the some-NP, and by presence of nominal modifiers. Semi-transparent dots show predictions on individual sentences. All these results provide evidence that the model learned the correct associations. This is particularly remarkable considering the train-test mismatch: the model was trained on noisy transcripts of spoken language that contained many disfluencies and repairs, and was subsequently tested on clean written sentences. Regression analysis. In the minimal pair analysis above we only investigated model predictions for three factors. As a second analysis, we therefore investigated whether the predictions of the best neural network model explain the variance explained by the linguistic features that modulate inference strength. To this end, we used a slightly simplified4 Bayesian implementation of the mixedeffects model by Degen (2015) that predicted inference strength ratings from hand-mined features. We used the brms (B¨urkner, 2017) and STAN (Carpenter et al., 2017) packages and compared this original model to an extended model that included both all of the predictors of the original model as well as the the output of the above NN model as a predictor. For this comparison, we investigated whether the magnitude of a predictor in the original model significantly decreased in the extended model with the NN predictor, based on the reasoning that if the NN predictions explain the variance previously explained by these manually coded pre4We removed by-item random intercepts and by-subject random slopes to facilitate inference. This simplification yielded almost identical estimates as the original model by Degen (2015). 5392 *** ** * *** *** *** *** * NN prediction Linguistic mention:Subjecthood: Modification Subjecthood:Modification Linguistic mention:Modification Linguistic mention:Subjecthood Partitive:Strength Utterance length Modification Subjecthood Linguistic mention Strength Partitive −0.5 0.0 0.5 1.0 Coefficient estimate Parameter Regression model original model extended model Figure 4: Maximum a posteriori estimates and 95%-credible intervals of coefficients for original and extended Bayesian mixed-effects regression models predicting the inference strength ratings. */**/*** indicate that the probability of the coefficient of the original model having a larger magnitude than the coefficient of the extended model is less than 0.05, 0.01, and 0.001, respectively. dictors, then the original predictor should explain no or less additional variance. We approximated the probability that the magnitude of the coefficient for the predictor i (βi) in the extended model including the NN predictor is smaller than the coefficient in the original model, P(|βextended i | < |βoriginal i |), by sampling values for each coefficient from the distributions of the original and the extended models and comparing the magnitude of the sampled coefficients. We repeated this process 1,000,000 times and treated the simulated proportions as approximate probabilities. An issue with this analysis is that estimating the regression model only on the items in the heldout test set yields very wide credible intervals for some of the predictors–in particular for some of the interactions–since the model infers these values from very little data. We therefore performed this regression analysis (and the subsequent analyses) on the entire data. However, while we estimated the regression coefficients from all the data, we crucially obtained the NN predictions through 6fold cross-validation (without additional tuning of hyperparameters), so that the NN model always made predictions on data that it had not seen during training. This did yield the same qualitative results as the analyses only performed on the held-out test items (see Appendix B) but it also provided us with narrower credible intervals that highlight the differences between the coefficient estimates of the two models. Figure 4 shows the estimates of the coefficients in the original model and the extended model. We find that the NN predictions explain some or all of the variance originally explained by many of the manually coded linguistic features: the estimated magnitude of the predictors for partitive, determiner strength, linguistic mention, subjecthood, modification, utterance length, and two of the interaction terms decreased in the extended model. These results provide additional evidence that the NN model indeed learned associations between linguistic features and inference strength rather than only explaining variance caused by individual items. This is particularly true for the grammatical and lexical features; we find that the NN predictor explains most of the variance originally explained by the partitive, subjecthood, and modification predictors. More surprisingly, the NN predictions also explain a lot of the variance originally explained by the determiner strength predictor, which was empirically determined by probing human interpretation and is not encoded explicitly in the surface form utterance.5 One potential explanation for this is that strong and weak some have different context distributions. For instance, weak some occurs in existential there constructions and with individuallevel predicates, whereas strong some tends not to (Milsark, 1974; McNally and Geenhoven, 1998; Carlson, 1977). Since pre-trained word embedding models capture a lot of distributional information, the NN model is presumably able to learn this association. 5As explained above, Degen (2015) obtained strength ratings by asking participants to rate the similarity of the original utterance and an utterance without the determiner some (of). 5393 0.0 0.1 0.2 0.3 1 5 10 15 20 25 Position in Utterance Mean Attention Weight Token some other 0.02 0.04 0.06 0.08 1 5 10 15 20 25 Position in Utterance Subjecthood subject other 0.2 0.4 0.6 raw normalized "some of" other "of" Figure 5: Left: Average attention weights at each token position for some and other tokens. Center: Average attention weights at each token position for utterances with subject and non-subject some-NPs. Right: Average attention weights of of-tokens in partitive some-NPs and weights of other of-tokens. In the normalized cases, we take only the utterances with multiple of-tokens into account and re-normalize the attention weights across all of-tokens in one utterance. Error bars indicate 95% bootstrapped confidence intervals. Attention weight analysis. As a final type of analysis, we analyzed the attention weights that the model used for combining the token embeddings to a sentence embedding. Attention weight analyses have been successfully used for inspecting and debugging model decisions (e.g., Lee et al., 2017; Ding et al., 2017; Wiegreffe and Pinter, 2019; Vashishth et al., 2019; but see Serrano and Smith, 2019, and Jain and Wallace, 2019, for critical discussions of this approach). Based on these results, we expected the model to attend more to tokens that are relevant for making predictions.6 Given that many of the hand-mined features that predict inference strength occur within or in the vicinity of the some-NP, we should therefore expect the model to attend most to the some-NP. To test this, we first explored whether the model attended on average more to some than to other tokens in the same position. Further, we exploited the fact that in English, subjects generally occur early in a sentence. If the model attends to the vicinity of the some-NP, the average attention weights should be higher at early positions in utterances with a sub6As pointed out by one of the reviewers, given the transformer architecture, BERT token representations are influenced by numerous tokens of the input sentence and therefore it could be that the output representation of the i-th token ultimately contains very little information about the i-th token that was input to the model. Consequently, it could be that the attention weights do not provide information about which tokens the model attends to. To rule out this possibility, we also conducted the attention weight analysis for the model using static GloVe embeddings, which always exclusively represent the input token, and we found the same qualitative patterns as reported in this section, suggesting that the attention weights provide information about the tokens that are most informative for making predictions. Nevertheless, we do want to caution researchers from blindly trusting attention weight analyses and recommend using this type of analysis only in combination with other types of analyses as we have done in this work. ject some-NP compared to utterances with a nonsubject some-NP, and conversely for late utterance positions. We thus compared the average attention weights for each position across utterances with subject versus non-subject some-NPs. To make sure that any effects were not only driven by the attention weight of the some-tokens, we set the attention weights of the token corresponding to some to 0 and re-normalized the attention weights for this analysis. Further, since the attention weights are dependent on the number of tokens in the utterance, it is crucial that the average utterance length across the two compared groups be matched. We addressed this by removing outliers and limiting our analysis to utterances up to length 30 (1,028 utterances), which incidentally equalized the number of tokens across the two groups. These exclusions resulted in tiny differences in the average attention weights, but the qualitative patterns are not affected. The left panel of Figure 5 shows the average attention weight by position for some versus other tokens. The model assigns much higher weight to some. The center panel of Figure 5 shows the average attention weight by position for subject vs. non-subject some-NP utterances. The attention weights are generally higher for tokens early in the utterance,7 but the attention weights of utterances with a subject some-NP are on average higher for tokens early in the utterance compared to utterances with the some-NP in non-subject positions. Both of these findings provide evidence that the model assigns high weight to the tokens within and 7This is in part an artifact of shorter utterances which distribute the attention weights among fewer tokens. 5394 surrounding the some-NP.8 In a more targeted analysis to assess whether the model learned to use the partitive feature, we examined whether the model assigned higher attention to the preposition of in partitive some-NPs compared to when of occurred elsewhere. As utterance length was again a potential confound, we conducted the analysis separately on the full set of utterances with raw attention weights and on a subset that included only utterances with at least two instances of of (128 utterances), in which we renormalized the weights of of-tokens to sum to 1. Results are shown in the right panel of Figure 5. The attention weights were higher for of tokens in partitive some-NPs, suggesting that the model learned an association between partitive of in someNPs and inference strength. 6 Context, revisited In the tuning experiments above, we found that including the preceding conversational context in the input to the model did not improve or lowered prediction accuracy.9 At the same time, we found that the model is capable of making accurate predictions in most cases without taking the preceding context into account. Taken together, these results suggest either that the conversational context is not necessary and one can draw inferences from the target utterance alone, or that the model does not make adequate use of the preceding context. Degen (2015) did not systematically investigate whether the preceding conversational context was used by participants judging inference strength. To assess the extent to which the preceding context in this dataset affects inference strength, we re-ran her experiment, but without presenting participants with the preceding conversational context. We recruited 680 participants on Mechanical Turk who 8The regression analysis suggests that the model learned to make use of the subjecthood feature and previous work on probing behavior of contextual word representations has found that such models are capable of predicting dependency labels, including subjects (e.g., Liu et al., 2019). We therefore also hypothesize that the representations of tokens that are part of a subject some-NP contain information about the subjecthood status. This in return could be an important feature for the output layer of the model and therefore be providing additional signal for the model to attend to these tokens. 9As suggested by a reviewer, we conducted post-hoc experiments in which we limited the conversational context to the preceding 2 or 5 utterances, which presumably have a higher signal-to-noise ratio than a larger conversational context of 10 preceding utterances. In these experiments, we again found that including the conversational context did not improve model predictions. each judged 20 or 22 items, yielding 10 judgments per item. If the context is irrelevant for drawing inferences, then mean inference strength ratings should be very similar across the two experiments, suggesting the model may have rightly learned to not utilize the context. If the presence of context affects inference strength, ratings should differ across experiments, suggesting that the model’s method of integrating context is ill-suited to the task. The new, no-context ratings correlated with the original ratings (r = 0.68, see Appendix D) but were overall more concentrated towards the center of the scale, suggesting that in many cases, participants who lacked information about the conversational context were unsure about the strength of the scalar inference. Since the original dataset exhibited more of a bi-modal distribution with fewer ratings at the center of the scale, this suggests that the broader conversational context contains important cues to scalar inferences. For our model, these results suggest that the representation of the conversational context is inadequate, which highlights the need for more sophisticated representations of linguistic contexts beyond the target utterance.10 We further find that the model trained on the original dataset is worse at predicting the no-context ratings (r = 0.66) than the original ratings (r = 0.78), which is not surprising considering the imperfect correlation between ratings across experiments, but also provides additional evidence that participants indeed behaved differently in the two experiments. 7 Conclusion and future work We showed that despite lacking specific pragmatic reasoning abilities, neural network-based sentence encoders are capable of harnessing the linguistic signal to learn to predict human inference strength ratings from some to not all with high accuracy. Further, several model behavior analyses provided consistent evidence that the model learned associations between previously established linguistic features and the strength of scalar inferences. In an analysis of the contribution of the conversational context, we found that humans make use of the preceding context whereas the models we considered failed to do so adequately. Considering the 10The representation of larger linguistic context is also important for span-based question-answer (QA) systems (e.g., Hermann et al., 2015; Chen, 2018; Devlin et al., 2019) and adapting methods from QA to predicting scalar inferences would be a promising extension of the current model. 5395 importance of context in drawing both scalar and other inferences in communication (Grice, 1975; Clark, 1992; Bonnefon et al., 2009; Zondervan, 2010; Bergen and Grodner, 2012; Goodman and Stuhlm¨uller, 2013; Degen et al., 2015), the development of appropriate representations of larger context is an exciting avenue for future research. We also only considered the supervised setting in which the model was trained to predict inference strength. It would be interesting to investigate how much supervision is necessary and, for example, to what extent a model trained to perform another task such as predicting natural language inferences is able to predict scalar inferences (see Jiang and de Marneffe (2019b) for such an evaluation of predicting speaker commitment, and Jeretiˇc et al. (2020) for an evaluation of different NLI models for predicting lexically triggered scalar inferences). One further interesting line of research would be to extend this work to other pragmatic inferences. Recent experimental work has shown that inference strength is variable across scale and inference type (Doran et al., 2012; van Tiel et al., 2016). We treated some as a case study in this work, but none of our modeling decisions are specific to some. It would be straightforward to train similar models for other types of inferences. Lastly, the fact that the attention weights provided insights into the model’s decisions suggests possibilities for using neural network models for developing more precise theories of pragmatic language use. Our goal here was to investigate whether neural networks can learn associations for already established linguistic features but it would be equally interesting to investigate whether such models could be used to discover new features, which could then be verified in experimental and corpus work, potentially providing a model-driven approach to experimental and formal pragmatics. Acknowledgements We thank the anonymous reviewers for their thoughtful feedback. We also gratefully acknowledge Leyla Kursat for collecting the no-context inference strength ratings, and we thank Jesse Mu, Shyamal Buch, Peng Qi, Marie-Catherine de Marneffe, Tal Linzen, and the members of the ALPS lab and the JHU Computational Psycholinguistics group for helpful discussions. References Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 1173–1182. Jon Barwise and Robin Cooper. 1981. Generalized quantifiers and natural language. Linguistics and Philosophy, 4(2):159–219. Leon Bergen and Daniel J. Grodner. 2012. Speaker knowledge influences the comprehension of pragmatic inferences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(5):1450– 60. Jean-Franc¸ois Bonnefon, Aidan Feeney, and Ga¨elle Villejoubert. 2009. When some is actually all: Scalar inferences in face-threatening contexts. Cognition, 112(2):249–258. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 632–642. Paul-Christian B¨urkner. 2017. brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80(1):1–28. Greg N. Carlson. 1977. A unified analysis of the english bare plural. Linguistics and Philosophy, 1(3):413–456. Bob Carpenter, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. Stan: A probabilistic programming language. Journal of Statistical Software, 76(1). Danqi Chen. 2018. Neural Reading Comprehension and Beyond. Ph.D. thesis, Stanford University. Shammur Absar Chowdhury and Roberto Zamparelli. 2018. RNN simulations of grammaticality judgments on long-distance dependencies. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018), pages 133– 144. Herbert H Clark. 1992. Arenas of language use. University of Chicago Press. Reuben Cohn-Gordon, Noah D. Goodman, and Christopher Potts. 2018. Pragmatically informative image captioning with character-level inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2018), pages 439–443. 5396 Alexis Conneau, Guillaume Lample, Ruty Rinott, Holger Schwenk, Ves Stoyanov, Adina Williams, and Samuel R. Bowman. 2018. XNLI: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pages 2475–2485. Judith Degen. 2015. Investigating the distribution of some (but not all) implicatures using corpora and web-based methods. Semantics and Pragmatics, 8:11–1. Judith Degen, Michael Henry Tessler, and Noah D. Goodman. 2015. Wonky worlds: Listeners revise world knowledge when utterances are odd. In Proceedings of the 37th Annual Conference of the Cognitive Science Society (CogSci 2015), pages 548– 553. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pages 4171–4186. Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pages 1150–1159. Ryan Doran, Gregory Ward, Meredith Larson, Yaron McNabb, and Rachel E. Baker. 2012. A novel experimental paradigm for distinguishing between what is said and what is implicated. Language, 88(1):124– 154. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–211. Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018), pages 1790–1801. Michael Franke and Gerhard J¨ager. 2016. Probabilistic pragmatics, or why Bayes’ rule is probably important for pragmatics. Zeitschrift f¨ur Sprachwissenschaft, 35(1):3–44. Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pages 32–42. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS). Gerald Gazdar. 1979. Pragmatics: Implicature, Presupposition and Logical Form. Academic Press. John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and development. In Proceedings of the 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 92). Noah D. Goodman and Michael C. Frank. 2016. Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20(11):818– 829. Noah D. Goodman and Andreas Stuhlm¨uller. 2013. Knowledge and implicature: Modeling language understanding as social cognition. Topics in Cognitive Science, 5(1):173–184. Herbert P. Grice. 1975. Logic and conversation. In Peter Cole and Jerry L. Morgan, editors, Syntax and Semantics, Vol. 3, Speech Acts, pages 41–58. Academic Press, New York. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2018), pages 1195–1205. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28 (NeurIPS 2015). Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Laurence Horn. 1997. All John’s children are as bald as the king of France: Existential import and the geometry of opposition. In CLS 33, pages 155–179. Laurence R. Horn. 1984. Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature. In Deborah Schiffrin, editor, Meaning, Form, and Use in Context: Linguistic Applications, pages 11–4. Georgetown University Press, Washington, D.C. Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pages 3543– 3556. 5397 Paloma Jeretiˇc, Alex Warstadt, Suvrat Bhooshan, and Adina Williams. 2020. Are natural language inference models IMPPRESsive? Learning IMPlicature and PRESupposition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020). Nanjiang Jiang and Marie-Catherine de Marneffe. 2019a. Do you know that florence is packed with visitors? evaluating state-of-the-art models of speaker commitment. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), pages 4208–4213. Nanjiang Jiang and Marie-Catherine de Marneffe. 2019b. Evaluating BERT for natural language inference: A case study on the CommitmentBank. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6085– 6090. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2015). Jaesong Lee, Joong-Hwi Shin, and Jun-Seok Kim. 2017. Interactive visualization and manipulation of attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 121–126. Stephen C. Levinson. 2000. Presumptive meanings: The theory of generalized conversational implicature. MIT press. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017). Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pages 1073–1094. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pages 1192–1202. Louise McNally and Veerle Van Geenhoven. 1998. Redefining the weak/strong distinction. Gary Milsark. 1974. Existential sentences in English. Ph.D. thesis, MIT. Will Monroe, Robert X.D. Hawkins, Noah D. Goodman, and Christopher Potts. 2017. Colors in context: A pragmatic neural model for grounded language understanding. Transactions of the Association for Computational Linguistics, 5:325–338. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NeurIPS Autodiff Workshop 2017. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNL 2014), pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2018), pages 2227–2237. Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018. Neural models of factuality. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2018), pages 731–744. Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), pages 2931–2951. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Bob van Tiel, Emiel van Miltenburg, Natalia Zevakhina, and Bart Geurts. 2016. Scalar diversity. Journal of Semantics, 33(1):137–175. Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention interpretability across NLP tasks. arXiv preprint arXiv:1909.11218. Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019a. Investigating BERT’s knowledge of language: Five analysis methods with NPIs. In Proceedings of the 5398 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), pages 2877–2887. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019b. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019), pages 11–20. Ethan Wilcox, Roger Levy, and Richard Futrell. 2019. What syntactic structures block dependencies in RNN language models? In Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019). Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. Arjen Zondervan. 2010. Scalar Implicatures or Focus: An Experimental Approach. LOT Publications, Utrecht. 5399 A Hyperparameter tuning Figure 6 shows the learning curves averaged over the 5 cross-validation tuning runs for models using different word embeddings. As these plots show, the attention layer improves predictions; contextual word embeddings lead to better results than the static GloVe embeddings; and including the conversational context does not improve predictions and in some cases even lowers prediction accuracy. BERT−base BERT−large ELMo GloVe 0 40 80 120 160 200 0 40 80 120 160 200 0 40 80 120 160 200 0 40 80 120 160 200 0.4 0.5 0.6 0.7 0.8 Epoch Average validation correlation Sentence representation LSTM LSTM+Attention Preceding context yes no Figure 6: Correlation between each model’s predictions on valuation set and empirical means, by training epoch. B Regression analysis on held-out test data Figure 7 shows the estimates of the predictors in the original and extended Bayesian mixed-effects models estimated only on the held-out test data. We find the same qualitative effects as in Figure 4, but since these models were estimated on much less data (only 408 items), there is a lot of uncertainty in the estimates and therefore quantitative comparisons between the coefficients of the different models are less informative. *** * *** * NN prediction Linguistic mention:Subjecthood: Modification Subjecthood:Modification Linguistic mention:Modification Linguistic mention:Subjecthood Partitive:Strength Utterance length Modification Subjecthood Linguistic mention Strength Partitive −1.0 −0.5 0.0 0.5 1.0 Coefficient estimate Parameter Regression model original model extended model Figure 7: Maximum a posteriori estimates and 95%-credible intervals of coefficients for original and extended Bayesian mixed-effects regression models predicting the inference strength ratings on the held-out test set. */**/*** indicate that the probability of the coefficient of the original model having a larger magnitude than the coefficient of the extended model is less than 0.05, 0.01, and 0.001, respectively. 5400 C List of manually constructed sentences Tables 1 and 2 show the 25 manually created sentences for the analyses described in the minimal pairs analysis in Section 5. As described in the main text, we created 16 variants of the sentence with the some-NP in subject position (sentences in the left column), and 16 variants of the sentence with the some-NP in object position (sentences in the right column), yielding in total 800 examples. 5401 Some of the attentive waiters at the gallery opening poured the white wine that my friend really likes. The attentive waiters at the gallery opening poured some of the white wine that my friend really likes. Some of the experienced lawyers in the firm negotiated the important terms of the acquisition. The experienced lawyers in the firm negotiated some of the important terms of the acquisition. Some of the award-winning chefs at the sushi restaurant cut the red salmon from Alaska. The award-winning chefs at the sushi restaurant cut some of the red salmon from Alaska. Some of the brave soldiers who were conducting the midnight raid warned the decorated generals who had served in a previous battle. The brave soldiers who were conducting the midnight raid warned some of the decorated generals who had served in a previous battle. Some of the eccentric scholars from the local college returned the old books written by Camus. The eccentric scholars from the local college returned some of the old books written by Camus. Some of the entertaining magicians with top hats shuffled the black cards with dots. The entertaining magicians with top hats shuffled some of the black cards with dots. Some of the convicted doctors from New York called the former patients with epilepsy. The convicted doctors from New York called some of the former patients with epilepsy. Some of the popular artists with multiple albums performed the fast songs from their first album. The popular artists with multiple albums performed some of the fast songs from their first album. Some of the angry senators from red states impeached the corrupt presidents from the Republican party. The angry senators from red states impeached some of the corrupt presidents from the Republican party. Some of the underfunded researchers without permanent employment transcribed the recorded conversations that they collected while doing fieldwork. The underfunded researchers without permanent employment transcribed some of the recorded conversations that they collected while doing fieldwork. Some of the sharp psychoanalysts in training hypnotized the young clients with depression. The sharp psychoanalysts in training hypnotized some of the young clients with depression. Some of the harsh critics from the Washington Post read the early chapters of the novel. The harsh critics from the Washington Post read some of the early chapters of the novel. Some of the organic farmers in the mountains milked the brown goats who graze on the meadows. The organic farmers in the mountains milked some of the brown goats who graze on the meadows. Some of the artisanal bakers who completed an apprenticeship in France kneaded the gluten-free dough made out of spelt. The artisanal bakers who completed an apprenticeship in France kneaded some of the glutenfree dough made out of spelt. Some of the violent inmates in the high-security prison reported the sleazy guards with a history of rule violations. The violent inmates in the high-security prison reported some of the sleazy guards with a history of rule violations. Table 1: Manually constructed sentences used in the minimal pair analyses. Sentences in the left column have a some-NP in subject position; sentences on the right have a some-NP object position. 5402 Some of the eager managers in the company instructed the hard-working sales representatives in the steel division about the new project management tool. The eager managers in the company instructed some of the hard-working sales representatives in the steel division about the new project management tool. Some of the brilliant chemists in the lab oxidized the shiny metals extracted from ores. The brilliant chemists in the lab oxidized some of the shiny metals extracted from ores. Some of the adventurous pirates on the boat found the valuable treasure that had been buried in the sand. The adventurous pirates on the boat found some of the valuable treasure that had been buried in the sand. Some of the mischievous con artists at the casino tricked the elderly residents of the retirement home. The mischievous con artists at the casino tricked some of the elderly residents of the retirement home. Some of the persistent recruiters at the conference hired the smart graduate students who just started a PhD as interns. The persistent recruiters at the conference hired some of the smart graduate students who just started a PhD as interns. Some of the established professors in the department supported the controversial petitions that were drafted by the student union. The established professors in the department supported some of the controversial petitions that were drafted by the student union. Some of the muscular movers that were hired by the startup loaded the adjustable standing desks made out of oak onto the truck. The muscular movers that were hired by the startup loaded some of the adjustable standing desks made out of oak onto the truck. Some of the careful secretaries at the headquarter mailed the confidential envelopes with the bank statements. The careful secretaries at the headquarter mailed some of the confidential envelopes with the bank statements. Some of the international stations in South America televised the early games of the soccer cup. The international stations in South America televised some of the early games of the soccer cup. Some of the wealthy investors of the fund excessively remunerated the successful brokers working at the large bank. The wealthy investors of the fund excessively remunerated some of the successful brokers working at the large bank. Table 2: Manually constructed sentences used in the minimal pair analyses (continued). 5403 D Results from no-context experiment Figure 8 shows the correlation between the mean inference strength ratings for each item in the experiment from Degen (2015) and the mean strength ratings from the new no-context experiment, discussed in Section 6. ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 2 4 6 2 4 6 Mean rating with context Mean rating without context Figure 8: Mean inference strength ratings for items without context (new) against items with context (original), r = .68.
2020
479
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 505–514 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 505 GCAN: Graph-aware Co-Attention Networks for Explainable Fake News Detection on Social Media Yi-Ju Lu Department of Statistics National Cheng Kung University Tainan, Taiwan [email protected] Cheng-Te Li Institute of Data Science National Cheng Kung University Tainan, Taiwan [email protected] Abstract This paper solves the fake news detection problem under a more realistic scenario on social media. Given the source short-text tweet and the corresponding sequence of retweet users without text comments, we aim at predicting whether the source tweet is fake or not, and generating explanation by highlighting the evidences on suspicious retweeters and the words they concern. We develop a novel neural network-based model, Graph-aware CoAttention Networks (GCAN), to achieve the goal. Extensive experiments conducted on real tweet datasets exhibit that GCAN can significantly outperform state-of-the-art methods by 16% in accuracy on average. In addition, the case studies also show that GCAN can produce reasonable explanations. 1 Introduction Social media is indispensable in people’s daily life, where users can express themselves, access news, and interact with each other. Information can further spread through the social network. Opinions and sentiments on source stories can be reflected by user participation and interaction. The convenient and low-cost essence of social networking brings collective intelligence, but at the same time leads to a negative by-product, the propagation of misinformation such as fake news. Fake news is a kind of news story possessing intentionally false information on social media (Rashkin et al., 2017; Allcott and Gentzkow, 2017). The widespread of fake news can mislead the public, and produce unjust political, economic, or psychological profit for some parties (Horne and Adali, 2017; Allcott and Gentzkow, 2017). Data mining and machine learning techniques were utilized to detect fake news (Shu et al., 2017; Cha et al., 2020). Typical approaches rely on the content of new articles to extract textual features, such as n-gram and bag of words, and apply supervised learning (e.g., random forest and support vector machine) for binary classification (Shu et al., 2017). NLP researchers also learn advanced linguistic features, such as factive/assertive verbs and subjectivity (Popat, 2017) and writing styles and consistency (Potthast et al., 2018). Multi-modal context information is also investigated, such as user profiles (Yang et al., 2012; Liu and Wu, 2018) and retweet propagation (Ruchansky et al., 2017; Shu et al., 2019a). Nevertheless, there are still critical challenges in detecting fake news online. First, existing contentbased approaches (Castillo et al., 2011; Potthast et al., 2018; Shu et al., 2019a) require documents to be long text, e.g., news articles, so that the representation of words and sentences can be better learned. However, tweets on social media are usually short text (Yan et al., 2015), which produces severe data sparsity problem. Second, some stateof-the-art models (Ruchansky et al., 2017; Liu and Wu, 2018; Shu et al., 2019a) require a rich collection of user comments for every news story, to learn the opinions of retweeters, which usually provide strong evidences in identifying fake news. However, most users on social media tend to simply reshare the source story without leaving any comments (Kwak et al., 2010). Third, some studies (Ma et al., 2018) consider that the pathways of information cascade (i.e., retweets) in the social network are useful for classifying misinformation, and thus learn the representations of the tree-based propagation structures. However, it is costly to obtain the diffusion structure of retweets at most times due to privacy concerns (Li et al., 2018). Many users choose to hide or delete the records of social interactions. Fourth, if the service providers or the government agencies desire to inspect who are the suspicious users who support the fake news, and which topics do they concern in producing fake 506 news (Reis et al., 2019), existing models cannot provide explanations. Although dEFEND (Shu et al., 2019a) can generate reasonable explanation, it requires both long text of source articles and text of user comments. This paper deals with fake news detection under a more realistic scenario on social media. We predict whether a source tweet story is fake, given only its short text content and its retweet sequence of users, along with user profiles. That said, we detect fake news under three settings: (a) short-text source tweet, (b) no text of user comments, and (c) no network structures of social network and diffusion network. Moreover, we require the fake news detection model to be capable of explainability, i.e., highlighting the evidence when determining a story is fake. The model is expected to point out the suspicious retweeters who support the spreading of fake news, and highlight the words they especially pay attention to from the source tweet. To achieve the goal, we propose a novel model, Graph-aware Co-Attention Network (GCAN) 1. We first extract user features from their profiles and social interactions, and learn word embeddings from the source short text. Then we use convolutional and recurrent neural networks to learn the representation of retweet propagation based on user features. A graph is constructed to model the potential interactions between users, and the graph convolution network is used to learn the graph-aware representation of user interactions. We develop a dual co-attention mechanism to learn the correlation between the source tweet and retweet propagation, and the co-influence between the source tweet and user interaction. The binary prediction is generated based on the learned embeddings. We summarize the contributions as follows. (1) We study a novel and more realistic scenario of fake news detection on social media. (2) For accurate detection, we develop a new model, GCAN, to better learn the representations of user interactions, retweet propagation, and their correlation with source short text. (3) Our dual co-attention mechanism can produce reasonable explanations. (4) Extensive experiments on real datasets demonstrate the promising performance of GCAN, comparing to state-of-the-art models. The GCAN explainability is also exhibited in case studies. 1The Code of GCAN model is available and can be accessed via: https://github.com/l852888/GCAN We organize this paper as follows. Section 2 reviews the relevant approaches to fake news detection in social media. We describe the problem statement in Section 3. Then in Section 4, the details of our proposed GCAN model will be elaborated. Section 5 demonstrates the evaluation settings and results. We conclude this work in Section 6. 2 Related Work Content-based approaches rely on the text content to detect the truthfulness of news articles, which usually refer to long text. A variety of text characteristics are investigated for supervised learning, including TF-IDF and topic features (Castillo et al., 2011), language styles (e.g., part of speech, factive/assertive verbs, and subjectivity) (Popat, 2017), writing styles and consistency (Potthast et al., 2018), and social emotions (Guo et al., 2019). Zhao et al. (2015) find the enquiry phrases from user responses are useful, and Ma et al. (2016) use recurrent neural networks to learn better representations of user responses. User-based approaches model the traits of users who retweet the source story. Yang et al. (2012) extract account-based features, such as “is verified”, gender, hometown, and number of followers. Shu et al. (2019b) unveil user profiles between fake and real news are significantly different. CRNN (Liu and Wu, 2018) devise a joint recurrent and convolutional network model (CRNN) to better represent retweeter’s profiles. Session-based heterogeneous graph embedding (Jiang et al., 2018) is proposed to learn the traits of users so that they can be identified in shared accounts. However, since such a method relies on session information, it cannot be directly applied for fake news detection. Structure-based approaches leverage the propagation structure in the social network to detect fake news. Sampson et al. (2016) leverage the implicit information, i.e., hashtags and URLs, to connect conversations whose users do not have social links, and find such implicit info can improve the performance of rumor classification. Ma et al. (2017) create a kernel-based method that captures high-order patterns differentiating different types of rumors. Ma et al. (2018) develop a tree-structured recursive neural networks to learn the embedding of rumor propagation structure. Although multi-relational graph embedding methods (Feng et al., 2019; Wang and Li, 2019) are able to effectively learn how different types of entities (related to source news ar507 Table 1: Comparison of related studies. Column notations: news story texts (NS), response comments (RC), user characteristics (UC), propagation structure (PS), social network (SN), and model explainability (ME). For the NS column, “S” and “L” indicates short and long text, respectively. NS RC UC PS SN ME Ma et al. (2016) ✓(S) ✓ Ma et al. (2018) ✓(S) ✓ ✓ ✓ Liu and Wu (2018) ✓(S) ✓ ✓ Ruchansky et al. (2017) ✓(S) ✓ ✓ Shu et al. (2019a) ✓(L) ✓ ✓ ✓ Our work ✓(S) ✓ ✓ ✓ ✓ ticles) interact with each other in a heterogeneous information network for classification tasks, they cannot be applied for the inductive setting, i.e., detecting the truthfulness of new-coming tweets. Hybrid-based approaches consider and fuse multi-modal context information regarding the source tweets. CSI (Ruchansky et al., 2017) learns the sequential retweet features by incorporating response text and user profiles, and generates suspicious scores of users based on their social interactions. Wang et al. (2018) develop an event adversarial neural network to learn transferable features by removing the event-specific features, along with convolutional neural networks to extract textual and visual features. dEFEND (Shu et al., 2019a) jointly learns the sequential effect of response comments and the correlation between news content and comments, and use an attention mechanism to provide explainability. We compare our work and the most relevant studies in Table 1. The uniqueness of our work lies in: targeting at short text, requiring no user response comments, and allow model explainability. 3 Problem Statement Let Ψ = {s1, s2...s|Ψ|} be a set of tweet stories, and U = {u1, u2...u|U|} be a set of users. Each si ∈Ψ is a short-text document (also called the source tweet), given by si = {qi 1, qi 2, ..., qi li} indicating li words in story si. Each uj ∈U is associated with a user vector xj ∈Rd representing the user feature with d dimensions. When a news story si is posted, some users will share si and generate a sequence of retweet records, which is termed a propagation path. Given a news story si, we denote its propagation path as Ri = {..., (uj, xj, tj), ...}, where (uj, xj, tj) depicts j-th user uj (with their feature vector xj) 𝐅: product 𝐇𝑔: sum 𝐚𝑔: softmax ො𝐠: product 𝐅T: product 𝐇𝑠: sum 𝐚𝑠: softmax ො𝐬1: product 𝐟: concatenate FC Layer ො𝐲: prediction 𝐠1 𝐠2 𝐠𝑛 ... 𝐅T: product 𝐇𝑠: sum 𝐚𝑠: softmax ො𝐬2: product 𝐅: product 𝐇𝑐: sum 𝐚𝑐: softmax Ƹ𝐜: product 𝐬1 𝐬2 𝐬𝑚 ... 𝐜1 𝐜2 𝐜𝜋 ... 𝒉1 𝒉2 𝒉𝒏 ... 𝒗1 𝑣2 𝑣𝑚 GRU GRU GRU ... 𝐱1 𝐱2 𝐱𝑛 CNN CNN CNN ... 𝐞1 𝐞2 𝐞𝑚 𝐱1 𝐱2 𝐱𝑛 ... ... ... 𝐱1 𝐱2 𝐱𝑛 GRU GRU GRU ... ... ... 𝐱1 𝐞1 Source tweet 𝐞2 𝐞3 𝐞𝑚 ... 𝐱2 𝐱3 𝐱4 𝐱𝑛 ... ... Pooling Retweet Order Source-Interaction Co-Attention Source-Propagation Co-Attention GCN GCN GCN Graph-aware Representation Source Tweet Encoding CNN-based Propagation Representation GRU-based Propagation Representation Figure 1: The architecture of our GCAN model. who retweets story si, and j = 1, 2, ..., K (i.e., K = |Ri|). We denote the set of users who retweet story si as Ui. In Ri, we denote the user who originally shares si as u1 at time t1. For j > 1, user uj retweets si at tj (tj > t1). Each story si is associated with a binary label yi ∈{0, 1} to represent its truthfulness, where yi = 0 indicates story si is true, and yi = 1 means si is fake. Given a source tweet si, along with the corresponding propagation path Ri containing users uj who retweet si as well as their feature vectors xj, our goal is to predict the truthfulness yi of story si, i.e., binary classification. In addition, we require our model to highlight few users uj ∈Ui who retweet si and few words qi k ∈si that can interpret why si is identified as a true or fake one. 4 The Proposed GCAN Model We develop a novel model, Graph-aware CoAttention Networks (GCAN), to predict fake news based on the source tweet and its propagation-based users. GCAN consists of five components. The first is user characteristics extraction: creating features to quantify how a user participates in online social networking. The second is new story encoding: generating the representation of words in the source tweet. The third is user propagation representation: modeling and representing how the source tweet propagates by users using their extracted characteristics. The fourth is dual co-attention mechanisms: capturing the correlation between the source tweet and users’ interactions/propagation. The last is making prediction: generating the detection outcome by concatenating all learned representations. 508 4.1 User Characteristics Extraction To depict how users participate in social networking, we employ their metadata and profiles to define the feature vector xj of every user uj. The extracted features are listed as follows: (1) number of words in a user’s self-description, (2) number of words in uj’s screen name, (3) number of users who follows uj, (4) number of users that uj is following, (5) number of created stories for uj, (6) time elapsed after uj’s first story, (7) whether the uj account is verified or not, (8) whether uj allows the geo-spatial positioning, (9) time difference between the source tweet’s post time and uj’s retweet time, and (10) the length of retweet path between uj and the source tweet (1 if uj retweets the source tweet). Eventually, every user feature vector xj ∈Rv is generated, where v is the number of features. 4.2 Source Tweet Encoding The given source tweet is represented by a wordlevel encoder. The input is the one-hot vector of each word in story si. Since the length of every source story is different, we perform zero padding here by setting a maximum length m. Let E = [e1, e2, ..., em] ∈Rm be the input vector of source story, in which em is the one-hot encoding of the m-th word. We create a fullyconnected layer to generate word embeddings, V = [v1, v2, ..., vm] ∈Rd×m, where d is the dimensionality of word embeddings. The derivation of V is given by: V = tanh(WwE + bw) (1) where Ww is the matrix of learnable weights, and bc is the bias term. Then, we utilize Gating Recurrent Units (GRU) (Chung et al., 2014) to learn the words sequence representation from V. The source tweet representation learning can be depicted by: st = GRU(vt), t ∈{1, ..., m}, where m is the GRU dimensionality. We denote the source tweet representation as S = [s1, s2, ..., sm] ∈Rd×m. 4.3 User Propagation Representation The propagation of source tweet si is triggered by a sequence of users as time proceeds. We aim at exploiting the extracted user feature vectors xj, along with the user sequence spreading si, to learn user propagation representation. The underlying idea is that the user characteristics in real news propagations are different from those of fake ones. We make use of Gating Recurrent Units (GRU) and Convolutional Neural Network (CNN) to learn propagation representations. Here the input is the sequence of feature vectors of users retweeting si, denoted by PF(si) = ⟨x1, x2, ..., xt, ..., xn⟩, where n is the fixed length of observed retweets. If the number of users sharing si is higher than n, we take the first n users. If the number is lower than n, we resample users in PF(si) until its length equals to n. GRU-based Representation. Given the sequence of feature vectors PF(si) = ⟨..., xt, ..., ⟩, we utilize GRU to learn the propagation representation. Each GRU state has two inputs, the current feature vector xt and the previous state’s output vector ht−1, and one output vector ht. The GRUbased representation learning can be depicted by: ht = GRU(xt), t ∈{1, ..., n}, where n is the dimensionality of GRU. We generate the final GRUbased user propagation embedding h ∈Rd by average pooling, given by h = 1 n Pn t=1 ht. CNN-based Representation. We take advantage of 1-D convolution neural network to learn the sequential correlation of user features in PF(si). We consider λ consecutive users at one time to model their sequential correlation, i.e., ⟨xt, ..., xt+λ−1⟩. Hence the filter is set as Wf ∈Rλ×v. Then the output representation vector C ∈Rd×(t+λ−1) is given by C = ReLU(Wf · Xt:t+λ−1 + bf) (2) where Wf is the matrix of learnable parameters, ReLU is the activation function, Xt:t+λ−1 depicts sub-matrices whose first row’s index is from t = 1 to t = n −λ + 1, and bf is the bias term. 4.4 Graph-aware Propagation Representation We aim at creating a graph to model the potential interaction among users who retweet source story si. The idea is that some correlation between users with particular characteristics can reveal the possibility that the source tweet is fake. To fulfill such an idea, a graph Gi = (Ui, Ei) is constructed for the set of users who share source story si (i.e., Ui), where Ei is the corresponding edge set. Since the true interactions between users are unknown, we consider Gi is a fully-connected graph, i.e., ∀eαβ ∈Ei, uα ∈Ui, uβ ∈Ui, and uα ̸= uβ, |Ei| = n×(n−1) 2 . To incorporate user features in the graph, each edge eαβ ∈Ei is associated with 509 a weight ωαβ, and the weight is derived based on cosine similarity between user feature vectors xα and xβ, given by ωαβ = xα·xβ ∥xα∥∥xβ∥. We use matrix A = [ωαβ] ∈Rn×n to represent weights between any pair of nodes uα and uβ in graph Gi. A graph convolution network (GCN) layer (Kipf and Welling, 2017) is created based on the constructed graph Gi for source tweet si. A GCN is a multi-layer neural network that performs on graph data and generates embedding vectors of nodes according to their neighborhoods. GCN can capture information from a node’s direct and indirect neighbors through stacking layer-wise convolution. Given the matrix A for graph Gi, and X depicting the matrix of feature vectors for users in Gi, the new g-dimensional node feature matrix H(l+1) ∈Rn×g can be derived by H(l+1) = ρ( ˜AH(l)Wl), (3) where l is the layer number, ˜A = D−1 2 AD−1 2 is the normalized symmetric weight matrix (Dii = P j Aij), and Wl ∈Rd×g is the matrix of learnable parameters at the l-th GCN layer. ρ is an activation function, i.e., a ReLU ρ(x) = max(0, x). Here H(0) is set to be X. We choose to stack two GCN layers in derive the learned graph-aware representation, denoted as G ∈Rg×n. 4.5 Dual Co-attention Mechanism We think the evidence of fake news can be unveiled through investigating which parts of the source story are concerned by which kinds of retweet users, and fake clues can be reflected by how retweet users interact with each other. Therefore, we develop a dual co-attention mechanism to model the mutual influence between the source tweet (i.e., S = [s1, s2, ..., sm]) and user propagation embeddings (i.e., C = [c1, c2, ..., cn−λ+1] from Section 4.3), and between the source tweet and graph-aware interaction embeddings (i.e., G = [g1, g2, ..., gn] from Section 4.4). Equipped with co-attention learning, our model is capable of the explainability by looking into the attention weights between retweet users in the propagation and words in the source tweet. In other words, by extending the co-attention formulation (Lu et al., 2016), the proposed dual co-attention mechanism aims to attend to the source-tweet words and graphaware interaction users simultaneously (sourceinteraction co-attention), and also attend to the source-tweet words and propagated users simultaneously (source-propagation co-attention). Source-Interaction Co-attention. We first compute a proximity matrix F ∈Rm×n as: F = tanh(S⊤WsgG), where Wsg is a d × g matrix of learnable parameters. By treating the proximity matrix as a feature, we can learn to predict source and interaction attention maps, given by Hs = tanh(WsS + (WgG)F⊤) Hg = tanh(WgG + (WsS)F) (4) where Ws ∈Rk×d, Wg ∈Rk×g are matrices of learnable parameters. The proximity matrix F can be thought to transforming user-interaction attention space to source story word attention space, and vice versa for its transpose F⊤. Then we can generate the attention weights of source words and interaction users through the softmax function: as = softmax(w⊤ hsHs) ag = softmax(w⊤ hgHg) (5) where as ∈R1×m and ag ∈R1×n are the vectors of attention probabilities for each word in the source story and each user in the interaction graph, respectively. whs, whg ∈R1×k are learnable weights. Eventually we can generate the attention vectors of source story words and interaction users through weighted sum using the derived attention weights, given by ˆs1 = m X i=1 as isi , ˆg = n X j=1 ag jgj (6) where ˆs1 ∈R1×d and ˆg ∈R1×g are the learned coattention feature vectors that depict how words in the source tweet are attended by users who interact with one another. Source-Propagation Co-attention. The process to generate the co-attention feature vectors, ˆs2 ∈R1×d and ˆc ∈R1×d, for the source story and user propagation, respectively, is the same as source-interaction co-attention, i.e., creating another proximity matrix to transform them into each other’s space. We skip the repeated details due to the page limit. Note that the GRU-based user representations are not used to learn the interactions with the source tweet. The reason is that how user profiles in the retweet sequence look like is also important, as suggested by CRNN (Liu and Wu, 2018), and should 510 Table 2: Statistics of two Twitter datasets. Twitter15 Twitter16 # source tweets 742 412 # true 372 205 # fake 370 207 # users 190,868 115,036 avg. retweets per story 292.19 308.70 avg. words per source 13.25 12.81 be emphasized separately. Nevertheless, the CNNbased user representations (i.e., features that depict the sequence of user profiles) has been used in the co-attention mechanism to learn their interactions with source tweet. 4.6 Make Prediction We aim at predicting fake news using the sourceinteraction co-attention feature vectors ˆs1 and ˆg, the source-propagation feature vectors ˆs2 and ˆc, and the sequential propagation feature vector h. Let f = [ˆs1, ˆg,ˆs2, ˆc, h] which is then fed into a multi-layer feedforward neural network that finally predicts the label. We generate the binary prediction vector ˆy = [ˆy0, ˆy1], where ˆy0 and ˆy1 indicate the predicted probabilities of label being 0 and 1, respectively. It can be derived through ˆy = softmax(ReLU(fWf + bf)), (7) where Wf is the matrix of learnable parameters, and bf is the bias term. The loss function is devised to minimize the cross-entropy value: L(Θ) = −y log(ˆy1) −(1 −y) log(1 −ˆy0) (8) where Θ denotes all learnable parameters in the entire neural network. We choose the Adam optimizer to learn Θ as it can determine the learning rate abortively. 5 Experiments We conduct experiments to answer three questions: (1) whether our GCAN model is able to achieve satisfactory performance of fake news detection, compared to state-of-the-art methods? (2) how does each component of GCAN contribute to the performance? (3) can GCAN generate a convincing explanation that highlights why a tweet is fake? 5.1 Datasets and Evaluation Settings Data. Two well-known datasets compiled by Ma et al. (2017), Twitter15 and Twitter16, are utilized. Each dataset contains a collection of source tweets, along with their corresponding sequences of retweet users. We choose only “true” and “fake” labels as the ground truth. Since the original data does not contain user profiles, we use user IDs to crawl user information via Twitter API. Competing Methods. We compare our GCAN with the state-of-the-art methods and some baselines, as listed below. (1) DTC (Castillo et al., 2011): a decision tree-based model combining user profiles and the source tweet. (2) SVM-TS (Ma et al., 2015): a linear support vector machine classifier that utilizes the source tweet and the sequence of retweet users’ profiles. (3) mGRU (Ma et al., 2016): a modified gated recurrent unit model for rumor detection, which learns temporal patterns from retweet user profile, along with the source’s features. (4) RFC (Kwon et al., 2017): an extended random forest model combining features from retweet user profiles and the source tweet. (5) CSI (Ruchansky et al., 2017): a state-of-the-art fake news detection model incorporating articles, and the group behavior of users who propagate fake news by using LSTM and calculating the user scores. (6) tCNN (Yang et al., 2018): a modified convolution neural network that learns the local variations of user profile sequence, combining with the source tweet features. (7) CRNN (Liu and Wu, 2018): a state-of-the-art joint CNN and RNN model that learns local and global variations of retweet user profiles, together with the resource tweet. (8) dEFEND (Shu et al., 2019a): a state-of-the-art co-attention-based fake news detection model that learns the correlation between the source article’s sentences and user profiles. Model Configuration. Our model is termed “GCAN”. To examine the effectiveness of our graph-aware representation, we create another version “GCAN-G”, denoting our model without the graph convolution part. For both our models and competing methods, we set the number of training epochs to be 50. The hyperparameter setting of GCAN is: number of retweet users = 40, word embedding dim = 32, GRU output dim = 32, 1-D CNN output filter size = 3, 1-D CNN output dim = 32, and GCN output dim = 32. The hyperparameters of competing methods are set by following the settings mentioned in respective studies. Metrics & Settings. The evaluation metrics include Accuracy, Precision, Recall, and F1. We randomly choose 70% data for training and 30% for testing. The conducted train-test is repeated 20 511 Table 3: Main results. The best model and the best competitor are highlighted by bold and underline, respectively. Twitter15 Twitter16 Method F1 Rec Pre Acc F1 Rec Pre Acc DTC 0.4948 0.4806 0.4963 0.4949 0.5616 0.5369 0.5753 0.5612 SVM-TS 0.5190 0.5186 0.5195 0.5195 0.6915 0.6910 0.6928 0.6932 mGRU 0.5104 0.5148 0.5145 0.5547 0.5563 0.5618 0.5603 0.6612 RFC 0.4642 0.5302 0.5718 0.5385 0.6275 0.6587 0.7315 0.6620 tCNN 0.5140 0.5206 0.5199 0.5881 0.6200 0.6262 0.6248 0.7374 CRNN 0.5249 0.5305 0.5296 0.5919 0.6367 0.6433 0.6419 0.7576 CSI 0.7174 0.6867 0.6991 0.6987 0.6304 0.6309 0.6321 0.6612 dEFEND 0.6541 0.6611 0.6584 0.7383 0.6311 0.6384 0.6365 0.7016 GCAN-G 0.7938 0.7990 0.7959 0.8636 0.6754 0.6802 0.6785 0.7939 GCAN 0.8250 0.8295 0.8257 0.8767 0.7593 0.7632 0.7594 0.9084 Improvement 15.0% 20.8% 18.1% 18.7% 19.3% 15.9% 3.8% 19.9% times, and the average values are reported. 5.2 Experimental Results Main Results. The main results are shown in Table 3. We can clearly find that the proposed GCAN significantly outperforms the best competing methods over all metrics across two datasets, improving the performance by around 17% and 15% on average in Twitter15 and Twitter16, respectively. Even without the proposed graph-aware representation, GCAN-G can improve the best competing method by 14% and 3% on average in Twitter15 and Twitter16, respectively. Such promising results prove the effectiveness of GCAN for fake news detection. The results also imply three insights. First, GCAN is better than GCAN-G by 3.5% and 13% improvement in Twitter15 and Twitter16, respectively. This exhibits the usefulness of graph-aware representation. Second, the dual co-attention mechanism in GCAN is quite powerful, as it clearly outperforms the best non-co-attention state-of-the-art model CSI. Third, while both GCAN-G and dEFEND are co-attention-based, additional sequential features learned from the retweet user sequence in GCAN-G can significantly boost the performance. Early Detection. We further report the performance (in only Accuracy due to page limit) by varying the number of observed retweet users per source story (from 10 to 50), as exhibited in Figure 2 and Figure 3. It can be apparently found that our GCAN consistently and significantly outperforms the competitors. Even with only ten retweeters, GCAN can still achieve 90% accuracy. Such results tell GCAN is able to generate accurate early detection of the spreading fake news, which is cru10 20 30 40 50 Number of users 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy Twitter15 GCAN GCAN-G dEFEND CSI CRNN Figure 2: Accuracy by # retweet users in Twitter15. 10 20 30 40 50 Number of users 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy Twitter16 GCAN GCAN-G dEFEND CSI CRNN Figure 3: Accuracy by # retweet users in Twitter16. cial when defending misinformation. Ablation Analysis. We report how each of GCAN component contributes by removing each one from the entire model. Below “ALL” denotes using all components of GCAN. By removing dual co-attention, GRU-based representation, graph-aware representation, and CNN-based representation, we have sub-models “-A”, “-R”, “-G”, 512 ter15 Twitter16 0.52 0.64 0.59 0.65 0.735 0.7 0.88 0.78 0.89 0.88 0.915 0.91 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Twitter15 Twitter16 Accuracy ‐S‐A ‐A ‐R ‐G ‐C ALL Figure 4: GCAN ablation analysis in Accuracy. Figure 5: Highlighting evidential words via word cloud. Larger font sizes indicate higher co-attention weights. and “-C”, respectively. Sub-model “-S-A” denotes the one without both source tweet embeddings and dual co-attention. The results are presented in Figure 4. We can find every component indeed plays a significant contribution, especially for dual coattention (“-A”) and the representation learning of user propagation and interactions (“-R” and “G”). Since the source tweet provides fundamental clues, the accuracy drops significantly without it (“-S-A”). 5.3 GCAN Explainability The co-attention weights derived from Section 4.5 attended on source tweet words and retweet users (source-propagation co-attention) allow our GCAN to be capable of explainability. By exhibiting where attention weights distribute, evidential words and users in predicting fake news can be revealed. Note that we do not consider source-interaction coattention for explainability because user interaction features learned from the constructed graph cannot be intuitively interpretable. Explainability on Source Words. To demonstrate the explainability, we select two source tweets in the test data. One is fake (“breaking: ks patient at risk for ebola: in strict isolation at ku med center in kansas city #kwch12”), and the other is real (“confirmed: this is irrelevant. rt @ks0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 0 F1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 0 F2 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 0 F3 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 0 T1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 0 T2 0 5 10 15 20 25 30 35 Rewteet Order 0 T3 Figure 6: Visualization of attention weights for user propagations of 3 fake (upper F1-F3) and 3 true source tweets. From left to right is retweet order. Dark colors refer to higher attention weights. Ans: fake news uid verified creation time descpt. length path to  source ⋮ 14 0 4 7 1 15 0 5 11 1 16 0 6 8 1 ⋮ 32 0 9 17 1 33 0 7 13 2 34 1 9 20 2 ⋮ Retweet Propagation highlighted by attention  weights on fake news highlighted by attention  weights on real news Source Tweet Breaking : huge explosion of an #oil  pipeline belonging to @saudi_aramco  near sudair, #saudiarabia.  Figure 7: Evidential words highlighed by GCAN in source tweet (upper) and suspicious users highlighed by GCAN in retweet propagation (bottom), in which each column is a user characteristic. Note that only few user characteristics are presented. dknews: confirmed: #mike-brown had no criminal record. #ferguson”). We highlight evidential words with higher co-attention weights in font sizes of word clouds, as exhibited in Figure 5. GCAN predicts the former to be fake with stronger attention on words “breaking” and “strict”, and detects the latter as real since it contains “confirmed” and “irrelevant.” Such results may correspond to the common knowledge (Rashkin et al., 2017; Horne and Adali, 2017) that fake news tends to use dramatic and obscure words while real news is attended by confirmed and fact checking-related words. Explainability on Retweet Propagation. We aim to exploit the retweet order in propagations to unfold the behavior difference between fake and real news. We randomly pick three fake (F1-F3) and three true (T1-T3) source stories, and plot their 513 weights from source-propagation co-attention (Section 4.5), as exhibited in Figure 6, in which the horizontal direction from left to right denotes the order of retweet. The results show that to determine whether a story is fake, one should first examine the characteristics of users who early retweet the source story. The evidences of fake news in terms of user characteristics may be evenly distributed in the propagation. Explainability on Retweeter Characteristics. The source-propagation co-attention of our GCAN model can further provide an explanation to unveil the traits of suspicious users and the words they focus on. A case study is presented in Figure 7. We can find that the traits of suspicious users in retweet propagation can be: accounts are not verified, shorter account creation time, shorter user description length, and shorter graph path length to the user who posts the source tweet. In addition, what they highly attend are words “breaking” and “pipeline.” We think such kind of explanation can benefit interpret the detection of fake news so as to understand their potential stances. 6 Conclusion In this study, we propose a novel fake news detection method, Graph-aware Co-Attention Networks (GCAN). GCAN is able to predict whether a short-text tweet is fake, given the sequence of its retweeters. The problem scenario is more realistic and challenging than existing studies. Evaluation results show the powerful effectiveness and the reasonable explainability of GCAN. Besides, GCAN can also provide early detection of fake news with satisfying performance. We believe GCAN can be used for not only fake news detection, but also other short-text classification tasks on social media, such as sentiment detection, hate speech detection, and tweet popularity prediction. We will explore model generalization in the future work. Besides, while fake news usually targets at some events, we will also extend GCAN to study how to remove eventspecific features to further boost the performance and explainability. Acknowledgments This work is supported by Ministry of Science and Technology (MOST) of Taiwan under grants 109-2636-E-006-017 (MOST Young Scholar Fellowship) and 108-2218-E-006-036, and also by Academia Sinica under grant AS-TP-107-M05. References Hunt Allcott and Matthew Gentzkow. 2017. Social media and fake news in the 2016 election. The Journal of Economic Perspectives, 31:211–235. Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of the 20th International Conference on World Wide Web, WWW ’11, pages 675–684. Meeyoung Cha, Wei Gao, and Cheng-Te Li. 2020. Detecting fake news in social media: An asia-pacific perspective. Commun. ACM, 63(4):68–71. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. Ming-Han Feng, Chin-Chi Hsu, Cheng-Te Li, MiYen Yeh, and Shou-De Lin. 2019. Marine: Multirelational network embeddings with relational proximity and node attributes. In The World Wide Web Conference, WWW ’19, pages 470–479. Chuan Guo, Juan Cao, Xueyao Zhang, Kai Shu, and Miao Yu. 2019. Exploiting emotions for fake news detection on social media. CoRR, abs/1903.01728. Benjamin Horne and Sibel Adali. 2017. This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. In Proceedings of AAAI International Conference on Web and Social Media, pages 759–766. Jyun-Yu Jiang, Cheng-Te Li, Yian Chen, and Wei Wang. 2018. Identifying users behind shared accounts in online streaming services. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR ’18, pages 65–74. Thomas N. Kipf and Max Welling. 2017. SemiSupervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR ’17. Haewoon Kwak, Changhyun Lee, Hosung Park, and Sue Moon. 2010. What is twitter, a social network or a news media? In Proceedings of the 19th International Conference on World Wide Web, WWW ’10, pages 591–600. Sejeong Kwon, Meeyoung Cha, and Kyomin Jung. 2017. Rumor detection over varying time windows. PLOS ONE, 12(1):1–19. Cheng-Te Li, Yu-Jen Lin, and Mi-Yen Yeh. 2018. Forecasting participants of information diffusion on social networks with its applications. Information Sciences, 422:432 – 446. Yang Liu and Yi-Fang Wu. 2018. Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In AAAI Conference on Artificial Intelligence, pages 254–261. 514 Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, pages 289–297. Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J. Jansen, Kam Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. IJCAI International Joint Conference on Artificial Intelligence, pages 3818– 3824. Jing Ma, Wei Gao, Zhongyu Wei, Yueming Lu, and Kam-Fai Wong. 2015. Detect rumors using time series of social context information on microblogging websites. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM ’15, pages 1751–1754. Jing Ma, Wei Gao, and Kam Fai Wong. 2017. Detect rumors in microblog posts using propagation structure via kernel learning. In ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 708– 717. Jing Ma, Wei Gao, and Kam-Fai Wong. 2018. Rumor detection on twitter with tree-structured recursive neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1980–1989. Kashyap Popat. 2017. Assessing the credibility of claims on the web. In Proceedings of the 26th International Conference on World Wide Web Companion, WWW ’17 Companion, pages 735–739. Martin Potthast, Johannes Kiesel, Kevin Reinartz, Janek Bevendorff, and Benno Stein. 2018. A stylometric inquiry into hyperpartisan and fake news. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL ’18, pages 231–240. Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931–2937. Julio C. S. Reis, Andr´e Correia, Fabr´ıcio Murai, Adriano Veloso, and Fabr´ıcio Benevenuto. 2019. Explainable machine learning for fake news detection. In Proceedings of the 10th ACM Conference on Web Science, WebSci ’19, pages 17–26. Natali Ruchansky, Sungyong Seo, and Yan Liu. 2017. Csi: A hybrid deep model for fake news detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM ’17, pages 797–806. Justin Sampson, Fred Morstatter, Liang Wu, and Huan Liu. 2016. Leveraging the implicit structure within social media for emergent rumor detection. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM ’16, pages 2377–2382. Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019a. defend: Explainable fake news detection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, pages 395– 405. Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. SIGKDD Explor. Newsl., 19(1):22–36. Kai Shu, Xinyi Zhou, Suhang Wang, Reza Zafarani, and Huan Liu. 2019b. The role of user profile for fake news detection. CoRR, abs/1904.13355. Pei-Chi Wang and Cheng-Te Li. 2019. Spotting terrorists by learning behavior-aware heterogeneous network embedding. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM ’19, pages 2097– 2100. Yaqing Wang, Fenglong Ma, Zhiwei Jin, Ye Yuan, Guangxu Xun, Kishlay Jha, Lu Su, and Jing Gao. 2018. Eann: Event adversarial neural networks for multi-modal fake news detection. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery &#38; Data Mining, KDD ’18, pages 849–857. Rui Yan, Ian E.H. Yen, Cheng-Te Li, Shiqi Zhao, and Xiaohua Hu. 2015. Tackling the achilles heel of social networks: Influence propagation based language model smoothing. In Proceedings of the 24th International Conference on World Wide Web, WWW ’15, pages 1318–1328. Fan Yang, Yang Liu, Xiaohui Yu, and Min Yang. 2012. Automatic detection of rumor on sina weibo. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics, MDS ’12. Yang Yang, Lei Zheng, Jiawei Zhang, Qingcai Cui, Zhoujun Li, and Philip S. Yu. 2018. Ti-cnn: Convolutional neural networks for fake news detection. Zhe Zhao, Paul Resnick, and Qiaozhu Mei. 2015. Enquiring minds: Early detection of rumors in social media from enquiry posts. In Proceedings of the 24th International Conference on World Wide Web, WWW ’15, pages 1395–1405.
2020
48
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5404–5414 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5404 Implicit Discourse Relation Classification: We Need to Talk about Evaluation Najoung Kim∗ Department of Cognitive Science Johns Hopkins University [email protected] Song Feng, Chulaka Gunasekara, Luis A. Lastras IBM Research AI {sfeng@us, chulaka.gunasekara@, lastrasl@us}.ibm.com Abstract Implicit relation classification on Penn Discourse TreeBank (PDTB) 2.0 is a common benchmark task for evaluating the understanding of discourse relations. However, the lack of consistency in preprocessing and evaluation poses challenges to fair comparison of results in the literature. In this work, we highlight these inconsistencies and propose an improved evaluation protocol. Paired with this protocol, we report strong baseline results from pretrained sentence encoders, which set the new state-of-the-art for PDTB 2.0. Furthermore, this work is the first to explore fine-grained relation classification on PDTB 3.0. We expect our work to serve as a point of comparison for future work, and also as an initiative to discuss models of larger context and possible data augmentations for downstream transferability. 1 Introduction Understanding discourse relations in natural language text is crucial to end tasks involving larger context, such as question-answering (Jansen et al., 2014) and conversational systems grounded on documents (Saeidi et al., 2018; Feng et al., 2020). One way to characterize discourse is through relations between two spans or arguments (ARG1/ARG2) as in the Penn Discourse TreeBank (PDTB) (Prasad et al., 2008, 2019). For instance: [Arg1 I live in this world,] [Arg2 assuming that there is no morality, God or police.] (wsj_0790) Label: EXPANSION.MANNER.ARG2-AS-MANNER The literature has focused on implicit discourse relations from PDTB 2.0 (Pitler et al., 2009; Lin et al., 2009), on which deep learning has yielded substantial performance gains (Chen et al., 2016; Liu and Li, 2016; Lan et al., 2017; Qin et al., 2017; Bai and ∗Work done while at IBM Research. Zhao, 2018; Nguyen et al., 2019, i.a.). However, inconsistencies in preprocessing and evaluation such as different label sets (Rutherford et al., 2017) pose challenges to fair comparison of results and to analyzing the impact of new models. In this paper, we revisit prior work to explicate the inconsistencies and propose an improved evaluation protocol to promote experimental rigor in future work. Paired with this guideline, we present a set of strong baselines from pretrained sentence encoders on both PDTB 2.0 and 3.0 that set the state-of-the-art. We furthermore reflect on the results and discuss future directions. We summarize our contributions as follows: • We highlight preprocessing and evaluation inconsistencies in works using PDTB 2.0 for implicit discourse relation classification. We expect our work to serve as a comprehensive guide to common practices in the literature. • We lay out an improved evaluation protocol using section-based cross-validation that preserves document-level structure. • We report state-of-the-art results on both toplevel and second-level implicit discourse relation classification on PDTB 2.0, and the first set of results on PDTB 3.0. We expect these results to serve as simple but strong baselines that motivate future work. • We discuss promising next steps in light of the strength of pretrained encoders, the shift to PDTB 3.0, and better context modeling. 2 The Penn Discourse TreeBank (PDTB) In PDTB, two text spans in a discourse relation are labeled with either one or two senses from a three-level sense hierarchy. PDTB 2.0 contains around 43K annotations with 18.4K explicit and 16K implicit relations in over 2K Wall Street Journal (WSJ) articles. Identifying implicit relations (i.e., without explicit discourse markers such as 5405 Model Ji Lin P&K X-Accuracy Majority class 26.18 26.11 28.54 26.42 Adversarial Net (Qin et al., 2017) 46.23 44.65 Seq2Seq+MemNet (Shi and Demberg, 2019) 47.83 45.82 41.29† ELMo (Bai and Zhao, 2018) 48.22 45.73 ELMo, Memory augmented (Bai et al., 2019) 49.15 46.08 Multitask learning (Nguyen et al., 2019) 49.95 46.48 BERT+MNLI (Nie et al., 2019) 53.7 BERT+DisSent Books 5 (Nie et al., 2019) 54.7 BERT (base, uncased) 52.13 (±0.50) 51.41 (±1.02) 52.00 (±1.02) 49.68 (±0.35) BERT (large, uncased) 57.34∗∗(±0.79) 55.07∗∗(±1.01) 55.61 (±1.32) 53.37 (±0.22) XLNet (base, cased) 54.73 (±1.26) 55.82∗∗∗(±0.79) 54.71 (±0.45) 52.98 (±0.29) XLNet (large, cased) 61.29∗∗∗(±1.49) 58.77∗∗∗(±0.99) 59.90∗(±0.96) 57.74 (±0.90) Table 1: Accuracy on PDTB 2.0 L2 classification. We report average performance and standard deviation across 5 random restarts. Significant improvements according to the N −1 χ2 test after Bonferroni correction are marked with ∗,∗∗,∗∗∗(2-tailed p < .05, < .01, < .001). We compare the best published model and the median result from the 5 restarts of our models. Because we use section-based cross-validation, significance over † is not computed. but) is more challenging than explicitly signaled relations (Pitler et al., 2008). The new version of the dataset, PDTB 3.0 (Prasad et al., 2019), introduces a new annotation scheme with a revised sense hierarchy as well as 13K additional datapoints.2 The third-level in the sense hierarchy is modified to only contain asymmetric (or directional) senses. 2.1 Variation in preprocessing and evaluation We survey the literature to identify several sources of variation in preprocessing and evaluation that could lead to inconsistencies in the results reported. Choice of label sets. Due to the hierarchical annotation scheme and skewed label distribution, a range of different label sets has been employed for formulating classification tasks (Rutherford et al., 2017). The most popular choices for PDTB 2.0 are: (1) top-level senses (L1) comprised of four labels, and (2) finer-grained Level-2 senses (L2). For L2, the standard protocol is to use 11 labels after eliminating five infrequent labels as proposed in Lin et al. (2009). Sometimes ENTREL is also included in the L2 label set (Xue et al., 2015). Level-3 senses (L3) are not often used due to label sparsity. Data partitioning. The variability of data splits used in the literature is substantial. This is problematic considering the small number of examples in a typical setup with 1-2 WSJ sections as test sets. For instance, choosing sections 23-24 rather than 21-22 results in an offset of 149, and a label offset as large as 71 (COMPARISON.CONTRAST). 2Note that there has been an update to PDTB 3.0 since this article has been written. This affects around 130 datapoints. This is a large enough difference to cast doubt on claims for state-of-the-art, considering the small size of the test sets (∼1000). We illustrate the variability of split choices in published work in Appendix B. Recently, splits recommended by Prasad et al. (2008) and Ji and Eisenstein (2015) (Ji) are the most common, but splits from Patterson and Kehler (2013) (P&K), Li and Nenkova (2014), i.a., have also been used. The Prasad et al. split is frequently attributed to Lin et al. (2009) (Lin), and thus we adopt this naming convention. Multiply-annotated labels. Span pairs in PDTB are optionally annotated with multiple sense labels. The common practice is either taking only the first label or the approach in Qin et al. (2017), i.a., where instances with multiple annotations are treated as separate examples during training. A prediction is considered correct if it matches any of the labels during testing. However, a subtle inconsistency exists even across works that follow the latter approach. In PDTB, two connectives (or inferred connectives for implicit relations) are possible for a span pair, where the second connective is optional. A connective can each have two semantic classes (i.e., the labels), where the second class is optional. Thus, a maximum of four distinct labels are possible for each span pair. However, in the actual dataset, the maximum number of distinct labels turns out to be two. An inconsistency arises depending on which of the four possible label fields are counted. For instance, Qin et al. (2017) treat all four fields (SCLASS1A, SCLASS1B, SCLASS2A, SCLASS2B; see link) as possible labels, whereas Bai and Zhao (2018); Bai et al. (2019) use only 5406 SCLASS1A,SCLASS2A. Often, this choice is implicit and can only be deduced from the codebase. Random initialization. Different random initializations of a network often lead to substantial variability (Dai and Huang, 2018). It is important to consider this variability especially when the reported margin of improvement can be as small as half a percentage point (see cited papers in Table 1). We report the mean over 5 random restarts for existing splits, and the mean of mean cross-validation accuracy over 5 random restarts.3 3 Proposed Evaluation Protocol While Xue et al. (2015) lay out one possible protocol, it does not fully address the issues we have raised in Section 2. Another limitation is the unavailability of the preprocessing code as of the date of this submission. We describe our proposal below, which will be accompanied by a publicly available preprocessing code.4 In addition to accounting for the variation previously discussed, we take Shi and Demberg (2017)’s concerns into consideration. Cross-validation. We advocate using crossvalidation for L2 classification, sharing the concerns of Shi and Demberg (2017) on label sparsity. However, we propose using cross-validation at section-level rather than individual example-level as suggested by Shi and Demberg (2017). This is to preserve paragraph and document structures, which are essential for investigating the effect of modeling larger context (e.g., Dai and Huang 2018). We further illustrate the potential utility of document structure in Section 4. We suggest dividing the 25 sections of PDTB into 12 folds with 2 development, 2 test and 21 training sections in each fold. We used a sliding window of two sections starting from P&K (dev: 0-1, test: 23-24, train: 2-22). All but one section (22) is used exactly once for testing. Whether future works should evaluate on these particular cross-validation splits or on randomized splits (Gorman and Bedrick, 2019) is an open issue; we provide an additional discussion in Appendix F. Label sets. We recommend reporting results on both L1 and L2, using the standard 11-way classification for L2 in PDTB 2.0. A standardized label set 3Due to limitations of compute, we only report random restarts of cross-validation (5 seeds x 12 folds) for our main results. For additional experiments in Section 4, we report the average over folds only. Generally, variance over seeds were smaller than over folds for our models. 4https://github.com/najoungkim/pdtb3 does not exist yet for L2 in PDTB 3.0 (L1 remains unchanged). We propose using only the labels with > 100 instances, which leaves us with 14 senses from L2 (see Appendix A for counts). We suggest using all four possible label fields if the senses are multiply-annotated, as discussed in Section 2.1. Model X-Accuracy (±σ) Majority class 26.61 BERT (base, uncased) 57.60 (±0.19) BERT (large, uncased) 61.02 (±0.19) XLNet (base, cased) 60.78 (±0.24) XLNet (large, cased) 64.83 (±0.37) Table 2: Performance on PDTB 3.0 L2 classification. 3.1 Baseline results Following our proposed protocol, we report baseline results from two strong sentence encoder models: BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019), using a publicly available codebase.5 See Appendix C for training details. We present L2 results on PDTB 2.0 in Table 1 and results on PDTB 3.0 in Table 2 (see Appendix D for L1 results). To maintain backwards compatibility to the literature, we also report PDTB 2.0 results on Ji, Lin and P&K splits (see Section 2.1). Ji & Lin are the most common splits, and P&K is the split used by Nie et al. (2019) who claim the current stateof-the-art for L2. For PDTB 2.0 (Table 1), our baselines showed strong performance on all splits. XLNet-large was the single best model, significantly outperforming every best reported result.6 3.2 Single-span baselines Table 4 lists the performance of single-span (either ARG1 or ARG2) baseline models for both PDTB 2.0 and 3.0. This baseline adapts the idea of hypothesis-only baselines in Natural Language Inference (Poliak et al., 2018), where we limit the training data by only showing the models one of the two spans that are in a discourse relation. We discuss these baselines further in Section 4. 4 Discussion: where should we go next? Annotation improvements in PDTB 3.0 are effective. PDTB 3.0 claims several improvements 5https://github.com/huggingface/ pytorch-transformers 6We used the N −1 χ2 test to compare proportions instead of a matched test like McNemar’s, because we only had access to reported accuracies (rather than raw predictions) of the best models in the literature. 5407 Label µ(|train|) µ(|test|) BERT-base BERT-large XLNet-base XLNet-large Cont.Cause.Reason 2474 238 62.1 64.1 62.8 71.0 Cont.Cause.Result 2378 227 56.1 60.2 60.6 70.6 Expn.Level-of-detail.Arg1-as-detail 214 21 0.0 3.3 7.2 8.0 Expn.Level-of-detail.Arg2-as-detail 2602 240 46.8 52.8 53.2 55.8 Expn.Manner.Arg1-as-manner 480 6 29.6 39.8 49.1 34.8 Expn.Manner.Arg2-as-manner 140 12 49.7 55.3 57.6 57.2 Temp.Asynchronous.Precedence 907 85 59.0 62.3 63.2 68.5 Temp.Asynchronous.Succession 174 16 13.3 31.0 37.1 43.7 Table 3: Average label accuracy per directional label in L2+L3 classification, over cross-validation folds. Model X-Accuracy (±σ) Majority class 25.52 BERT-(base, uncased), ARG1-only 42.28 (±1.76) BERT-(large, uncased), ARG1-only 42.79 (±1.31) XLNet-(base, cased), ARG1-only 42.39 (±1.03) XLNet-(large, cased), ARG1-only 42.55 (±1.44) BERT-(base, uncased), ARG2-only 47.59 (±1.94) BERT-(large, uncased), ARG2-only 48.69 (±1.57) XLNet-(base, cased) ARG2-only 48.00 (±1.97) XLNet-(large, cased), ARG2-only 47.99 (±1.72) BERT-(base, uncased), Upper-bound 61.71 (±0.02) BERT-(large, uncased), Upper-bound 63.82 (±0.01) XLNet-(base, cased), Upper-bound 63.43 (±0.01) XLNet-(large, cased), Upper-bound 63.41 (±0.02) Table 4: Cross-validation accuracy on PDTB 3.0 L2 classification (14-way) of single-span baselines. over PDTB 2.0. For instance, the annotation manual (Prasad et al., 2019) remarks that LIST was removed since it was “not in practice distinguishable from CONJUNCTION”. Indeed, models trained on PDTB 2.0 behaved exactly so, classifying most of LIST as CONJUNCTION (but not vice versa, likely due to frequency effect; see Appendix G). We conducted an additional experiment testing the impact of the new annotation scheme, in an attempt to address the question “If we want to detect relation X in a downstream task, which PDTB should we use to train our models?”. We trained the same model (BERT-large) twice on the same set of datapoints, only varying the annotation scheme. Since PDTB 3.0 has both added and removed examples, we filtered the datasets so that the two PDTBs contained exactly the same span pairs. With the model and inputs fixed, the labeling scheme should be the only effective factor. After filtering, the majority-class baseline for both were less than 30%. Table 5 suggests that PDTB 3.0’s annotation scheme does lead to improved distinguishability of CONJUNCTION.7 PDTB 3.0 overall yielded better 7We used pooled cross-validation accuracy (compared us(or unchanged) distinguishability of shared labels except for CONTRAST. This trend was especially salient for CONCESSION that was practically unlearnable from PDTB 2.0. This supports the utility of PDTB 3.0 over 2.0 if downstream transfer is considered, motivating a transition to 3.0. Unsurprisingly, the change in distinguishability was highly dependent on the change in label counts in the training data (Table 5, ∆). But change in frequency alone does not give us the full picture. For instance, SYNCHRONOUS remained difficult to learn even with a substantial increase in labeled examples. The absolute size of the class was also not deterministic of performance. There were 192 training instances of SYNCHRONOUS in the filtered PDTB 2.0 and 261 for PDTB 3.0. Similar/smaller classes such as |ALTERNATIVE| = 118 in PDTB 2.0 and |SUBSTITUTION| = 191 in PDTB 3.0 were still learnable with 26% and 48% accuracy, respectively. This was mostly due to SYNCHRONOUS being mislabeled as CONJUNCTION, which was also the case in the unfiltered dataset (see Appendix G). Label Acc. (2.0) Acc. (3.0) ∆ Cont.Cause 65.3 67.8∗ +25 Comp.Concession 0 46.6∗∗∗ +740 Comp.Contrast 50.5∗ 43.4 -820 Expn.Conjunction 57.6 61.7∗∗ +88 Expn.Instantiation 60.7 57.7 +4 Temp.Asynchronous 48.8 48.0 -7 Temp.Synchronous 0 2.7 +70 Table 5: Pooled cross-validation accuracy of BERTlarge on shared labels. Models were trained on the same set of datapoints, with only the annotation scheme differing. ∆denotes the average per-fold change in (filtered) training label counts from PDTB 2.0 to 3.0. New directional labels are potentially useful but distributionally skewed. The new annoing Fisher’s exact test and Bonferroni correction) because label sparsity made fold-wise comparisons underpowered for small classes like ASYNCHRONOUS. 5408 tation scheme for PDTB 3.0 marks the directionality of relations (e.g., ARG1- vs ARG2-ASMANNER). These relations are important for naturally-occurring discourse, where order-variable asymmetrical relations are common. For example, in Figure 1, span [2] is conditionally dependent on [3], and [5] has a dependency on [4]; such ordered dependencies must be correctly tracked across discourse contexts. We investigated whether directional labels are sufficiently identifiable with our models. We replaced L2 classes with L3 subclasses (L2+L3), if both subclasses had > 100 examples. Except for REASON and RESULT, the distribution of L3 classes under the same L2 is heavily skewed, which led to low performance (Table 3). This calls for a data augmentation that would balance subclass ratios and alleviate label sparsity at L3. Within-document label distribution is informative, even for shallow discourse parsing. We have advocated for an evaluation scheme that preserves larger contexts. This is motivated by the fact that discourse relations are not independently distributed from one another (even when they are annotated in isolation, as in PDTB). For instance, implicit CONJUNCTION (IC) relations are likely to be adjacent; in PDTB 3.0, the probability of one IC following another is P(IC2|IC1) = 0.14, when P(IC) = 0.08. Implicit REASON is likely to be adjacent to RESULT; P(IReason|IResult) = 0.12, P(IReason) = 0.05. Vanilla pretrained encoders are strong, but are overreliant on lexical cues. A simple finetuning of pretrained encoders yielded impressive gains. At the same time, they overrelied on lexical cues. For instance, ARG2-initial to often signals PURPOSE; 79.9% of such cases are true PURPOSE relations. It is reasonable for our models to utilize this strong signal, but the association was much amplified in their prediction. For example, XLNetbase predicted PURPOSE for 95.8% of the examples with ARG2-initial to. We also found that model predictions were in general brittle; a simplistic lexical perturbation with no semantic effect, such as appending ‘-’ to the beginning of spans, resulted in a 9%p drop in performance for BERT-large models. Overall, there still remains much overhead for improvement, with our best model at 66% accuracy on PDTB 3.0 L2 classification. Combining pretrained encoders and expanded context modeling to better capture document-level distributional 1 Why can't I receive recovery email? 2 Some users started to experience the issue of not receiving any recovery email. 3 That typically happens when the account has been logged on using different devices within 24 hours. 4 You can call IT Desk during hours of operation, 5 you will be provided instructions on how to make a reset request. 6 You can submit a request for another user. 7 However, it is only allowed when their computer is broken or not functional. Figure 1: A snippet of an online document for IT troubleshooting, segmented in discourse units. signals could be a promising next step. Aggregation of single-span baselines as decontextualized upper-bounds. Lexical cues continue to be informative even for implicit relations, as with the case of ARG2-initial to. Although these signals could be genuine rather than artifactual, they require comparatively less multi-span reasoning. Then, how much of our dataset only requires shallower reasoning as such? To address this question, we constructed a decontextualized baseline by aggregating predictions of single-span models, and assuming that an oracle always chooses the right answer if it is in the prediction set. This provides an upper-bound estimate of the performance of a model that only disjointly considers the two input spans, but still has full lexical access. Comparing the final rows of Table 4 and Table 2, we see that no model reliably outperforms its decontextualized upper-bound counterpart. 5 Conclusion We have surveyed the literature to highlight experimental inconsistencies in implicit discourse relation classification, and suggested an improved protocol using section-level cross-validation. We provided a set of strong baselines for PDTB 2.0 and 3.0 following this protocol, as well as results on a range of existing setups to maintain comparability. We discussed several future directions, including data augmentation for downstream transferability, applicability of pretrained encoders to discourse, and utilizing larger discourse contexts. Acknowledgments This work was supported by IBM Research. We thank the three anonymous reviewers for their insightful comments. We also thank Sadhwi Srinivas, Grusha Prasad and Tal Linzen for their advice on statistical analysis. 5409 References Hongxiao Bai and Hai Zhao. 2018. Deep enhanced representation for implicit discourse relation recognition. In Proceedings of the 27th International Conference on Computational Linguistics, pages 571– 583, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Hongxiao Bai, Hai Zhao, and Junhan Zhao. 2019. Memorizing all for implicit discourse relation recognition. arXiv:1908.11317v1. Chloé Braud and Pascal Denis. 2015. Comparing word representations for implicit discourse relation classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2201–2211, Lisbon, Portugal. Association for Computational Linguistics. Jifan Chen, Qi Zhang, Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Implicit discourse relation detection via a deep architecture with gated relevance network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1726– 1735, Berlin, Germany. Association for Computational Linguistics. Zeyu Dai and Ruihong Huang. 2018. Improving implicit discourse relation classification by modeling inter-dependencies of discourse units in a paragraph. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 141–151, New Orleans, Louisiana. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Song Feng, Kshitij Fadnis, Q Vera Liao, and Luis A Lastras. 2020. Doc2Dial: a framework for dialogue composition grounded in documents. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786–2791, Florence, Italy. Association for Computational Linguistics. Peter Jansen, Mihai Surdeanu, and Peter Clark. 2014. Discourse complements lexical semantics for nonfactoid answer reranking. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 977–986, Baltimore, Maryland. Association for Computational Linguistics. Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. Transactions of the Association for Computational Linguistics, 3:329–344. Man Lan, Jianxiang Wang, Yuanbin Wu, Zheng-Yu Niu, and Haifeng Wang. 2017. Multi-task attentionbased neural networks for implicit discourse relationship representation and identification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1299– 1308, Copenhagen, Denmark. Association for Computational Linguistics. Wenqiang Lei, Yuanxin Xiang, Yuwei Wang, Qian Zhong, Meichun Liu, and Min-Yen Kan. 2018. Linguistic properties matter for implicit discourse relation recognition: Combining semantic interaction, topic continuity and attribution. In Thirty-Second AAAI Conference on Artificial Intelligence. Junyi Jessy Li and Ani Nenkova. 2014. Reducing sparsity improves the recognition of implicit discourse relations. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 199–207, Philadelphia, PA, U.S.A. Association for Computational Linguistics. Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 343–351, Singapore. Association for Computational Linguistics. Yang Liu and Sujian Li. 2016. Recognizing implicit discourse relations via repeated reading: Neural networks with multi-level attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1224–1233, Austin, Texas. Association for Computational Linguistics. Annie Louis, Aravind Joshi, Rashmi Prasad, and Ani Nenkova. 2010. Using entity features to classify implicit discourse relations. In Proceedings of the SIGDIAL 2010 Conference, pages 59–62, Tokyo, Japan. Association for Computational Linguistics. Linh The Nguyen, Linh Van Ngo, Khoat Than, and Thien Huu Nguyen. 2019. Employing the correspondence of relations and connectives to identify implicit discourse relations via label embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4201– 4207, Florence, Italy. Association for Computational Linguistics. Allen Nie, Erin Bennett, and Noah Goodman. 2019. DisSent: Learning sentence representations from explicit discourse relations. In Proceedings of the 5410 57th Annual Meeting of the Association for Computational Linguistics, pages 4497–4510, Florence, Italy. Association for Computational Linguistics. Joonsuk Park and Claire Cardie. 2012. Improving implicit discourse relation recognition through feature set optimization. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 108–112, Seoul, South Korea. Association for Computational Linguistics. Gary Patterson and Andrew Kehler. 2013. Predicting the presence of discourse connectives. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 914–923, Seattle, Washington, USA. Association for Computational Linguistics. Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 683–691, Suntec, Singapore. Association for Computational Linguistics. Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind Joshi. 2008. Easily identifiable discourse relations. In COLING 2008: Companion volume: Posters, pages 87–90, Manchester, UK. COLING 2008 Organizing Committee. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics (*SEM), pages 180–191, New Orleans, Louisiana. Association for Computational Linguistics. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. In LREC 2008. Rashmi Prasad, Bonnie Webber, Alan Lee, and Aravind Joshi. 2019. Penn Discourse Treebank Version 3.0. In LDC2019T05. Philadelphia: Linguistic Data Consortium. Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, and Eric Xing. 2017. Adversarial connectiveexploiting networks for implicit discourse relation classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1006–1017, Vancouver, Canada. Association for Computational Linguistics. Attapol Rutherford, Vera Demberg, and Nianwen Xue. 2017. A systematic study of neural discourse models for implicit discourse relation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 281–291, Valencia, Spain. Association for Computational Linguistics. Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2087–2097, Brussels, Belgium. Association for Computational Linguistics. Wei Shi and Vera Demberg. 2017. Do we need cross validation for discourse relation classification? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 150–156, Valencia, Spain. Association for Computational Linguistics. Wei Shi and Vera Demberg. 2019. Learning to explicitate connectives with Seq2Seq network for implicit discourse relation classification. In Proceedings of the 13th International Conference on Computational Semantics - Long Papers, pages 188–199, Gothenburg, Sweden. Association for Computational Linguistics. WenTing Wang, Jian Su, and Chew Lim Tan. 2010. Kernel based discourse relation recognition with temporal ordering information. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 710–719, Uppsala, Sweden. Association for Computational Linguistics. Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol Rutherford. 2015. The CoNLL-2015 shared task on shallow discourse parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning - Shared Task, pages 1–16, Beijing, China. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5753– 5763. Curran Associates, Inc. Appendix A Dataset Statistics We report the training, development and test set sizes for all dataset splits discussed in the paper (Table 6). These are counts of individual labeled span pairs in the dataset, not the counts of individual labels (development and test set examples can be doubly-annotated). Note that the count we provide 5411 for the training split of Ji is one short of what has been reported in Shi and Demberg (2019) and also the count obtained by using Qin et al. (2017)’s preprocessing code. This is due to a duplicate example with label EXPANSION.ALTERNATIVE, which our preprocessing code does not generate. Split Train Dev Test PDTB 2.0 Ji 12825 1165 1039 Lin 13366 515 766 P&K 13908 1165 1188 X-val 13676 1281 1273 L1 (Ji) 13046 1183 1046 PDTB 3.0 L2 X-val 19005 1756 1747 L2+L3 X-val 19005 1756 1747 L1 (Ji) 17854 1647 1471 Table 6: Dataset sizes for PDTB 2.0 and 3.0. Crossvalidation counts are averaged across 12 folds. Tables 7 and 8 list the label counts of each class in PDTB 3.0 and PDTB 2.0, respectively. Label n Comparison 2298/2518 Contingency 6998/7583 Expansion 10062/10833 Temporal 1731/1828 Comparison.Concession 1494 Comparison.Contrast 983 Contingency.Cause 5785 Contingency.Cause+Belief 202 Contingency.Condition 199 Contingency.Purpose 1373 Expansion.Conjunction 4386 Expansion.Equivalence 336 Expansion.Instantiation 1533 Expansion.Level-of-detail 3361 Expansion.Manner 739 Expansion.Substitution 450 Temporal.Asynchronous 1289 Temporal.Synchronous 539 Contingency.Cause.Result 2835 Contingency.Cause.Reason 2950 Expansion.Level-of-detail.Arg1-as-detail 256 Expansion.Level-of-detail.Arg2-as-detail 3105 Expansion.Manner.Arg1-as-manner 572 Expansion.Manner.Arg2-as-manner 167 Temporal.Asynchronous.Precedence 1081 Temporal.Asynchronous.Succession 208 Table 7: Label counts for PDTB 3.0 L1, L2 and directional senses of L3 that have more than 100 annotated instances. L1 classification is evaluated on Ji split, so we list both the label counts in Ji split and the total label counts in the whole dataset. Label n Comparison 2291/2503 Contingency 3911/4255 Expansion 8249/8861 Temporal 909/950 Comparison.Concession 223 Comparison.Contrast 2120 Contingency.Cause 4172 Contingency.Pragmatic cause 83 Expansion.Conjunction 3534 Expansion.Instantiation 1445 Expansion.Alternative 185 Expansion.List 400 Expansion.Restatement 3206 Temporal.Asynchronous 697 Temporal.Synchrony 251 Table 8: Label counts for PDTB 2.0 L1 and 11 senses of L2 (label set commonly used in the literature for L2 classification). L1 classification is evaluated on Ji split, so we list both the label counts in Ji split and the total label counts in the whole dataset. B List of Splits in Prior Work We compile a (non-exhaustive) list of the Wall Street Journal sections used as training, development, test sets in published work to demonstrate the high variability. We mostly list works that do not explicitly specify the source of the splits, with some exceptions. Some of the works have overlapping sections across splits, which we suspect to be typos but cannot verify. • Prasad et al. (2008) (officially recommended split): 2-21 (train), 22 (dev), 23 (test) • Pitler et al. (2009); Ji and Eisenstein (2015): 2-20 (train), 0-1 (dev), 21-22 (test) • Lin et al. (2009): 2-21 (train), 23 (test) • Patterson and Kehler (2013): 2-22 (train), 0-1 (dev), 23-24 (test) • Wang et al. (2010): 2-22 (train), 23-24 (test) • Louis et al. (2010): 0-22 (train), 23-24 (test) • Braud and Denis (2015): 2-21 (train), 0-1, 23-24 (dev), 21-22 (test) • Li and Nenkova (2014): 2-19 (train), 20-24 (test) • Lei et al. (2018): 2-20 (train), 0-1, 23-24 (dev), 21-22 (test) • Park and Cardie (2012): 2-20 (train), 0-2 (dev), 21-22 (test) 5412 C Training Details For all sentence encoder models, we fine-tuned each encoder for a maximum of 10 epochs with early stopping when the the development set performance did not improve for 5 evaluation steps (step size=500), with a batch size of 8. We used a learning rate of 5e-6 for all models except for XLNet-large, for which we used 2e-6. We used accuracy as the validation metric. We ran each model 5 times with different random initializations of the fine-tuning layer, and reported the average performance across the 5 runs. D Top-level Sense Classification Results Table 9 shows the performance on L1 classification for both PDTB 2.0 and PDTB 3.0. Model PDTB 2.0 PDTB 3.0 F1 Acc F1 Acc Majority class 17.4 54.9 15.2 47.3 Lan et al. (2017) 47.8 57.4 Dai and Huang (2018) 48.7 58.2 Bai and Zhao (2018) 51.1 Bai et al. (2019) 52.2 60.7 Nguyen et al. (2019) 53.0 BERT (base, uncased) 52.6 64.3 62.1 69.0 BERT (large, uncased) 59.1 68.7 66.8 72.4 XLNet (base, cased) 56.0 66.3 64.8 71.3 XLNet (large, cased) 54.3 67.2 68.3 73.8 Table 9: Accuracy and F1 on L1 classification (4-way) for PDTB 2.0 and 3.0, using Ji split for both. We report average performance across 5 random restarts. E Single-span Baselines for L2 Classification Table 10 lists the performance of single-span (either ARG1 or ARG2) baselines for PDTB 2.0. Results on PDTB 3.0 are reported in Table 4. We additionally note that ARG2-only models consistently outperform ARG1-only models in both PDTB 2.0 and 3.0. For PDTB 3.0, the strong association between ARG2-initial to and CONTINGENCY.PURPOSE was largely responsible for this discrepancy (see Section 4 also). F Cross-validation and Randomized validation Gorman and Bedrick (2019) have proposed validation over randomized splits using significance testing with multiple-comparisons correction. An adaptation of this idea to our proposal of sectionbased evaluation would be randomized sampling of sections to create section-based splits. Given label sparsity and distributional skew across sections, cross-validation has an advantage of guaranteed coverage of label counts used for testing, although this may not be a large issue if sufficient number of random splits are sampled. Conversely, the main goal of evaluation on random splits—avoiding overfitting to the standard split—is partially mitigated by reporting the average performance over crossvalidation splits. Still, if a standard cross-validation split is adopted, overfitting may still arise over time. Although we leave it to future work to decide which practice should be followed, we provide comparisons between the four models we tested, using our proposed cross-validation splits and random validation splits (both n = 12). Random splitting was done section-wise instead of instance-wise; we randomly split the dataset into 21 train, 2 dev, 2 test sections 12 times. Table 11 shows the model comparison results. G Additional Error Analyses Figure 2 shows the confusion matrices generated from PDTB 2.0 L2 classification results produced by XLNet-large and BERT-large models. Figure 3 shows the confusion matrices of PDTB 3.0 L2 classification predictions, again from XLNet-large and BERT-large models (we did not observe immediate qualitative differences between XLNet and BERT, or between large and base models). The figures aggregate the predictions from all test sets of the cross-validation experiment, so the datapoints shown span the full dataset except for WSJ section 22. The colors are normalized over each row; the darkest shade is the most frequently predicted label for the true label denoted by the row. It was generally the case for both models that classes sharing the same L1 senses (e.g., CONTINGENCY.CAUSE and CONTINGENCY.PRAGMATIC CAUSE, or COMPARISON.CONTRAST and COMPARISON.CONCESSION) were confused. When such confusions occurred, the more frequent class often subsumed the prediction of the other class (e.g., CONTINGENCY.PRAGMATIC CAUSE was often classified as CONTINGENCY.CAUSE but not vice versa). As noted in Section 4, TEMPORAL.SYNCHRONOUS (SYNCHRONY in PDTB 5413 Accuracy X-Accuracy Model Ji Lin P&K Majority class 26.18 26.11 28.54 26.42 Adversarial Net (Qin et al., 2017) 46.23 44.65 Seq2Seq+MemNet (Shi and Demberg, 2019) 47.83 45.82 41.29 ELMo (Bai and Zhao, 2018) 48.22 45.73 ELMo, Memory augmented (Bai et al., 2019) 49.15 46.08 Multitask learning (Nguyen et al., 2019) 49.95 46.48 BERT+MNLI (Nie et al., 2019) 53.7 BERT+DisSent Books 5 (Nie et al., 2019) 54.7 BERT (base, uncased), ARG1-only 38.59 (±0.67) 36.11 (±1.01) 35.86 (±1.43) 36.66 (±1.26) BERT (large, uncased), ARG1-only 39.31 (±0.70) 36.42 (±0.21) 37.71 (±1.42) 37.23 (±1.22) XLNet (base, cased), ARG1-only 39.48 (±1.10) 35.40 (±1.06) 35.71 (±1.32) 37.38 (±1.76) XLNet (large, cased), ARG1-only 39.77 (±1.58) 35.61 (±1.48) 36.20 (±1.77) 36.33 (±2.04) BERT (base, uncased), ARG2-only 40.99 (±1.34) 40.99 (±1.34) 40.98 (±1.12) 40.60 (±1.48) BERT (large, uncased), ARG2-only 44.27 (±1.00) 40.78 (±1.33) 42.34 (±1.21) 41.45 (±1.64) XLNet (base, cased), ARG2-only 43.20 (±1.48) 40.84 (±0.99) 40.45 (±1.22) 40.46 (±1.45) XLNet (large, cased), ARG2-only 42.00 (±1.24) 41.78 (±1.00) 41.48 (±1.14) 41.17 (±1.48) Table 10: Single-span baseline performance on PDTB 2.0 L2 classification (11-way). All results are averages over 5 random restarts, except for cross-validation where we report averages over 12 folds. X-validation Randomized BERT-base vs BERT-large 8 9 BERT-base vs XLNet-base 8 6 BERT-base vs XLNet-large 12 12 BERT-large vs XLNet-large 6 7 XLNet-base vs BERT-large 0 1 XLNet-base vs XLNet-large 6 10 Table 11: The number of splits out of twelve for which the second model had significantly higher accuracy than the first model after Bonferroni correction. We used McNemar’s test following Gorman and Bedrick (2019). 2.0) was frequently confused with EXPANSION.CONJUNCTION (but not vice versa). The models generally had a tendency to predict CONTINGENCY.CAUSE across the board, likely due to it being the most frequent label. 5414 Temporal.Asynchronous Temporal.Synchrony Contingency.Cause Contingency.Pragmatic cause Comparison.Contrast Comparison.Concession Expansion.Conjunction Expansion.Instantiation Expansion.Restatement Expansion.Alternative Expansion.List Temporal.Asynchronous Temporal.Synchrony Contingency.Cause Contingency.Pragmatic cause Comparison.Contrast Comparison.Concession Expansion.Conjunction Expansion.Instantiation Expansion.Restatement Expansion.Alternative Expansion.List 300 0 111 0 67 0 95 5 42 0 2 9 0 23 0 27 0 85 6 15 0 2 83 0 2676 0 232 0 335 136 521 10 1 0 0 44 0 2 0 4 4 13 0 0 50 0 296 0 1181 0 324 21 84 15 13 12 0 38 0 122 0 28 3 11 0 0 85 0 523 0 355 0 1910 77 306 3 69 8 0 121 0 13 0 79 880 245 0 2 31 0 647 0 119 0 302 285 1616 6 1 1 0 40 0 38 0 14 2 49 32 2 3 0 23 0 19 0 239 8 15 0 74 PDTB2, XLNet-large Temporal.Asynchronous Temporal.Synchrony Contingency.Cause Contingency.Pragmatic cause Comparison.Contrast Comparison.Concession Expansion.Conjunction Expansion.Instantiation Expansion.Restatement Expansion.Alternative Expansion.List 302 0 111 0 58 0 94 8 47 0 2 6 0 23 0 19 0 90 7 18 0 4 92 0 2674 1 215 0 333 126 536 16 1 1 0 43 0 2 0 2 5 14 0 0 66 0 374 0 933 0 424 28 128 24 7 10 0 50 0 92 0 45 3 14 0 0 101 0 562 1 308 0 1854 77 350 3 72 2 0 151 0 12 0 87 805 288 2 1 40 1 798 0 97 0 387 279 1386 16 3 1 0 46 0 29 0 14 2 33 53 0 4 0 22 0 14 0 246 5 18 0 72 PDTB2, BERT-large Figure 2: Confusion matrices of XLNet-large and BERT-large models on PDTB 2.0 L2 classification task. The rows are true labels and the columns are predicted labels. Temporal.Asynchronous Temporal.Synchronous Contingency.Cause Contingency.Cause+Belief Contingency.Condition Contingency.Purpose Comparison.Contrast Comparison.Concession Expansion.Conjunction Expansion.Instantiation Expansion.Equivalence Expansion.Level-of-detail Expansion.Manner Expansion.Substitution Temporal.Asynchronous Temporal.Synchronous Contingency.Cause Contingency.Cause+Belief Contingency.Condition Contingency.Purpose Comparison.Contrast Comparison.Concession Expansion.Conjunction Expansion.Instantiation Expansion.Equivalence Expansion.Level-of-detail Expansion.Manner Expansion.Substitution 778 7 148 0 0 4 29 58 122 8 0 42 8 6 26 98 57 0 0 14 44 27 190 4 0 30 16 4 136 6 4166 0 2 25 26 218 355 164 19 378 43 34 3 0 130 0 0 0 2 6 12 16 0 21 1 5 1 1 23 0 132 25 2 3 0 0 0 2 1 0 5 13 16 0 31 1170 0 0 3 0 0 8 69 11 23 2 60 0 0 0 438 114 131 6 0 12 0 34 45 2 219 0 0 1 86 881 143 14 0 33 6 6 116 25 548 0 0 2 144 208 2846 88 8 239 8 21 6 3 116 0 0 1 2 11 85 966 0 229 2 2 3 0 148 0 0 2 0 12 41 7 26 80 4 5 47 13 620 0 0 13 18 72 305 322 25 1626 47 23 16 7 24 0 0 14 0 1 3 2 0 44 107 1 6 1 55 0 0 0 5 20 14 2 2 17 0 234 PDTB3, XLNet-large Temporal.Asynchronous Temporal.Synchronous Contingency.Cause Contingency.Cause+Belief Contingency.Condition Contingency.Purpose Comparison.Contrast Comparison.Concession Expansion.Conjunction Expansion.Instantiation Expansion.Equivalence Expansion.Level-of-detail Expansion.Manner Expansion.Substitution 728 7 181 0 0 5 23 51 139 9 0 48 14 5 29 84 65 0 1 8 41 22 195 7 0 34 23 1 130 9 3950 0 3 25 34 215 459 143 18 468 65 53 5 0 128 0 0 0 2 4 11 12 0 31 0 3 1 1 15 0 138 19 0 6 0 1 0 4 4 1 2 9 14 0 30 1131 0 0 3 0 0 9 116 12 20 3 78 0 0 0 373 105 164 11 1 34 0 31 51 5 334 0 0 0 65 680 189 18 1 81 1 11 119 12 610 0 1 4 162 182 2748 83 8 300 10 14 3 0 171 0 0 0 8 7 103 834 0 289 5 3 5 0 172 0 0 1 4 12 62 6 12 47 3 4 53 10 705 0 0 12 23 82 379 294 7 1499 44 23 12 7 28 0 0 10 0 0 4 0 0 37 118 3 6 1 73 0 0 1 7 6 17 1 1 20 2 221 PDTB3, BERT-large Figure 3: Confusion matrices of XLNet-large and BERT-large models on PDTB 3.0 L2 classification task. The rows are true labels and the columns are predicted labels.
2020
480
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5415–5428 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5415 PeTra: A Sparsely Supervised Memory Model for People Tracking Shubham Toshniwal1, Allyson Ettinger2, Kevin Gimpel1, Karen Livescu1 1Toyota Technological Institute at Chicago 2Department of Linguistics, University of Chicago {shtoshni, kgimpel, klivescu}@ttic.edu, [email protected] Abstract We propose PeTra, a memory-augmented neural network designed to track entities in its memory slots. PeTra is trained using sparse annotation from the GAP pronoun resolution dataset and outperforms a prior memory model on the task while using a simpler architecture. We empirically compare key modeling choices, finding that we can simplify several aspects of the design of the memory module while retaining strong performance. To measure the people tracking capability of memory models, we (a) propose a new diagnostic evaluation based on counting the number of unique entities in text, and (b) conduct a small scale human evaluation to compare evidence of people tracking in the memory logs of PeTra relative to a previous approach. PeTra is highly effective in both evaluations, demonstrating its ability to track people in its memory despite being trained with limited annotation. 1 Introduction Understanding text narratives requires maintaining and resolving entity references over arbitrary-length spans. Current approaches for coreference resolution (Clark and Manning, 2016b; Lee et al., 2017, 2018; Wu et al., 2019) scale quadratically (without heuristics) with length of text, and hence are impractical for long narratives. These models are also cognitively implausible, lacking the incrementality of human language processing (Tanenhaus et al., 1995; Keller, 2010). Memory models with finite memory and online/quasi-online entity resolution have linear runtime complexity, offering more scalability, cognitive plausibility, and interpretability. Memory models can be viewed as general problem solvers with external memory mimicking a Turing tape (Graves et al., 2014, 2016). Some of the earliest applications of memory networks in language understanding were for question answering, where the external memory simply stored all of the word/sentence embeddings for a document (Sukhbaatar et al., 2015; Kumar et al., 2016). To endow more structure and interpretability to memory, key-value memory networks were introduced by Miller et al. (2016). The key-value architecture has since been used for narrative understanding and other tasks where the memory is intended to learn to track entities while being guided by varying degrees of supervision (Henaff et al., 2017; Liu et al., 2018a,b, 2019a). We propose a new memory model, PeTra, for entity tracking and coreference resolution, inspired by the recent Referential Reader model (Liu et al., 2019a) but substantially simpler. Experiments on the GAP (Webster et al., 2018) pronoun resolution task show that PeTra outperforms the Referential Reader with fewer parameters and simpler architecture. Importantly, while Referential Reader performance degrades with larger memory, PeTra improves with increase in memory capacity (before saturation), which should enable tracking of a larger number of entities. We conduct experiments to assess various memory architecture decisions, such as learning of memory initialization and separation of memory slots into key/value pairs. To test interpretability of memory models’ entity tracking, we propose a new diagnostic evaluation based on entity counting—a task that the models are not explicitly trained for—using a small amount of annotated data. Additionally, we conduct a small scale human evaluation to assess quality of people tracking based on model memory logs. PeTra substantially outperforms Referential Reader on both measures, indicating better and more interpretable tracking of people.1 1Code available at https://github.com/ shtoshni92/petra 5416 The IG character IG Amelia OW Shepherd CR , portrayed by . . . IG Caterina OW Scorsone CR , visits . . . IG her CR friend IG Addison OW Figure 1: Illustration of memory cell updates in an example sentence where IG = ignore, OW = overwrite, CR = coref. Different patterns indicate the different entities, and an empty pattern indicates that the cell has not been used. The updated memory cells at each time step are highlighted. 2 Model Figure 2 depicts PeTra, which consists of three components: an input encoder that given the tokens generates the token embeddings, a memory module that tracks information about the entities present in the text, and a controller network that acts as an interface between the encoder and the memory. BERT Encoder wt ht Mt−1 Mt . . . . . . Controller et, ot, ct . . . . . . . . . . . . GRU Hidden States Input Tokens Memory Controller Outputs Figure 2: Proposed model. 2.1 Input Encoder Given a document consisting of a sequence of tokens {w1, · · · , wT }, we first pass the document through a fixed pretrained BERT model (Devlin et al., 2019) to extract contextual token embeddings. Next, the BERT-based token embeddings are fed into a single-layer unidirectional Gated Recurrent Unit (GRU) (Cho et al., 2014) running left-to-right to get task-specific token embeddings {h1, · · · , hT }. 2.2 Memory The memory Mt consists of N memory cells. The ith memory cell state at time step t consists of a tuple (mi t, ui t) where the vector mi t represents the content of the memory cell, and the scalar ui t ∈ [0, 1] represents its recency of usage. A high value of ui t is intended to mean that the cell is tracking an entity that has been recently mentioned. Initialization Memory cells are initialized to the null tuple, i.e. (0, 0); thus, our memory is parameterfree. This is in contrast with previous entity tracking models such as EntNet (Henaff et al., 2017) and the Referential Reader (Liu et al., 2019a) where memory initialization is learned and the cells are represented with separate key and value vectors. We will later discuss variants of our memory with some of these changes. 2.3 Controller At each time step t the controller network determines whether token t is part of an entity span and, if so, whether the token is coreferent with any of the entities already being tracked by the memory. Depending on these two variables, there are three possible actions: (i) IGNORE: The token is not part of any entity span, in which case we simply ignore it. (ii) OVERWRITE: The token is part of an entity span but is not already being tracked in the memory. (iii) COREF: The token is part of an entity span and the entity is being tracked in the memory. Therefore, the two ways of updating the memory are OVERWRITE and COREF. There is a strict ordering constraint to the two operations: OVERWRITE precedes COREF, because it is not possible to corefer with a memory cell that is not yet tracking anything. That is, the COREF operation cannot be applied to a previously unwritten memory cell, i.e. one with ui t = 0. Figure 1 illustrates an idealized version of this process. Next we describe in detail the computation of the probabilities of the two operations for each memory cell at each time step t. 5417 First, the entity mention probability et, which reflects the probability that the current token wt is part of an entity mention, is computed by: et = σ(MLP1(ht)) (1) where MLP1 is a multi-layer perceptron and σ is the logistic function. Overwrite and Coref If the current token wt is part of an entity mention, we need to determine whether it corresponds to an entity being currently tracked by the memory or not. For this we compute the similarity between the token embedding ht and the contents of the memory cells currently tracking entities. For the ith memory cell with memory vector mi t−1 the similarity with ht is given by: simi t = MLP2([ht; mi t−1; ht ⊙mi t−1; ui t−1]) (2) where MLP2 is a second MLP and ⊙is the Hadamard (elementwise) product. The usage scalar ui t−1 in the above expression provides a notion of distance between the last mention of the entity in cell i and the potential current mention. The higher the value of ui t−1, the more likely there was a recent mention of the entity being tracked by the cell. Thus ui t−1 provides an alternative to distance-based features commonly used in pairwise scores for spans (Lee et al., 2017). Given the entity mention probability et and similarity score simi t, we define the coref score csi t as: csi t = simi t −∞· 1[ui t−1 = 0] (3) where the second term ensures that the model does not predict coreference with a memory cell that has not been previously used, something not enforced by Liu et al. (2019a).2 Assuming the coref score for a new entity to be 0,3 we compute the coref probability ci t and new entity probability nt as follows:      c1 t... cN t nt     = et · softmax      cs1 t... csN t 0      (4) Based on the memory usage scalars ui t and the new entity probability nt, the overwrite probability for 2A threshold higher than 0 can also be used to limit coreference to only more recent mentions. 3The new entity coref score is a free variable that can be assigned any value, since only the relative value matters. each memory cell is determined as follows: oi t = nt · 1i=arg minj uj t−1 (5) Thus we pick the cell with the lowest usage scalar uj t−1 to OVERWRITE. In case of a tie, a cell is picked randomly among the ones with the lowest usage scalar. The above operation is non-differentiable, so during training we instead use oi t = nt · GS 1 −ui t−1 τ  i (6) where GS(.) refers to Gumbel-Softmax (Jang et al., 2017), which makes overwrites differentiable. For each memory cell, the memory vector is updated based on the three possibilities of ignoring the current token, being coreferent with the token, or considering the token to represent a new entity (causing an overwrite): mi t = IGNORE z }| { (1 −(oi t + ci t))mi t−1 + OVERWRITE z }| { oi t · ht + ci t · MLP3([ht; mi t−1]) | {z } COREF (7) In this expression, the coreference term takes into account both the previous cell vector mi t−1 and the current token representation ht, while the overwrite term is based only on ht. In contrast to a similar memory update equation in the Referential Reader which employs a pair of GRUs and MLPs for each memory cell, our update parameter uses just MLP3 which is memory cell-agnostic. Finally, the memory usage scalar is updated as ui t = min(1, oi t + ci t + γ · ui t−1) (8) where γ ∈(0, 1) is the decay rate for the usage scalar. Thus the usage scalar ui t keeps decaying with time unless the memory is updated via OVERWRITE or COREF in which case the value is increased to reflect the memory cell’s recent use. Memory Variants In vanilla PeTra, each memory cell is represented as a single vector and the memory is parameter-free, so the total number of model parameters is independent of memory size. This is a property that is shared with, for example, differentiable neural computers (Graves et al., 2016). On the other hand, recent models for entity tracking, such as the EntNet (Henaff et al., 2017) and the Referential Reader (Liu et al., 2019a), learn 5418 memory initialization parameters and separate the memory cell into key-value pairs. To compare these memory cell architectures, we investigate the following two variants of PeTra: 1. PeTra + Learned Initialization: memory cells are initialized at t = 0 to learned parameter vectors. 2. PeTra + Fixed Key: a fixed dimensions of each memory cell are initialized with learned parameters and kept fixed throughout the document read, as in EntNet (Henaff et al., 2017). Apart from initialization, the initial cell vectors are also used to break ties for overwrites in Eqs. (5) and (6) when deciding among unused cells (with ui t = 0). The criterion for breaking the tie is the similarity score computed using Eq. (2). 2.4 Coreference Link Probability The probability that the tokens wt1 and wt2 are coreferential according to, say, cell i of the memory depends on three things: (a) wt1 is identified as part of an entity mention and is either overwritten to cell i or is part of an earlier coreference chain for an entity tracked by cell i, (b) Cell i is not overwritten by any other entity mention from t = t1 + 1 to t = t2, and (c) wt2 is also predicted to be part of an entity mention and is coreferential with cell i. Combining these factors and marginalizing over the cell index results in the following expression for the coreference link probability: PCL(wt1, wt2) = N X i=1 (oi t1 + ci t1) · t2 Y j=t1+1 (1 −oi j) · ci t2 (9) 2.5 Losses The GAP (Webster et al., 2018) training dataset is small and provides sparse supervision with labels for only two coreference links per instance. In order to compensate for this lack of supervision, we use a heuristic loss Lent over entity mention probabilities in combination with the end task loss Lcoref for coreference. The two losses are combined with a tunable hyperparameter λ resulting in the following total loss: L = Lcoref + λLent. 2.5.1 Coreference Loss The coreference loss is the binary cross entropy between the ground truth labels for mention pairs and the coreference link probability PCL in Eq. (9). Eq. (9) expects a pair of tokens while the annotations are on pairs of spans, so we compute the loss for all ground truth token pairs: Lcoref = X (sa,sb,yab)∈G X wa∈sa X wb∈sb H(yab, PCL(wa, wb)) ! where G is the set of annotated span pairs and H(p, q) represents the cross entropy of the distribution q relative to distribution p. Apart from the ground truth labels, we use “implied labels” in the coreference loss calculation. For handling multi-token spans, we assume that all tokens following the head token are coreferential with the head token (self-links). We infer more supervision based on knowledge of the setup of the GAP task. Each GAP instance has two candidate names and a pronoun mention with supervision provided for the {name, pronoun} pairs. By design the two names are different, and therefore we use them as a negative coreference pair. Even after the addition of this implied supervision, our coreference loss calculation is restricted to the three mention spans in each training instance; therefore, the running time is O(T) for finite-sized mention spans. In contrast, Liu et al. (2019a) compute the above coreference loss for all token pairs (assuming a negative label for all pairs outside of the mentions), which results in a runtime of O(T 3) due to the O(T 2) pairs and O(T) computation per pair, and thus will scale poorly to long documents. 2.5.2 Entity Mention Loss We use the inductive bias that most tokens do not correspond to entities by imposing a loss on the average of the entity mention probabilities predicted across time steps, after masking out the labeled entity spans. For a training instance where spans sA and sB correspond to the person mentions and span sP is a pronoun, the entity mention loss is Lent = PT t=1 et · mt PT t=1 mt where mt = 0 if wt ∈sA ∪sB ∪sP and mt = 1 otherwise. Each GAP instance has only 3 labeled entity mention spans, but the text typically has other entity mentions that are not labeled. Unlabeled entity mentions will be inhibited by this loss. However, on average there are far more tokens outside entity spans than inside the spans. In experiments without 5419 this loss, we observed that the model is susceptible to predicting a high entity probability for all tokens while still performing well on the end task of pronoun resolution. We are interested in tracking people beyond just the entities that are labeled in the GAP task, for which this loss is very helpful. 3 Experimental Setup 3.1 Data GAP is a gender-balanced pronoun resolution dataset introduced by Webster et al. (2018). Each instance consists of a small snippet of text from Wikipedia, two spans corresponding to candidate names along with a pronoun span, and two binary labels indicating the coreference relationship between the pronoun and the two candidate names. Relative to other popular coreference datasets (Pradhan et al., 2012; Chen et al., 2018), GAP is comparatively small and sparsely annotated. We choose GAP because its small size allows us to do extensive experiments. 3.2 Model Details For the input BERT embeddings, we concatenate either the last four layers of BERTBASE, or layers 19–22 of BERTLARGE since those layers have been found to carry the most information related to coreference (Liu et al., 2019b). The BERT embeddings are fed to a 300-dimensional GRU model, which matches the dimensionality of the memory vectors. We vary the number of memory cells N from 2 to 20. The decay rate for the memory usage scalar γ is 0.98. The MLPs used for predicting the entity probability and similarity score consist of two 300-dimensional ReLU hidden layers. For the Fixed Key variant of PeTra we use 20 dimensions for the learned key vector and the remaining 280 dimensions as the value vector. 3.3 Training All models are trained for a maximum of 100 epochs with the Adam optimizer (Kingma and Ba, 2015). The learning rate is initialized to 10−3 and is reduced by half, until a minimum of 10−4, whenever there is no improvement on the validation performance for the last 5 epochs. Training stops when there is no improvement in validation performance for the last 15 epochs. The temperature τ of the Gumbel-Softmax distribution used in the OVERWRITE operation is initialized to 1 and halved every 10 epochs. The coreference loss terms in Section 2.5.1 are weighted differently for different coreference links: (a) self-link losses for multi-token spans are given a weight of 1, (b) positive coreference link losses are weighted by 5, and (c) negative coreference link losses are multiplied by 50. To prevent overfitting: (a) we use early stopping based on validation performance, and (b) apply dropout at a rate of 0.5 on the output of the GRU model. Finally, we choose λ = 0.1 to weight the entity prediction loss described in Section 2.5.2. 3.4 People Tracking Evaluation One of the goals of this work is to develop memory models that not only do well on the coreference resolution task, but also are interpretable in the sense that the memory cells actually track entities. Hence in addition to reporting the standard metrics on GAP, we consider two other ways to evaluate memory models. As our first task, we propose an auxiliary entitycounting task. We take 100 examples from the GAP validation set and annotate them with the number of unique people mentioned in them.4 We test the models by predicting the number of people from their memory logs as explained in Section 3.5. The motivation behind this exercise is that if a memory model is truly tracking entities, then its memory usage logs should allow us to recover this information. To assess the people tracking performance more holistically, we conduct a human evaluation in which we ask annotators to assess the memory models on people tracking performance, defined as:(a) detecting references to people including pronouns, and (b) maintaining a 1-to-1 correspondence between people and memory cells. For this study, we pick the best run (among 5 runs) of PeTra and the Referential Reader for the 8-cell configuration using BERTBASE (PeTra: 81 F1; Referential Reader: 79 F1). Next we randomly pick 50 documents (without replacement) from the GAP dev set and split those into groups of 10 to get 5 evaluation sets. We shuffle the original 50 documents and follow the same steps to get another 5 evaluation sets. In the end, we have a total of 10 evaluation sets with 10 documents each, where each unique document belongs to exactly 2 evaluation sets. We recruit 10 annotators for the 10 evaluation sets. The annotators are shown memory log visualizations as in Figure 5, and instructed to compare 4In the GAP dataset, the only relevant entities are people. 5420 (a) BERTBASE (b) BERTLARGE Figure 3: Mean F1 score on the GAP validation set as a function of the number of memory cells. the models on their people tracking performance (detailed instructions in Appendix A.3). For each document the annotators are presented memory logs of the two models (ordered randomly) and asked whether they prefer the first model, prefer the second model, or have no preference (neutral). 3.5 Inference GAP Given a pronoun span sP and two candidate name spans sA & sB, we have to predict binary labels for potential coreference links between (sA, sP ) and (sB, sP ). Thus, for a pair of entity spans, say sA and sP , we predict the coreference link probability as: PCL(sA, sP ) = max wA∈sA,wP ∈sP PCL(wA, wP ) where PCL(wA, wP ) is calculated using the procedure described in Section 2.45. The final binary prediction is made by comparing the probability against a threshold. Counting unique people For the test of unique people counting, we discretize the overwrite operation, which corresponds to new entities, against a threshold α and sum over all tokens and all memory cells to predict the count as follows: # unique people = T X t=1 N X i=1 1[oi t ≥α] 3.6 Evaluation Metrics For GAP we evaluate models using F-score.6 First, we pick a threshold from the set {0.01, 0.02, · · · , 5The computation of this probability includes the mention detection steps required byWebster et al. (2018). 6GAP also includes evaluation related to gender bias, but this is not a focus of this paper so we do not report it. 1.00} which maximizes the validation F-score. This threshold is then used to evaluate performance on the GAP test set. For the interpretability task of counting unique people, we choose a threshold that minimizes the absolute difference between ground truth count and predicted count summed over the 100 annotated examples. We select the best threshold from the set {0.01, 0.02, · · · , 1.00}. The metric is then the number of errors corresponding to the best threshold.7 3.7 Baselines The Referential Reader (Liu et al., 2019a) is the most relevant baseline in the literature, and the most similar to PeTra. The numbers reported by Liu et al. (2019a) are obtained by a version of the model using BERTBASE, with only two memory cells. To compare against PeTra for other configurations, we retrain the Referential Reader using the code made available by the authors.8 We also report the results of Joshi et al. (2019) and Wu et al. (2019), although these numbers are not comparable since both of them train on the much larger OntoNotes corpus and just test on GAP. 4 Results 4.1 GAP results We train all the memory models, including the Referential Reader, with memory size varying from {2, 4, · · · , 20} memory cells for both BERTBASE and BERTLARGE, with each configuration being trained 5 times. Figure 3 shows the performance of the 7Note that the error we report is therefore a best-case result. We are not proposing a way of counting unique people in new test data, but rather using this task for analysis. 8https://github.com/liufly/refreader 5421 (a) BERTBASE (b) BERTLARGE Figure 4: Error in counting unique people as a function of number of memory cells; lower is better. BERTBASE BERTLARGE PeTra 81.5 ± 0.6 85.3 ± 0.6 + Learned Init. 80.9 ± 0.7 84.4 ± 1.2 + Fixed Key 81.1 ± 0.7 85.1 ± 0.8 Ref. Reader 78.9 ± 1.3 83.7 ± 0.8 Ref. Reader (2019a) 78.8 Joshi et al. (2019) 82.8 85.0 Wu et al. (2019) 87.5 (SpanBERT) Table 1: Results (%F1) on the GAP test set. models on the GAP validation set as a function of memory size. The Referential Reader outperforms PeTra (and its memory variants) when using a small number of memory cells, but its performance starts degrading after 4 and 6 memory cells for BERTBASE and BERTLARGE respectively. PeTra and its memory variants, in contrast, keep improving with increased memory size (before saturation at a higher number of cells) and outperform the best Referential Reader performance for all memory sizes ≥6 cells. With larger numbers of memory cells, we see a higher variance, but the curves for PeTra and its memory variants are still consistently higher than those of the Referential Reader. Among different memory variants of PeTra, when using BERTBASE the performances are comparable with no clear advantage for any particular choice. For BERTLARGE, however, vanilla PeTra has a clear edge for almost all memory sizes, suggesting the limited utility of initialization. The results show that PeTra works well without learning vectors for initializing the key or memory cell contents. Rather, we can remove the key/value distinction and simply initialize all memory cells with the zero vector. To evaluate on the GAP test set, we pick the memory size corresponding to the best validation performance for all memory models. Table 1 shows that the trends from validation hold true for test as well, with PeTra outperforming the Referential Reader and the other memory variants of PeTra. 4.2 Counting unique people Figure 4 shows the results for the proposed interpretability task of counting unique people. For both BERTBASE and BERTLARGE, PeTra achieves the lowest error count. Interestingly, from Figure 4b we can see that for ≥14 memory cells, the other memory variants of PeTra perform worse than the Referential Reader while being better at the GAP validation task (see Figure 3b). This shows that a better performing model is not necessarily better at tracking people. BERTBASE BERTLARGE PeTra 0.76 0.69 + Learned Init 0.72 0.60 + Fixed Key 0.72 0.65 Ref. Reader 0.49 0.54 Table 2: Spearman’s correlation between GAP validation F1 and negative error count for unique people. To test the relationship between the GAP task and the proposed interpretability task, we compute the correlation between the GAP F-score and the negative count of unique people for each model separately.9 Table 2 shows the Spearman’s correlation between these measures. For all models we see a positive correlation, indicating that a dip in coreference performance corresponds to an increase in error on counting unique people. The correlations for PeTra are especially high, again suggesting it’s greater interpretability. 9Each correlation is computed over 50 runs (5 runs each for 10 memory sizes). 5422 Amelia Shepherd1 , M.D. is a fictional character on the ABC American television medical drama Private Practice, and the spinoff series’ progenitor show, Grey’s Anatomy, portrayed by Caterina Scorsone2 . In her1 debut appearance in season three, Amelia1 visited her former sister-in-law, Addison Montgomery3 , and became a partner at the Oceanside Wellness Group. CR OW CR OW CR OW [CLS] Amelia Shepherd , M . D . is a fictional character ... ... portrayed by Cat ##erina Sc ##ors ##one . In her debut appearance in season three , Amelia visited her former sister in law , Addison Montgomery , ... ... Well ##ness Group . [SEP] CR OW Memory Cells (a) A successful run of PeTra with 4 memory cells. The model accurately links all the mentions of “Amelia” to the same memory cell while also detecting other people in the discourse. Bethenny1 calls a meeting to get everyone on the same page, but Jason2 is hostile with the group, making things worse and forcing Bethenny1 to play referee. Emotions are running high with Bethenny1 ’s assistant, Julie3 , who breaks down at a lunch meeting when asked if she3 is committed to the company for the long haul. CR OW CR OW CR OW CR OW CR OW CR OW CR OW [CLS] Beth ##en ##ny calls ... ... worse and forcing Beth ##en ##ny to play referee . Em ##otion ##s are running high with Beth ##en ##ny ' s assistant , Julie , who breaks down at a lunch meeting when asked if she is committed to the company for the long haul . [SEP] CR OW Memory Cells (b) Memory log of PeTra with 8 memory cells. The model correctly links “she” and “Julie” but fails at linking the three “Bethenny” mentions, and also fails at detecting “Jason”. Figure 5: Visualization of memory logs for different configurations of PeTra. The documents have their GAP annotations highlighted in red (italics) and blue (bold), with blue (bold) corresponding to the right answer. For illustration purposes only, we highlight all the spans corresponding to mentions of people and mark cluster indices as subscript. In the plot, X-axis corresponds to document tokens, and Y-axis corresponds to memory cells. Each memory cell has the OW=OVERWRITE and CR=COREF labels. Darker color implies higher value. We skip text, indicated via ellipsis, when the model doesn’t detect people for extended lengths of text. 4.3 Human Evaluation for People Tracking Model Preference (in %) PeTra 74 Ref. Reader 08 Neutral 18 Table 3: Human Evaluation results for people tracking. Table 3 summarizes the results of the human evaluation for people tracking. The annotators prefer PeTra in 74% cases while the Referential Reader for only 8% instances (see Appendix A.4 for visualizations comparing the two). Thus, PeTra easily outperforms the Referential Reader on this task even though they are quite close on the GAP evaluation. The annotators agree on 68% of the documents, disagree between PeTra and Neutral for 24% of the documents, and disagree between PeTra and the Referential Reader for the remaining 8% documents. For more details, see Appendix A.2. 5423 4.4 Model Runs We visualize two runs of PeTra with different configurations in Figure 5. For both instances the model gets the right pronoun resolution, but clearly in Figure 5b the model fails at correctly tracking repeated mentions of “Bethenny”. We believe these errors happen because (a) GAP supervision is limited to pronoun-proper name pairs, so the model is never explicitly supervised to link proper names, and (b) there is a lack of span-level features, which hurts the model when a name is split across multiple tokens. 5 Related Work There are several strands of related work, including prior work in developing neural models with external memory as well as variants that focus on modeling entities and entity relations, and neural models for coreference resolution. Memory-augmented models. Neural network architectures with external memory include memory networks (Weston et al., 2015; Sukhbaatar et al., 2015), neural Turing machines (Graves et al., 2014), and differentiable neural computers (Graves et al., 2016). This paper focuses on models with inductive biases that produce particular structures in the memory, specifically those related to entities. Models for tracking and relating entities. A number of existing models have targeted entity tracking and coreference links for a variety of tasks. EntNet (Henaff et al., 2017) aims to track entities via a memory model. EntityNLM (Ji et al., 2017) represents entities dynamically within a neural language model. Hoang et al. (2018) augment a reading comprehension model to track entities, incorporating a set of auxiliary losses to encourage capturing of reference relations in the text. Dhingra et al. (2018) introduce a modified GRU layer designed to aggregate information across coreferent mentions. Memory models for NLP tasks. Memory models have been applied to several other NLP tasks in addition to coreference resolution, including targeted aspect-based sentiment analysis (Liu et al., 2018b), machine translation (Maruf and Haffari, 2018), narrative modeling (Liu et al., 2018a), and dialog state tracking (Perez and Liu, 2017). Our study of architectural choices for memory may also be relevant to models for these tasks. Neural models for coreference resolution. Several neural models have been developed for coreference resolution, most of them focused on modeling pairwise interactions among mentions or spans in a document (Wiseman et al., 2015; Clark and Manning, 2016a; Lee et al., 2017, 2018). These models use heuristics to avoid computing scores for all possible span pairs in a document, an operation which is quadratic in the document length T assuming a maximum span length. Memory models for coreference resolution, including our model, differ by seeking to store information about entities in memory cells and then modeling the relationship between a token and a memory cell. This reduces computation from O(T 2) to O(TN), where N is the number of memory cells, allowing memory models to be applied to longer texts by using the global entity information. Past work (Wiseman et al., 2016) have used global features, but in conjunction with other features to score span pairs. Referential Reader. Most closely related to the present work is the Referential Reader (Liu et al., 2019a), which uses a memory model to perform coreference resolution incrementally. We significantly simplify this model to accomplish the same goal with far fewer parameters. 6 Conclusion and Future Work We propose a new memory model for entity tracking, which is trained using sparse coreference resolution supervision. The proposed model outperforms a previous approach with far fewer parameters and a simpler architecture. We propose a new diagnostic evaluation and conduct a human evaluation to test the interpretability of the model, and find that our model again does better on this evaluation. In future work, we plan to extend this work to longer documents such as the recently released dataset of Bamman et al. (2019). Acknowledgments This material is based upon work supported by the National Science Foundation under Award Nos. 1941178 and 1941160. We thank the ACL reviewers, Sam Wiseman, and Mrinmaya Sachan for their valuable feedback. We thank Fei Liu and Jacob Eisenstein for answering questions regarding the Referential Reader. Finally, we want to thank all the annotators at TTIC who participated in the human evaluation study. 5424 References David Bamman, Olivia Lewke, and Anya Mansoor. 2019. An Annotated Dataset of Coreference in English Literature. arXiv preprint arXiv:1912.01140. Hong Chen, Zhenhua Fan, Hao Lu, Alan Yuille, and Shu Rong. 2018. PreCo: A Large-scale Dataset in Preschool Vocabulary for Coreference Resolution. In EMNLP. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder– Decoder for Statistical Machine Translation. In EMNLP. Kevin Clark and Christopher D. Manning. 2016a. Deep Reinforcement Learning for Mention-Ranking Coreference Models. In EMNLP. Kevin Clark and Christopher D Manning. 2016b. Improving Coreference Resolution by Learning EntityLevel Distributed Representations. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural Models for Reasoning over Multiple Mentions Using Coreference. In NAACL. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing machines. arXiv preprint arXiv:1410.5401. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adri`aPuigdom`enech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471– 476. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2017. Tracking the world state with recurrent entity networks. In ICLR. Luong Hoang, Sam Wiseman, and Alexander Rush. 2018. Entity Tracking Improves Cloze-style Reading Comprehension. In EMNLP. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical Reparameterization with Gumbel-Softmax. In ICLR. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic Entity Representations in Neural Language Models. In EMNLP. Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for Coreference Resolution: Baselines and Analysis. In EMNLP. Frank Keller. 2010. Cognitively Plausible Models of Human Language Processing. In ACL. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In ICLR. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In ICML. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end Neural Coreference Resolution. In EMNLP. Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-Order Coreference Resolution with Coarseto-Fine Inference. In NAACL-HLT. Fei Liu, Trevor Cohn, and Timothy Baldwin. 2018a. Narrative Modeling with Memory Chains and Semantic Supervision. In ACL. Fei Liu, Trevor Cohn, and Timothy Baldwin. 2018b. Recurrent Entity Networks with Delayed Memory Update for Targeted Aspect-Based Sentiment Analysis. In NAACL-HLT. Fei Liu, Luke Zettlemoyer, and Jacob Eisenstein. 2019a. The Referential Reader: A recurrent entity network for anaphora resolution. In ACL. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019b. Linguistic Knowledge and Transferability of Contextual Representations. In NAACL-HLT. Sameen Maruf and Gholamreza Haffari. 2018. Document Context Neural Machine Translation with Memory Networks. In ACL. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-Value Memory Networks for Directly Reading Documents. In EMNLP. Julien Perez and Fei Liu. 2017. Dialog state tracking, a machine reading approach using Memory Network. In EACL. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL2012 Shared Task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on EMNLP and CoNLL - Shared Task, CoNLL ’12. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In NeurIPS. 5425 MK Tanenhaus, MJ Spivey-Knowlton, KM Eberhard, and JC Sedivy. 1995. Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217). Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns. In TACL, volume 6. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory Networks. In ICLR. Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning Anaphoricity and Antecedent Ranking Features for Coreference Resolution. In ACL-IJCNLP. Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning Global Features for Coreference Resolution. In NAACL. Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, and Jiwei Li. 2019. Coreference Resolution as Query-based Span Prediction. arXiv preprint arXiv:1911.01746. A Appendix A.1 Best Runs vs. Worst Runs As Table 1 shows, there is significant variance in the performance of these memory models. To analyze how the best runs diverge from the worst runs, we analyze how the controller network is using the different memory cells in terms of overwrites. For this analysis, we choose the best and worst among the 5 runs for each configuration, as determined by GAP validation performance. For the selected runs, we calculate the KL-divergence of the average overwrite probability distribution from the uniform distribution and average it for each model type. Table 4 shows that for the memory variants Learned Init and Fixed Key, the worst runs overwrite more to some memory cells than others (high average KLdivergence). Note that both PeTra and Referential Reader are by design intended to have no preference for any particular memory cell (which the numbers support), hence the low KL-divergence. Avg KL-div Best run Worst run PeTra 0.00 0.01 + Learned Init. 0.3 0.83 + Fixed Key 0.2 0.8 Ref. Reader 0.05 0.04 Table 4: A comparison of best runs vs. worst runs. A.2 Human Evaluation Results The agreement matrix for the human evaluation study described in Section 4.3 is shown in Figure 6. This agreement matrix is a result of the two annotations per document that we get as per the setup described in Section 3.4. Note that the annotations are coming from two sets of annotators rather than two individual annotators. This is also the reason why we don’t report standard inter-annotator agreement coefficients. PeTra Ref Reader Neutral Annotation 2 PeTra Ref Reader Neutral Annotation 1 29 4 2 12 0 3 Figure 6: Agreement matrix for human evaluation study. A.3 Instructions for Human Evaluation The detailed instructions for the human evaluation study described in Section 4.3 are shown in Figure 7. We simplified certain memory model specific terms such as “overwrite” to “new person” since the study was really about people tracking. A.4 Comparative visualization of memory logs of PeTra and the Referential Reader Figure 8 and 9 compare the memory logs of PeTra and the Referential Reader. 5426 • In this user study we will be comparing memory models at tracking people. • What are memory models? Memory models are neural networks coupled with an external memory which can be used for reading/writing. • (IMPORTANT) What does it mean to track people for memory models? – Detect all references to people which includes pronouns. – A 1-to-1 correspondence between people and memory cells i.e. all references corresponding to a person should be associated with the same memory cell AND each memory cell should be associated with at most 1 person. • The memory models use the following scores (which are visualized) to indicate the tracking decisions: – New Person Probability (Cell i): Probability that the token refers to a new person (not introduced in the text till now) and we start tracking it in cell i. – Coreference Probability (Cell i): Probability that the token refers to a person already being tracked in cell i. • The objective of this study is to compare the models on the interpretability of their memory logs i.e. are the models actually tracking entities or not. You can choose how you weigh the different requirements for tracking people (from 3). • For this study, you will compare two memory models with 8 memory cells (represented via 8 rows). The models are ordered randomly for each instance. • For each document, you can choose model A or model B, or stay neutral in case both the models perform similarly. Figure 7: Instructions for the human evaluation study. 5427 Neef1 took an individual silver medal at the 1994 European Cup behind Russia’s Svetlana Goncharenko2 and returned the following year to win gold. She1 was a finalist individually at the 1994 European Championships and came sixth for Scotland at the 1994 Commonwealth Games. (a) GAP validation instance 293. The ground truth GAP annotation is indicated via colors. CR OW CR OW CR OW CR OW CR OW CR OW CR OW [CLS] N ##ee ##f took an individual silver medal at the 1994 European Cup behind Russia ' s S ##vet ##lana Go ##nch ##are ##nko and returned the following year to win gold . She was a finalist individually at the 1994 European Championships and came sixth for Scotland at the 1994 Commonwealth Games . [SEP] CR OW Memory Cells (b) Memory log of PeTra with 8 memory cells. PeTra uses only 2 memory cells for the 2 unique people, namely Neef and Svetlana Goncharenko, and correctly resolves the pronoun. CR OW CR OW CR OW CR OW CR OW CR OW CR OW [CLS] N ##ee ##f took an individual silver medal at the 1994 European Cup behind Russia ' s S ##vet ##lana Go ##nch ##are ##nko and returned the following year to win gold . She was a finalist individually at the 1994 European Championships and came sixth for Scotland at the 1994 Commonwealth Games . [SEP] CR OW Memory Cells (c) Memory log of the Referential Reader with 8-memory cells. The Referential Reader does successfully resolve the pronoun in the topmost memory cell but it ends up tracking Neef in as many as 4 memory cells. Figure 8: Both the models only weakly detect “Svetlana Goncharenko” which could be due to lack of span modeling. 5428 Fripp1 has performed Soundscapes in several situations: * Fripp1 has featured Soundscapes on various King Crimson albums. He1 has also released pure Soundscape recordings as well: * On May 4, 2006, Steve Ball2 invited Robert Fripp1 back to the Microsoft campus for a second full day of work on Windows Vista following up on his1 first visit in the Fall of 2005. (a) GAP validation instance 17. The ground truth GAP annotation is indicated via colors. CR OW CR OW CR OW CR OW CR OW CR OW CR OW [CLS] Fr ##ip ##p has performed Sounds ##cape ##s in several situations : * Fr ##ip ##p has featured Sounds ##cape ##s on various King Crimson albums . He has also released ... ... 4 , 2006 , Steve Ball invited Robert Fr ##ip ##p back to the Microsoft campus ... ... on Windows Vista following up on his first visit in the Fall of 2005 . [SEP] CR OW Memory Cells (b) Memory log of PeTra with 8-memory cells. PeTra is pretty accurate at tracking Robert Fripp but it misses out on connecting “Fripp” from the earlier part of the document to “Robert Fripp”. CR OW CR OW CR OW CR OW CR OW CR OW CR OW [CLS] Fr ##ip ##p has performed Sounds ##cape ##s in several situations : * Fr ##ip ##p has featured Sounds ##cape ##s on various King Crimson albums . He has also released ... ... 4 , 2006 , Steve Ball invited Robert Fr ##ip ##p back to the Microsoft campus ... ... on Windows Vista following up on his first visit in the Fall of 2005 . [SEP] CR OW Memory Cells (c) Memory log of the Referential Reader with 8-memory cells. The Referential Reader completely misses out on all the mentions in the first half of the document (which is not penalized in GAP evaluations where the relevant annotations are typically towards the end of the document). Apart from this, the model ends up tracking Robert Fripp in as many as 6 memory cells, and Steve Ball in 3 memory cells. Figure 9: PeTra clearly performs better than the Referential Reader at people tracking for this instance. PeTra’s output is more sparse, detects more relevant mentions, and is better at maintaining a 1-to-1 correspondence between memory cells and people.
2020
481
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5429–5434 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5429 ZPR2: Joint Zero Pronoun Recovery and Resolution using Multi-Task Learning and BERT Linfeng Song1, Kun Xu1, Yue Zhang2,3, Jianshu Chen1 and Dong Yu1 1Tencent AI Lab, Bellevue, WA, USA 2School of Engineering, Westlake University, China 3Institute of Advanced Technology, Westlake Institute for Advanced Study, China Abstract Zero pronoun recovery and resolution aim at recovering the dropped pronoun and pointing out its anaphoric mentions, respectively. We propose to better explore their interaction by solving both tasks together, while the previous work treats them separately. For zero pronoun resolution, we study this task in a more realistic setting, where no parsing trees or only automatic trees are available, while most previous work assumes gold trees. Experiments on two benchmarks show that joint modeling significantly outperforms our baseline that already beats the previous state of the arts. Our code is available at https://github.com/ freesunshine0316/lab-zp-joint. 1 Introduction Zero pronoun (ZP) is a linguistic phenomenon where a pronoun is dropped for simplicity. Figure 1 shows an example, where two pronouns at positions φ1 and φ2 are omitted. They both refer to “fπ (The police)” in the sentence beginning and their original form is “÷Ï (they)”. The situation of dropping pronouns happens in most languages. While this phenomenon is not frequent in non-pro-drop languages, such as English, it is extremely severe for pro-drop languages, such as Chinese. In addition, dropped pronouns happens more frequently in conversations than in news. Our preliminary statistics of Chinese shows that 59.2% pronouns are dropped in a corpus of casual dialogues domain, while the number is just 41.6% in another data of broadcast news. In NLP, dropped pronouns can cause loss of important information, such as the subject or object of the central predicate in a sentence, introducing ambiguity to applications such as machine translation (Nakaiwa and Shirai, 1996; Wang et al., 2016; Takeno et al., 2016), question answering (Choi et al., 2018; Reddy et al., 2019; Sun et al., 2019; [ fπ ] 怀ë Ÿ / 一w —™Hˆ ,φ1 ⌃™∞å Æ⇧§送⇥à φ2  ⇧⌃H≈⇥ [ The police ] suspected that this is a criminal case about illegal guns , φ1 brought the guns and bags to the city φ2 to deal with the case . Figure 1: An zero pronoun example and its English translation, where φ1 and φ2 are zero pronouns pointing to the span in square brackets. Chen and Choi, 2016) and dialogue understanding (Chen et al., 2017; Rolih, 2018). As a result, zero pronouns have recently received much research attention (Liu et al., 2017; Yin et al., 2018a,b). We study Chinese zero pronoun in dialogue settings. There are two long-existing tasks namely zero pronoun recovery, which aims at recovering the original pronoun (such as “÷ (he)” and “y (she)”), and zero pronoun resolution, where the goal is to pinpoint the mention that each dropped pronoun refers to. Intuitively, the results of the two tasks highly interact with each other. Taking Figure 1 as an example, it will be much easier to resolute φ1 to “fπ (The police)” rather than “—™Hˆ (criminal case about illegal guns)” if we know φ1 corresponds to “÷Ï (they)”. Similarly, it would be more likely to recover φ1 as “÷Ï (they)” than other candidate pronouns, if we know φ1 points to “fπ (The police)”. Despite their high correlation, previous work considers them as irrelevant tasks, solving them separately by different models. This can waste training resources, as each task has a limited number of labeled instances, and thus data sparsity can limit model performance. Besides, we believe that it is unnecessary to keep a specific model for each task, as they can be close enough to be solved together. In addition, most zero pronoun resolution research (Chen and Ng, 2013, 2016; Kong and Zhou, 2010; Iida and Poesio, 2011; Sasano et al., 2008; Yin et al., 2018b; Yang et al., 2019) as5430 sumes gold trees being available with the positions of zero pronouns, which is unrealistic in practical applications. During decoding, a zero pronoun resolution model has to rely on automatic trees and zero pronoun detection, thus suffering from error propagation. In this paper, we propose to jointly solve both tasks under a heterogeneous multi-task learning framework, where each data point only has the annotation of one task, to benefit from the supervised data of both tasks. As the result, we enjoy the benefit of more supervised training data. To improve the robustness of heterogeneous training and introduce more supervision, we introduce zero pronoun detection, a common sub-task for both ZP resolution and recovery. Zero pronoun detection is a binaryclassification task aiming to detect whether a word space has a dropped pronoun. We consider ZP recovery as a sequence labeling task, regarding whether each word space has a dropped pronoun and what type the pronoun is. ZP resolution is solved as extractive reading comprehension (Rajpurkar et al., 2016), where each word space is taken as a query and its anaphoric mentions are treated as the answers. For non-ZP spaces where there is no corresponding anaphoric mentions, we assign the sentence beginning (span [0,0]) as the answer. Experiments on two benchmarks, OntoNotes 5.01 (ZP resolution) and BaiduZhdiao (Zhang et al., 2016) (ZP recovery), show that joint modeling gives us 1.5+ absolute F1-score gains for both tasks over our very strong baselines using BERT (Devlin et al., 2019). Our overall system gives an dramatic improvement of 3.5 F1 points over previous stateof-the-art results on both tasks. 2 Related work Previous work considers zero pronoun resolution and recovery separately. For zero pronoun recovery, existing methods can be classified according to the types of annotations they use. One line of work (Yang et al., 2015, 2019) simply relies on the human annotations, solving the task as sequence labeling. The other line of work (Chung and Gildea, 2010; Xiang et al., 2013; Wang et al., 2016) mines weak supervision signals from a large bilingual parallel corpus, where the other language is non-prodrop with fewer pronoun drops. The latter requires massive training data, and the MT performance is 1https://catalog.ldc.upenn.edu/LDC2013T19 the primary goal, thus we follow the first line of research using human-annotated data. Rao et al. (2015) studied zero pronoun resolution in multi-turn dialogues, claiming that their model does not rely on parsing trees to extract ZP positions and noun phrase as resolution candidates. However, they only consider the dropped pronouns that correspond to one of the dialogue participant. As a result, they only explore a small subset of the entire ZP resolution problem, and their task is closer to zero pronoun recovery. Most similar to our work, Liu et al. (2017) converted zero pronoun resolution as a machine reading comprehension task (Rajpurkar et al., 2016) in order to automatically construct a large-scale pseudo dataset for model pretraining. However, their model finetuning and evaluation with benchmark data still rely on human-annotated trees and gold zero pronoun positions. As a result, it is still uncertain what performance a model can achieve without such gold inputs. We address both issues in the joint task. Our work is inspired by the recent advances of heterogeneous multi-task learning using BERT (Devlin et al., 2019), which combines the supervised data of several related tasks to achieve further improvements. In particular, Liu et al. (2019) utilize this framework to jointly solve GLUE tasks (Wang et al., 2019). But their experiments show that multitask learning does not help across all tasks. Our work takes a similar spirit, and our contribution is mainly on the zero pronoun tasks. In addition, we find that it helps the robustness of multi-task learning to add a common sub-task (e.g. zero pronoun detection in our case) for additional supervision and alleviating annotation variances, if such a subtask is available. 3 Model As shown in Figure 2, we model ZP recovery (frec), ZP resolution (fres), and the auxiliary ZP detection (fdet) task with multi-task learning, where BERT (Devlin et al., 2019) is used to represent each input sentence s1 . . . sN of N words to provide shared features. 3.1 Zero pronoun recovery ZP recovery is to restore any dropped pronouns for an input text. Since pronouns are enumerable (e.g. there are 10 types for Chinese), we cast this task into a classification problem for each word space. Taking some shared input representations 5431 BERT ... ... ... ... Figure 2: Model framework. h0, h1, . . . , hN, the probability for recovering pronoun pi at the space between si−1 and si is: p(pi|X, i) = softmax(W rhi + br) (1) where W r and br are model parameters. 3.2 Zero pronoun resolution Our zero pronoun resolution task is to predict the span that each dropped pronoun points to, while the gold ZP positions are not available. One potential solution is executing zero pronoun recovery first and utilize that information, while this introduces error propagation. Conversely, we manually assign span “(0,0)” for non-ZP positions. This will not introduce conflicts, as position “0” corresponds to the special token [CLS] for BERT encoding and thus no real spans can be “(0,0)”. We cast the resolution task for each word space (such as between si−1 and si) as machine reading comprehension (MRC) (Rajpurkar et al., 2016), where a resolution span corresponds to a MRC target answer. Following previous work on MRC, we separately model the start (rst i ) and end (red i ) positions for each span with self-attention: p(rst i |X, i) = SelfAttnst(H, hi) p(red i |X, i) = SelfAttned(H, hi) (2) where H = [h0, . . . , hN] is the concatenation of all word states, and SelfAttnst() and SelfAttned() are the self-attention modules for predicting the start and end positions of each ZP resolution span. The probability for the whole span ri is: p(ri|X, i) = p(rst i |X, i)p(red i |X, i) (3) 3.3 Auxiliary task: zero pronoun detection We also introduce pronoun detection as an auxiliary task to enhance multi-task training. This task is to determine whether each word space has a dropped pronoun. Similar with zero pronoun recovery, we formulate it as binary classification: p(di|X, i) = softmax(W dhi + bd) (4) where di is the binary detection result. W d and bd are model parameters. 3.4 Encoding input with BERT Given an input sentence s1, . . . , sN, we use BERT to encode them into a sequence of input features shared across all our tasks. We append the [CLS] token to inputs, before sending them to BERT. Our task features are represented as h0, h1, . . . , hN, where h0 corresponds to token [CLS]. 3.5 Training We train our model on the combined and shuffled data of both tasks to leverage more supervision signals. Each data instance only contains the annotation of either ZP recovery or resolution, thus the loss for one example is defined as: loss = − X i21..N ⇣ ↵log p(pi|X, i) −β log p(ri|X, i) −γ log p(di|X, i) ⌘ (5) where ↵, β and γ are the coefficients for the tasks. For ↵and β, the value of is 1 if the corresponding supervision exists, otherwise it is 0. We empirically set the value of γ to 0.1, as the supervision of ZP detection exists for all instances, and we do not want this auxiliary loss signal to be too strong. 4 Experiments We study the effectiveness of jointly modeling ZP resolution, recovery and detection. 4.1 Data and setting We take two benchmark datasets: BaiduZhidao (Zhang et al., 2016), a benchmark for ZP recovery, and OntoNotes 5.0, a benchmark for ZP resolution. For BaiduZhidao, we use the version cleaned by Yang et al. (2019), containing 5504, 1175 and 1178 instances for training, development and testing, respectively. OntoNotes 5.0 has 36487 training and 6083 testing instances, and we separate 20% training instances for development. 5432 Model OntoNotes 5.0 (RES) BaiduZhidao (REC) Avg. F1 P R F P R F ZPMN (Yin et al., 2017) 18.5 29.3 22.7 – – – – NDPR-W (Yang et al., 2019) – – – 38.60 50.12 43.36 – BERT 26.87 22.43 24.45 43.50 47.30 45.32 34.89 BERT-MTL 24.55 25.49 25.01 41.63 48.22 44.68 34.85 BERT-MTL w/ detection 30.96 22.51 26.07 46.09 47.54 46.81 36.44 Table 1: Main results for ZP resolution and recovery, where RES and REC are short for resolution and recovery. Model P R F Gold Tree + Gold ZP ZPMN (Yin et al., 2017) 55.1 54.8 54.9 AttentionZP (Yin et al., 2018b) – – 57.3 Our model 59.40 57.61 58.49 Gold Tree + Auto ZP ZPMN (Yin et al., 2017) 31.1 39.4 34.8 Our model 42.56 32.03 36.55 Table 2: ZP resolution with gold trees and ZP positions. Method Auto Tree + Auto ZP P R F Our model 30.96 22.51 26.07 w/ auto tree cons. 36.13 32.32 34.12 Table 3: Resolution using automatic trees as constraint. We choose the official pretrained Chinese BERTbase model2. Models are trained with Adam (Kingma and Ba, 2014) with a learning rate of 10−5 and a warm-up proportion of 10%. To avoid overfitting, we apply l2 norm for BERT parameters with a coefficient of 0.01. Models are selected by early stopping with development results. 4.2 Main results Table 1 shows the results for both resolution and recovery tasks, where ZPMN and NDPR-W show the state-of-the-art performances without relying on any gold syntactic information. ZPMN treats zero pronoun resolution as a classification task over noun phrase candidates, and the final result is selected using an attention mechanism. NDPR-W studies zero pronoun recovery in dialogues by modeling all dialogue history. For our models, BERT represents finetuning BERT only on one task, BERT-MTL means jointly finetuning BERT on both tasks with multi-task learning (as shown in Figure 2), and BERT-MTL w/ detection is our model with auxiliary detection loss. Using BERT already gives us much better performances than the previous state-of-the-art results. Initial usage of heterogeneous multi-task learning helps ZP resolution, while hurting ZP recovery, 2https://github.com/google-research/bert and one potential reason is that the ZP resolution dataset (OntoNotes 5.0) has much more instances than the ZP recovery dataset (BaiduZhidao). This problem is alleviated by introducing the auxiliary ZP detection task due to the following possible reasons. Most importantly, ZP detection is very close to ZP recovery (binary vs multi-class), thus this extra supervision helps to alleviate the data magnitude imbalance problem. Besides, ZP detection introduces more useful training signals to the overall training process. 4.3 More analysis on ZP resolution We also evaluate on other previously studied settings, where gold trees or even gold ZP positions are given. As ZPMN also reported strong performances cross these settings, we take this model as a baseline for comparison. Using gold trees and ZP positions Since most previous work on ZP resolution uses gold syntactic trees and/or ZP positions, we also investigate our performance under these settings. In particular, we take the noun phrases and/or ZP positions from gold trees to serve as constraints. Besides, our model is only trained on the ZP positions when they are given. Table 2 shows the results, AttentionZP gives the previous state-of-the-art performance under the Gold Tree + Gold ZP setting. Our model outperforms AttentionZP by a significant margin. Beside, we also report the best performance, which significantly outperforms the previous best system (ZPMN) under the Gold Tree + Auto ZP setting, where only gold trees are available. Effectiveness of automatic trees Currently, our model considers all free spans when making a resolution decision. Using automatic tree can greatly limit the search space, while that could introduce errors. We conduct a preliminary comparison as shown in Table 3, where such a constraint dramatically helps the performance. But, this is based on the assumption that the target-domain syntactic parsing is very accurate, as our ZP resolution data (OntoNotes 5.0) is mostly collected from 5433 broadcast news. The F1 score using automatic trees (34.12) is close to the score using gold trees (36.55 in Table 2), which also indicates the conjecture above. As a result, we may expect a performance drop for web and biomedical domains, where the parsing accuracies are much lower. 5 Conclusion We studied the effectiveness of jointly modeling ZP recovery and resolution using the recently introduced multi-task learning + BERT framework. To alleviate the data magnitude imbalance problem, we introduce ZP detection as a common auxiliary sub-task for extra supervision. Experiments on two benchmarks show that our model is consistently better than previous results under various settings, and that the auxiliary ZP detection sub-task can make the training process more robust. References Chen Chen and Vincent Ng. 2013. Chinese zero pronoun resolution: Some recent advances. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1360–1365. Chen Chen and Vincent Ng. 2016. Chinese zero pronoun resolution with deep neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778–788. Henry Y Chen, Ethan Zhou, and Jinho D Choi. 2017. Robust coreference resolution and entity linking on dialogues: Character identification on tv show transcripts. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 216–225. Yu-Hsin Chen and Jinho D Choi. 2016. Character identification on multiparty conversation: Identifying mentions of characters in tv shows. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 90– 100. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174–2184. Tagyoung Chung and Daniel Gildea. 2010. Effects of empty categories on machine translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 636– 645. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Ryu Iida and Massimo Poesio. 2011. A cross-lingual ilp solution to zero anaphora resolution. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 804–813. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Fang Kong and Guodong Zhou. 2010. A tree kernelbased unified framework for chinese zero anaphora resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 882–891. Ting Liu, Yiming Cui, Qingyu Yin, Wei-Nan Zhang, Shijin Wang, and Guoping Hu. 2017. Generating and exploiting large-scale pseudo training data for zero pronoun resolution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 102–111. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 4487–4496. Hiromi Nakaiwa and Satoshi Shirai. 1996. Anaphora resolution of japanese zero pronouns with deictic reference. In Proceedings of the 16th conference on Computational linguistics-Volume 2, pages 812–817. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Sudha Rao, Allyson Ettinger, Hal Daum´e III, and Philip Resnik. 2015. Dialogue focus tracking for zero pronoun resolution. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 494–503. Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249–266. Gabi Rolih. 2018. Applying coreference resolution for usage in dialog systems. 5434 Ryohei Sasano, Daisuke Kawahara, and Sadao Kurohashi. 2008. A fully-lexicalized probabilistic model for japanese zero anaphora resolution. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 769–776. Association for Computational Linguistics. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge data set and models for dialogue-based reading comprehension. Transactions of the Association for Computational Linguistics, 7:217–231. Shunsuke Takeno, Masaaki Nagata, and Kazuhide Yamamoto. 2016. Integrating empty category detection into preordering machine translation. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016), pages 157–165. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. the Proceedings of ICLR. Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, and Qun Liu. 2016. A novel approach to dropped pronoun translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 983–993. Bing Xiang, Xiaoqiang Luo, and Bowen Zhou. 2013. Enlisting the ghost: Modeling empty categories for machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 822– 831. Jingxuan Yang, Jianzhuo Tong, Si Li, Sheng Gao, Jun Guo, and Nianwen Xue. 2019. Recovering dropped pronouns in chinese conversations via modeling their referents. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 892–901. Yaqin Yang, Yalin Liu, and Nianwen Xue. 2015. Recovering dropped pronouns from chinese text messages. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 309–313. Qingyu Yin, Yu Zhang, Weinan Zhang, and Ting Liu. 2017. Chinese zero pronoun resolution with deep memory network. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1309–1318. Qingyu Yin, Yu Zhang, Weinan Zhang, Ting Liu, and William Yang Wang. 2018a. Deep reinforcement learning for chinese zero pronoun resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 569–578. Qingyu Yin, Yu Zhang, Weinan Zhang, Ting Liu, and William Yang Wang. 2018b. Zero pronoun resolution with attention-based neural network. In Proceedings of the 27th International Conference on Computational Linguistics, pages 13–23. Wei-Nan Zhang, Ting Liu, Qingyu Yin, and Yu Zhang. 2016. Neural recovery machine for chinese dropped pronoun. arXiv preprint arXiv:1605.02134.
2020
482
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5435–5442 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5435 Contextualizing Hate Speech Classifiers with Post-hoc Explanation Brendan Kennedy∗and Xisen Jin∗and Aida Mostafazadeh Davani Morteza Dehghani and Xiang Ren University of Southern California {btkenned,xisenjin,mostafaz,mdehghan,xiangren}@usc.edu Abstract Hate speech classifiers trained on imbalanced datasets struggle to determine if group identifiers like “gay” or “black” are used in offensive or prejudiced ways. Such biases manifest in false positives when these identifiers are present, due to models’ inability to learn the contexts which constitute a hateful usage of identifiers. We extract post-hoc explanations from fine-tuned BERT classifiers to detect bias towards identity terms. Then, we propose a novel regularization technique based on these explanations that encourages models to learn from the context of group identifiers in addition to the identifiers themselves. Our approach improved over baselines in limiting false positives on out-of-domain data while maintaining or improving in-domain performance.† 1 Introduction Hate speech detection is part of the ongoing effort to limit the harm done by oppressive and abusive language (Waldron, 2012; Gelber and McNamara, 2016; Gagliardone et al., 2015; Mohan et al., 2017). Performance has improved with access to more data and more sophisticated algorithms (e.g., Mondal et al., 2017; Silva et al., 2016; Del Vigna12 et al., 2017; Basile et al., 2019), but the relative sparsity of hate speech requires sampling using keywords (e.g., Olteanu et al., 2018) or sampling from environments with unusually high rates of hate speech (e.g., de Gibert et al., 2018; Hoover et al., 2019). Modern text classifiers thus struggle to learn a model of hate speech that generalizes to real-world applications (Wiegand et al., 2019). A specific problem found in neural hate speech classifiers is their over-sensitivity to group identifiers like “Muslim”, “gay”, and “black”, which are only hate speech when combined with the right ∗Authors contributed equally † Code is available here “[F]or many Africans, the most threatening kind of ethnic hatred is black against black.” - New York Times “There is a great discrepancy between whites and blacks in SA. It is … [because] blacks will always be the most backward race in the world.” Anonymous user, Gab.com Figure 1: Two documents which are classified as hate speech by a fine-tuned BERT classifier. Group identifiers are underlined. context (Dixon et al., 2018). In Figure 1 we see two documents containing the word “black” that a finetuned BERT model predicted to be hate speech, while only the second occurs in a hateful context. Neural text classifiers achieve state-of-the-art performance in hate speech detection, but are uninterpretable and can break when presented with unexpected inputs (Niven and Kao, 2019). It is thus difficult to contextualize a model’s treatment of identifier words. Our approach to this problem is to use the Sampling and Occlusion (SOC) explanation algorithm, which estimates model-agnostic, posthoc feature importance (Jin et al., 2020). We apply this approach to the Gab Hate Corpus (Kennedy et al., 2020), a new corpus labeled for “hate-based rhetoric”, and an annotated corpus from the Stormfront white supremacist online forum (de Gibert et al., 2018). Based on the explanations generated via SOC, which showed models were biased towards group identifiers, we then propose a novel regularizationbased approach in order to increase model sensitivity to the context surrounding group identifiers. We apply regularization during training to the explanation-based importance of group identifiers, coercing models to consider the context surrounding them. We find that regularization reduces the attention given to group identifiers and heightens the importance of the more generalizable features of hate speech, such as dehumanizing and insulting language. In experiments on an out-of-domain test set 5436 of news articles containing group identifiers, which are heuristically assumed to be non-hate speech, we find that regularization greatly reduces the false positive rate, while in-domain, out-of-sample classification performance is either maintained or improved. 2 Related Work Our work is conceptually influenced by Warner and Hirschberg (2012), who formulated hate speech detection as disambiguating the use of offensive words from abusive versus non-abusive contexts. More recent approaches applied to a wide typology of hate speech (Waseem et al., 2017), build supervised models trained on annotated (e.g., Waseem and Hovy, 2016; de Gibert et al., 2018) or heuristically-labeled (Wulczyn et al., 2017; Olteanu et al., 2018) data. These models suffer from the highly skewed distributions of language in these datasets (Wiegand et al., 2019). Research on bias in classification models also influences this work. Dixon et al. (2018) measured and mitigated bias in toxicity classifiers towards social groups, avoiding undesirable predictions of toxicity towards innocuous sentences containing tokens like “gay”. Similarly, annotators’ biases towards certain social groups were found to be magnified during classifier training Mostafazadeh Davani et al. (2020). Specifically within the domain of hate speech and abusive language, Park et al. (2018) and Sap et al. (2019) have defined and studied genderand racial-bias, emphasizing issues of undetected dialect variation and imbalanced training data, respectively. Techniques for bias reduction in these settings include data augmentation by training on less biased data, term swapping during training (i.e., swapping gender words), and using debiased word embeddings (Bolukbasi et al., 2016). Complementing these works, we directly manipulate models’ modeling of the context surrounding identifier terms by regularizing explanations of these terms. Specifically, we use post-hoc explanation algorithms to interpret and modulate finetuned language models like BERT (Devlin et al., 2018), which achieve state of the art performance on many hate speech detection tasks (MacAvaney et al., 2019; Mandl et al., 2019). We focus on post-hoc explanation approaches, which interpret model predictions without elucidating the mechanisms by which the model works (Guidotti et al., 2019). These explanations reveal either wordlevel (Ribeiro et al., 2016; Sundararajan et al., 2017) or phrase-level importance (Murdoch et al., 2018; Singh et al., 2019) of inputs to predictions. 3 Data We selected two public corpora for our experiments which highlight the rhetorical aspects of hate speech, versus merely the usage of slurs and explicitly offensive language (see Davidson et al., 2017). The “Gab Hate Corpus” (GHC; Kennedy et al., 2020) is a large, random sample (N = 27,655) from the Pushshift.io data dump of the Gab network ∗, which we have annotated according to a typology of “hate-based rhetoric”, a construct motivated by hate speech criminal codes outside the U.S. and social science research on prejudice and dehumanization. Gab is a social network with a high rate of hate speech (Zannettou et al., 2018; Lima et al., 2018) and populated by the “Alt-right” (Anthony, 2016; Benson, 2016). Similarly with respect to domain and definitions, de Gibert et al. (2018) sampled and annotated posts from the “Stormfront” web domain (Meddaugh and Kay, 2009) and annotated at the sentence level according to a similar annotation guide as used in the GHC. Train and test splits were randomly generated for Stormfront sentences (80/20) with “hate” taken as a positive binary label, and a test set was compiled from the GHC by drawing a random stratified sample with respect to the “target population” tag (possible values including race/ethnicity target, gender, religious, etc.). A single “hate” label was created by taking the union of two main labels, “human degradation” and “calls for violence”. Training data for the GHC (GHCtrain) included 24,353 posts with 2,027 labeled as hate, and test data for the GHC (GHCtest) included 1,586 posts with 372 labeled as hate. Stormfront splits resulted in 7,896 (1,059 hate) training sentences, 979 (122) validation, and 1,998 (246) test. 4 Analyzing Group Identifier Bias To establish and define our problem more quantitatively, we analyze hate speech models’ bias towards group identifiers and how this leads to false positive errors during prediction. We analyze the top features of a linear model and use post-hoc explanations applied to a fine-tuned BERT model in order to measure models’ bias towards these terms. We then establish the effect of these tendencies on ∗https://files.pushshift.io/gab/ 5437 0 10 20 # Removed Identity Terms 0.30 0.35 0.40 0.45 0.50 0.55 0.60 F1 Hate Detection Gab Stormfront 0 10 20 # Removed Identity Terms 0.76 0.78 0.80 0.82 0.84 0.86 0.88 0.90 Accuracy NYT Adversarial Figure 2: BoW F1 scores (trained on GHCtrain and evaluated on GHCtest) as a function of how many group identifiers are removed (left). Accuracy of same models on NYT dataset with no hate speech (right). model predictions using an adversarial-like dataset of New York Times articles. 4.1 Classification Models We apply our analyses on two text classifiers, logistic regression with bag of words features and a fine-tuned BERT model (Devlin et al., 2018). The BERT model appends a special CLS token at the beginning of the input sentence and feeds the sentence into stacked layers of Transformer (Vaswani et al., 2017) encoders. The representation of the CLS token at the final layer is fed into a linear layer to perform 2-way classification (hate or non-hate). Model configuration and training details can be found in the Section A.3. 4.2 Model Interpretation We first determine a model’s sensitivity towards group identifiers by examining the models themselves. Linear classifiers can be examined in terms of their most highly-weighted features. We apply a post-hoc explanation algorithm for this task of extracting similar information from the fine-tuned methods discussed above. Group identifiers in linear models From the top features in a bag-of-words logistic regression of hate speech on GHCtrain, we collected a set of twenty-five identity words (not restricted to social group terms, but terms identifying a group in general), including “homosexual”, “muslim”, and “black”, which are used in our later analyses. The full list is in Supplementals (A.1). Explanation-based measures State-of-the-art fine-tuned BERT models are able to model complicated word and phrase compositions: for example, some words are only offensive when they are composed with specific ethnic groups. To capture this, we apply a state-of-the-art Sampling and Occlusion (SOC) algorithm which is capable of generating hierarchical explanations for a prediction. To generate hierarchical explanations, SOC starts by assigning importance score for phrases in a way that eliminates compositional effect between the phrase and its context xδ around it within a window. Given a phrase p appearing in a sentence x, SOC assigns an importance score φ(p) to show how the phrase p contribute so that the sentence is classified as a hate speech. The algorithm computes the difference of the unnormalized prediction score s(x) between “hate” and “non-hate” in the 2-way classifier. Then the algorithm evaluates average change of s(x) when the phrase is masked with padding tokens (noted as x\p) for different inputs, in which the N-word contexts around the phrase p are sampled from a pretrained language model, while other words remain the same as the given x. Formally, the importance score φ(p) is measured as, φ(p) = Exδ[s(x) −s(x\p)] (1) In the meantime, SOC algorithm perform agglomerative clustering over explanations to generate a hierarchical layout. Averaged Word-level SOC Explanation Using SOC explanations output on GHCtest, we compute average word importance and present the top 20 in Table 2. 4.3 Bias in Prediction Hate speech models can be over-attentive to group identifiers, as we have seen by inspecting them through feature analysis and a post-hoc explanation approach. The effect of this during prediction is that models over-associate these terms with hate speech and choose to neglect the context around the identifier, resulting in false positives. To provide an external measure of models’ over-sensitivity to group identifiers, we construct an adversarial test set of New York Times (NYT) articles that are filtered to contain a balanced, random sample of the twenty-five group identifiers (Section A.1). This gives us 12, 500 documents which are devoid of hate speech as defined by our typologies, excepting quotation. It is key for models to not ignore identifiers, but to match them with the right context. Figure 2 shows the effect of ignoring identifiers: random 5438 There has been a rise and fall of hate against the jews hate against the jews of hate of the jews (a) BERT There has been a rise and fall of hate against the jews hate against the jews hate against of (b) BERT + SOC regularization Figure 3: Hierarchical explanations on a test instance from GHCtest before and after explanation regularization, where false positive predictions are corrected. subsets of words ranging in size from 0 to 25 are removed, with each subset sample size repeated 5 times. Decreased rates of false positives on the NYT set are accompanied by poor performance in hate speech detection. 5 Contextualizing Hate Speech Models We have shown hate speech models to be oversensitive to group identifiers and unable to learn from the context surrounding these words during training. To address this problem in state-of-the-art models, we propose that models can be regularized to give no explained importance to identifier terms. We explain our approach as well as a naive baseline based on removing these terms. Word Removal Baseline. The simplest approach is to remove group identifiers altogether. We remove words from the term list found in Section A.1 from both training and testing sentences. Explanation Regularization. Given that SOC explanations are fully differentiable, during training, we regularize SOC explanations on the group identifiers to be close to 0 in addition to the classification objective L′. The combined learning objective is written as follows. L = L′ + α X w∈x∩S [φ(w)]2, (2) where S notes for the set of group names and x notes for the input word sequence. α is a hyperparameter for the strength of the regularization. In addition to SOC, we also experiment with regularizing input occlusion (OC) explanations, defined as the prediction change when a word or phrase is masked out, which bypass the sampling step in SOC. 6 Regularization Experiments 6.1 Experiment Details Balancing performance on hate speech detection and the NYT test set is our quantitative measure of how well a model has learned the contexts in which group identifiers are used for hate speech. We apply our regularization approach to this task, and compare with a word removal strategy for the fine-tuned BERT model. We repeat the process for both the GHC and Stormfront, evaluating test set hate speech classification in-domain and accuracy on the NYT test set. For the GHC, we used the full list of 25 terms; for Stormfront, we used the 10 terms which were also found in the top predictive features in linear classifiers for the Stormfront data. Congruently, for Stormfront we filtered the NYT corpus to only contain these 10 terms (N = 5,000). 6.2 Results Performance is reported in Table 1. For the GHC, we see an improvement for in-domain hate speech classification, as well as an improvement in false positive reduction on the NYT corpus. For Stormfront, we see the same improvements for in-domain F1) and NYT. For the GHC, the most marked difference between BERT+WR and BERT+SOC is increased recall, suggesting that baseline removal largely mitigates bias towards identifiers at the cost of more false negatives. As discussed in section 4.2, SOC eliminates the compositional effects of a given word or phrase. As a result, regularizing SOC explanations does not prohibit the model from utilizing contextual information related to group identifiers. This can possibly explain the improved performance in hate speech detection relative to word removal. Word Importance in Regularized Models We determined that regularization improves a models focus on non-identifier context in prediction. In table 2 we show the changes in word importance as measured by SOC. Identity terms’ importance decreases, and we also see a significant increase in importance of terms related to hate speech (“poisoned”, “blamed”, etc.) suggesting that models have learned from the identifier terms’ context. Visualizing Effects of Regularization We can further see the effect of regularization by considering Figure 3, where hierarchically clustered expla5439 Training set GHC Stormfront Method / Metrics Precision Recall F1 NYT Acc. Precision Recall F1 NYT Acc. BoW 62.80 56.72 59.60 75.61 36.95 58.13 45.18 66.78 BERT 69.87 ± 1.7 66.83 ± 7.0 67.91 ± 3.1 77.79 ± 4.8 57.76 ± 3.9 54.43 ± 8.1 55.44 ± 2.9 92.29 ± 4.1 BoW + WR 54.65 52.15 53.37 89.72 36.24 55.69 43.91 81.34 BERT + WR 67.61 ± 2.8 60.08 ± 6.6 63.44 ± 3.1 89.78 ± 3.8 53.16 ± 4.3 57.03 ± 5.7 54.60 ± 1.7 92.47 ± 3.4 BERT + OC (α=0.1) 60.56 ± 1.8 69.72 ± 3.6 64.14 ± 3.2 89.43 ± 4.3 57.47 ± 3.7 51.10 ± 4.4 53.82 ± 1.3 95.39 ± 2.3 BERT + SOC (α=0.1) 70.17 ± 2.5 69.03 ± 3.0 69.52 ± 1.3 83.16 ± 5.0 57.29 ± 3.4 54.27 ± 3.3 55.55 ± 1.1 93.93 ± 3.6 BERT + SOC (α=1.0) 64.29 ± 3.1 69.41 ± 3.8 66.67 ± 2.5 90.06 ± 2.6 56.05 ± 3.9 54.35 ± 3.4 54.97 ± 1.1 95.40 ± 2.0 Table 1: Precision, recall, F1 (%) on GHCtest and Stormfront (Stf.) test set and accuracy (%) on NYT evaluation set. We report mean and standard deviation of the performance across 10 runs for BERT, BERT + WR (word removal), BERT + OC, and BERT + SOC. BERT ∆Rank Reg. ∆Rank ni**er +0 ni**er +0 ni**ers -7 fag +35 kike -90 traitor +38 mosques -260 faggot +5 ni**a -269 bastard +814 jews -773 blamed +294 kikes -190 alive +1013 nihon -515 prostitute +56 faggot +5 ni**ers -7 nip -314 undermine +442 islam -882 punished +491 homosexuality -1368 infection +2556 nuke -129 accusing +2408 niro -734 jaggot +8 muhammad -635 poisoned +357 faggots -128 shitskin +62 nitrous -597 ought +229 mexican -51 rotting +358 negro -346 stayed +5606 muslim -1855 destroys +1448 Table 2: Top 20 words by mean SOC weight before (BERT) and after (Reg.) regularization for GHC. Changes in the rank of importance as a result of regularization are also shown. Curated set of group identifiers are highlighted. nations from SOC are visualized before and after regularization, correcting a false positive. 7 Conclusion & Future Work Regularizing SOC explanations of group identifiers tunes hate speech classifiers to be more contextsensitive and less reliant on high-frequency words in imbalanced training sets. Complementing prior work in bias detection and removal in the context of hate speech and in other settings, our method is directly integrated into Transformer-based models and does not rely on data augmentation. As such, it is an encouraging technique towards directing models’ internal representation of target phenomena via lexical anchors. Future work includes direct extension and validation of this technique with other language models such as GPT-2 (Radford et al., 2019); experimenting with other hate speech or offensive language datasets; and experimenting with these and other sets of identity terms. Also motivated by the present work is the more general pursuit of integrating structure into neural models like BERT. Regularized hate speech classifiers increases sensitivity to the compositionality of hate speech, but the phenomena remain highly complex rhetorically and difficult to learn through supervision. For example, this post from the GHC requires background information and reasoning across sentences in order to classify as offensive or prejudiced: “Donald Trump received much criticism for referring to Haiti, El Salvador and Africa as ‘shitholes’. He was simply speaking the truth.” The examples we presented (see Appendix 4 and 5) show that regularization leads to models that are context-sensitive to a degree, but not to the extent of reasoning over sentences like those above. We hope that the present work can motivate more attempts to inject more structure into hate speech classification. Explanation algorithms offer a window into complex predictive models, and regularization as performed in this work can improve models’ internal representations of target phenomena. In this work, we effectively applied this technique to hate speech classifiers biased towards group identifiers; future work can determine the effectiveness and further potential for this technique in other tasks and contexts. Acknowledgments This research was sponsored in part by NSF CAREER BCS-1846531 (Morteza Dehghani). Xiang Ren’s research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 201919051600007, United States Office Of Naval Research under Contract No. N660011924033, and NSF SMA 18-29268. 5440 References Andrew Anthony. 2016. Inside the hate-filled echo chamber of racism and conspiracy theories. The guardian, 18. Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54–63. Thor Benson. 2016. Inside the twitter for racists: Gab the site where milo yiannopoulos goes to troll now. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in neural information processing systems, pages 4349–4357. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Eleventh international AAAI conference on web and social media. Fabio Del Vigna12, Andrea Cimino23, Felice DellOrletta, Marinella Petrocchi, and Maurizio Tesconi. 2017. Hate me, hate me not: Hate speech detection on facebook. In Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), pages 86–95. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67–73. ACM. Iginio Gagliardone, Danit Gal, Thiago Alves, and Gabriela Martinez. 2015. Countering online hate speech. Unesco Publishing. Katharine Gelber and Luke McNamara. 2016. Evidencing the harms of hate speech. Social Identities, 22(3):324–341. Ona de Gibert, Naiara Perez, Aitor Garc´ıa Pablos, and Montse Cuadros. 2018. Hate speech dataset from a white supremacy forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 11–20. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2019. A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5):93. Joseph Hoover, Mohammad Atari, Aida Mostafazadeh Davani, Brendan Kennedy, Gwenyth PortilloWightman, Leigh Yeh, Drew Kogon, and Morteza Dehghani. 2019. Bound in hatred: The role of group-based morality in acts of hate. PsyArxiv Preprint 10.31234/osf.io/359me. Xisen Jin, Zhongyu Wei, Junyi Du, Xiangyang Xue, and Xiang Ren. 2020. Towards hierarchical importance attribution: Explaining compositional semantics for neural sequence models. In International Conference on Learning Representations. Brendan Kennedy, Mohammad Atari, Aida M Davani, Leigh Yeh, Ali Omrani, Yehsong Kim, Kris Coombs Jr., Shreya Havaldar, Gwenyth PortilloWightman, Elaine Gonzalez, Joe Hoover, Aida Azatian, Gabriel Cardenas, Alyzeh Hussain, Austin Lara, Adam Omary, Christina Park, Xin Wang, Clarisa Wijaya, Yong Zhang, Beth Meyerowitz, and Morteza Dehghani. 2020. The gab hate corpus: A collection of 27k posts annotated for hate speech. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Lucas Lima, Julio CS Reis, Philipe Melo, Fabricio Murai, Leandro Araujo, Pantelis Vikatos, and Fabricio Benevenuto. 2018. Inside the right-leaning echo chambers: Characterizing gab, an unmoderated social system. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 515–522. IEEE. Sean MacAvaney, Hao-Ren Yao, Eugene Yang, Katina Russell, Nazli Goharian, and Ophir Frieder. 2019. Hate speech detection: Challenges and solutions. PloS one, 14(8). Thomas Mandl, Sandip Modha, Prasenjit Majumder, Daksh Patel, Mohana Dave, Chintak Mandlia, and Aditya Patel. 2019. Overview of the hasoc track at fire 2019: Hate speech and offensive content identification in indo-european languages. In Proceedings of the 11th Forum for Information Retrieval Evaluation, pages 14–17. Priscilla Marie Meddaugh and Jack Kay. 2009. Hate speech or “reasonable racism?” the other in stormfront. Journal of Mass Media Ethics, 24(4):251– 268. Shruthi Mohan, Apala Guha, Michael Harris, Fred Popowich, Ashley Schuster, and Chris Priebe. 2017. The impact of toxic language on the health of reddit communities. In Canadian Conference on Artificial Intelligence, pages 51–56. Springer. Mainack Mondal, Leandro Ara´ujo Silva, and Fabr´ıcio Benevenuto. 2017. A measurement study of hate speech in social media. In Proceedings of the 28th ACM Conference on Hypertext and Social Media, pages 85–94. ACM. 5441 Aida Mostafazadeh Davani, Mohammad Atari, Brendan Kennedy, Shreya Havaldar, and Morteza Dehghani. 2020. Hatred is in the eye of the annotator: Hate speech classifiers learn human-like social stereotypes (in press). In 31st Annual Conference of the Cognitive Science Society (CogSci). W. James Murdoch, Peter J. Liu, and Bin Yu. 2018. Beyond word importance: Contextual decomposition to extract interactions from LSTMs. In International Conference on Learning Representations. Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658–4664. Alexandra Olteanu, Carlos Castillo, Jeremy Boy, and Kush R Varshney. 2018. The effect of extremist violence on hateful speech online. In Twelfth International AAAI Conference on Web and Social Media. Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2799–2804. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Open AI Blog. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. ACM. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678. Leandro Silva, Mainack Mondal, Denzil Correa, Fabr´ıcio Benevenuto, and Ingmar Weber. 2016. Analyzing the targets of hate in online social media. In Tenth International AAAI Conference on Web and Social Media. Chandan Singh, W. James Murdoch, and Bin Yu. 2019. Hierarchical interpretations for neural network predictions. In International Conference on Learning Representations. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3319–3328. JMLR. org. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Jeremy Waldron. 2012. The harm in hate speech. Harvard University Press. William Warner and Julia Hirschberg. 2012. Detecting hate speech on the world wide web. In Proceedings of the second workshop on language in social media, pages 19–26. Association for Computational Linguistics. Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78–84. Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL student research workshop, pages 88–93. Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of abusive language: the problem of biased datasets. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602–608. Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of the 26th International Conference on World Wide Web, pages 1391–1399. International World Wide Web Conferences Steering Committee. Savvas Zannettou, Barry Bradlyn, Emiliano De Cristofaro, Haewoon Kwak, Michael Sirivianos, Gianluca Stringini, and Jeremy Blackburn. 2018. What is gab: A bastion of free speech or an alt-right echo chamber. In Companion Proceedings of the The Web Conference 2018, pages 1007–1014. International World Wide Web Conferences Steering Committee. 5442 A Appendices A.1 Full List of Curated Group Identifiers muslim jew jews white islam blacks muslims women whites gay black democat islamic allah jewish lesbian transgender race brown woman mexican religion homosexual homosexuality africans Table 3: 25 group identifiers selected from top weighted words in the TF-IDF BOW linear classifier on the GHC. jew jews mexican blacks jewish brown black muslim homosexual islam Table 4: 10 group identifiers selected for the Stormfront dataset. A.2 Visualizations of Effect of Regularization ‘… truth behind them, ’ said one muslim shop owner shop owner muslim one said one muslim shop owner (a) BERT ‘… truth behind them, ’ said one muslim shop owner shop owner muslim one said said one muslim (b) BERT + SOC regularization Figure 4: Hierarchical explanations on a test instance from the NYT dataset where false positive predictions are corrected. A.3 Implementation Details Training Details. We fine-tune over the BERTbase model using the public code†, where the batch size is set to 32 and the learning rate of the Adam (Kingma and Ba, 2015) optimizer is set to 2 × 10−5. The validation is performed every 200 iterations and the learning rate is halved when the validation F1 decreases. The training stops when the learning rate is halved for 5 times. To handle the data imbalance issue, we reweight the training loss so that positive examples are weighted 10 † https://github.com/huggingface/ transformers The jews are just evil money lenders just money are jews The evil lenders The jews are (a) BERT The jews are just evil money lenders just money are jews The just evil evil lenders The jews (b) BERT + SOC regularization Figure 5: Hierarchical explanations on a test instance from the Gab dataset where both models make correct positive predictions. However, the explanations reveal that only the regularized model is making correct predictions for correct reasons. times as negative examples on the Gab dataset and 8 times on the Stormfront dataset. Explanation Algorithm Details. For the SOC algorithm, we set the number of samples and the size of the context window as 20 and 20 respectively for explanation analysis, and set two parameters as 5 and 5 respectively for explanation regularization. A.4 Cross-Domain Performance In addition to evaluating each model within-domain (i.e., training on GHCtrain and evaluating on GHCtest) we evaluated each model across domains. The results of these experiments, conducted in the same way as before, are presented in Table 5. Method / Dataset Gab →Stf. F1 Stf. →Gab F1 BoW 32.39 46.71 BERT 42.84 ± 1.2 53.80 ± 5.5 BoW + WR 27.45 44.81 BERT + WR 39.10 ± 1.3 55.31 ± 4.0 BERT + OC (α=0.1) 40.60 ± 1.6 56.90 ± 1.8 BERT + SOC (α=0.1) 41.88 ± 1.0 55.75 ± 2.1 BERT + SOC (α=1.0) 39.20 ± 2.7 56.82 ± 3.9 Table 5: Cross domain F1 on Gab, Stormfront (Stf.) datasets. We report mean and standard deviation of the performance within 10 runs for BERT, BERT + WR (word removal), BERT + OC, and BERT + SOC.
2020
483
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5443–5453 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5443 Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation Tianlu Wang1∗ Xi Victoria Lin2 Nazneen Fatema Rajani2 Bryan McCann2 Vicente Ordonez1 Caiming Xiong2 1University of Virginia {tw8cb, vicente}@virginia.edu 2Salesforce Research {xilin, nazneen.rajani, bmccann, cxiong}@salesforce.com Abstract Word embeddings derived from humangenerated corpora inherit strong gender bias which can be further amplified by downstream models. Some commonly adopted debiasing approaches, including the seminal Hard Debias algorithm (Bolukbasi et al., 2016), apply post-processing procedures that project pre-trained word embeddings into a subspace orthogonal to an inferred gender subspace. We discover that semantic-agnostic corpus regularities such as word frequency captured by the word embeddings negatively impact the performance of these algorithms. We propose a simple but effective technique, Double-Hard Debias, which purifies the word embeddings against such corpus regularities prior to inferring and removing the gender subspace. Experiments on three bias mitigation benchmarks show that our approach preserves the distributional semantics of the pre-trained word embeddings while reducing gender bias to a significantly larger degree than prior approaches. 1 Introduction Despite widespread use in natural language processing (NLP) tasks, word embeddings have been criticized for inheriting unintended gender bias from training corpora. Bolukbasi et al. (2016) highlights that in word2vec embeddings trained on the Google News dataset (Mikolov et al., 2013a), “programmer” is more closely associated with “man” and “homemaker” is more closely associated with “woman”. Such gender bias also propagates to downstream tasks. Studies have shown that coreference resolution systems exhibit gender bias in predictions due to the use of biased word embeddings (Zhao et al., 2018a; Rudinger et al., 2018). Given the fact that pre-trained word embeddings ∗This research was conducted during the author’s internship at Salesforce Research. have been integrated into a vast number of NLP models, it is important to debias word embeddings to prevent discrimination in NLP systems. To mitigate gender bias, prior work have proposed to remove the gender component from pre-trained word embeddings through postprocessing (Bolukbasi et al., 2016), or to compress the gender information into a few dimensions of the embedding space using a modified training scheme (Zhao et al., 2018b; Kaneko and Bollegala, 2019). We focus on post-hoc gender bias mitigation for two reasons: 1) debiasing via a new training approach is more computationally expensive; and 2) pre-trained biased word embeddings have already been extensively adopted in downstream NLP products and post-hoc bias mitigation presumably leads to less changes in the model pipeline since it keeps the core components of the original embeddings. Existing post-processing algorithms, including the seminal Hard Debias (Bolukbasi et al., 2016), debias embeddings by removing the component that corresponds to a gender direction as defined by a list of gendered words. While Bolukbasi et al. (2016) demonstrates that such methods alleviate gender bias in word analogy tasks, Gonen and Goldberg (2019) argue that the effectiveness of these efforts is limited, as the gender bias can still be recovered from the geomrtry of the debiased embeddings. We hypothesize that it is difficult to isolate the gender component of word embeddings in the manner employed by existing post-processing methods. For example, Gong et al. (2018); Mu and Viswanath (2018) show that word frequency significantly impact the geometry of word embeddings. Consequently, popular words and rare words cluster in different subregions of the embedding space, despite the fact that words in these clusters are not semantically similar. This can degrade the ability of component-based methods for debiasing gender. 5444 (a) Change the frequency of “boy”. (b) Change the frequency of “daughter”. Figure 1: ∆of cosine similarities between gender difference vectors before / after adjusting the frequency of word w. When the frequency of w changes, the cosine similarities between the gender difference vector (−→v ) for w and other gender difference vectors exhibits a large change. This demonstrates that frequency statistics for w have a strong influence on the the gender direction represented by −→v . Specifically, recall that Hard Debias seeks to remove the component of the embeddings corresponding to the gender direction. The important assumption made by Hard Debias is that we can effectively identify and isolate this gender direction. However, we posit that word frequency in the training corpora can twist the gender direction and limit the effectiveness of Hard Debias. To this end, we propose a novel debiasing algorithm called Double-Hard Debias that builds upon the existing Hard Debias technique. It consists of two steps. First, we project word embeddings into an intermediate subspace by subtracting component(s) related to word frequency. This mitigates the impact of frequency on the gender direction. Then we apply Hard Debias to these purified embeddings to mitigate gender bias. Mu and Viswanath (2018) showed that typically more than one dominant directions in the embedding space encode frequency features. We test the effect of each dominant direction on the debiasing performance and only remove the one(s) that demonstrated the most impact. We evaluate our proposed debiasing method using a wide range of evaluation techniques. According to both representation level evaluation (WEAT test (Caliskan et al., 2017), the neighborhood metric (Gonen and Goldberg, 2019)) and downstream task evaluation (coreference resolution (Zhao et al., 2018a)), Double-Hard Debias outperforms all previous debiasing methods. We also evaluate the functionality of debiased embeddings on several benchmark datasets to demonstrate that DoubleHard Debias effectively mitigates gender bias without sacrificing the quality of word embeddings1. 2 Motivation Current post-hoc debiasing methods attempt to reduce gender bias in word embeddings by subtracting the component associated with gender from them. Identifying the gender direction in the word embedding space requires a set of gender word pairs, P, which consists of “she & he”, “daughter & son”, etc. For every pair, for example “boy & girl”, the difference vector of the two embeddings is expected to approximately capture the gender direction: −→v boy,girl = −→ w boy −−→ w girl (1) Bolukbasi et al. (2016) computes the first principal component of ten such difference vectors and use that to define the gender direction.2 Recent works (Mu and Viswanath, 2018; Gong et al., 2018) show that word frequency in a training 1Code and data are available at https://github. com/uvavision/Double-Hard-Debias.git 2The complete definition of P is: “woman & man”, “girl & boy”, “she & he”, “mother & father”, “daughter & son”, “gal & guy”, “female & male”, “her & his”, “herself & himself”, and “Mary & John” (Bolukbasi et al., 2016). 5445 corpus can degrade the quality of word embeddings. By carefully removing such frequency features, existing word embeddings can achieve higher performance on several benchmarks after fine-tuning. We hypothesize that such word frequency statistics also interferes with the components of the word embeddings associated with gender. In other words, frequency-based features learned by word embedding algorithms act as harmful noise in the previously proposed debiasing techniques. To verify this, we first retrain GloVe (Pennington et al., 2014) embeddings on the one billion English word benchmark (Chelba et al., 2013) following previous work (Zhao et al., 2018b; Kaneko and Bollegala, 2019). We obtain ten difference vectors for the gendered pairs in P and compute pairwise cosine similarity. This gives a similarity matrix S in which Spi,pj denotes the cosine similarity between difference vectors −→v pairi and −→v pairj. We then select a specific word pair, e.g. “boy” & “girl”, and augment the corpus by sampling sentences containing the word “boy” twice. In this way, we produce a new training corpus with altered word frequency statistics for “boy”. The context around the token remains the same so that changes to the other components are negligible. We retrain GloVe with this augmented corpus and get a set of new offset vectors for the gendered pairs P. We also compute a second similarity matrix S′ where S′ pi,pj denotes the cosine similarity between difference vectors −→v ′ pairi and −→v ′ pairj. By comparing these two similarity matrices, we analyze the effect of changing word frequency statistics on gender direction. Note that the offset vectors are designed for approximating the gender direction, thus we focus on the changes in offset vectors. Because statistics were altered for “boy”, we focus on the difference vector −→v boy,girl and make two observations. First, the norm of −→v boy,girl has a 5.8% relative change while the norms of other difference vectors show much smaller changes. For example, the norm of −→v man,woman only changes by 1.8%. Second, the cosine similarities between −→v boy,girl and other difference vectors also show more significant change, as highlighted by the red bounding box in Figure 1a. As we can see, the frequency change of “boy” leads to deviation of the gender direction captured by −→v boy,girl. We observe similar phenomenon when we change the frequency of the word “daughter” and present these results in Figure 1b. Based on these observations, we conclude that word frequency plays an important role in gender debiasing despite being overlooked by previous works. 3 Method In this section, we first summarize the terminology that will be used throughout the rest of the paper, briefly review the Hard Debias method, and provide background on the neighborhood evaluation metric. Then we introduce our proposed method: DoubleHard Debias. 3.1 Preliminary Definitions Let W be the vocabulary of the word embeddings we aim to debias. The set of word embeddings contains a vector −→ w ∈Rn for each word w ∈ W. A subspace B is defined by k orthogonal unit vectors B = {b1, . . . , bk} ∈Rd. We denote the projection of vector v on B by vB = k X j=1 (v · bj)bj. (2) Following (Bolukbasi et al., 2016), we assume there is a set of gender neutral words N ⊂W, such as “doctor” and “teacher”, which by definition are not specific to any gender. We also assume a pre-defined set of n male-female word pairs D1, D2, . . . , Dn ⊂W, where the main difference between each pair of words captures gender. Hard Debias. The Hard Debias algorithm first identifies a subspace that captures gender bias. Let µi := X w∈Di −→ w /|Di|. (3) The bias subspace B is the first k (≥1) rows of SVD(C), where C := m X i=1 X w∈Di (−→ w −µi)T (−→ w −µi)/|Di| (4) Following the original implementation of Bolukbasi et al. (2016), we set k = 1. As a result the subspace B is simply a gender direction.3 Hard Debias then neutralizes the word embeddings by transforming each −→ w such that every word 3Bolukbasi et al. (2016) normalize all embeddings. However, we found it is unnecessary in our experiments. This is also mentioned in Ethayarajh et al. (2019) 5446 Figure 2: Clustering accuracy after projecting out D-th dominating direction and applying Hard Debias. Lower accuracy indicates less bias. w ∈N has zero projection in the gender subspace. For each word w ∈N, we re-embed −→ w : −→ w := −→ w −−→ w B (5) Neighborhood Metric. The Neighborhood Metric proposed by (Gonen and Goldberg, 2019) is a bias measurement that does not rely on any specific gender direction. To do so it looks into similarities between words. The bias of a word is the proportion of words with the same gender bias polarity among its nearest neighboring words. We selected k of the most biased male and females words according to the cosine similarity of their embedding and the gender direction computed using the word embeddings prior to bias mitigation. We use Wm and Wf to denote the male and female biased words, respectively. For wi ∈Wm, we assign a ground truth gender label gi = 0. For wi ∈Wf, gi = 1. Then we run KMeans (k = 2) to cluster the embeddings of selected words ˆgi = KMeans(−→ w i), and compute the alignment score a with respect to the assigned ground truth gender labels: a = 1 2k 2k X i=1 1[ˆgi == gi] (6) We set a = max(a, 1 −a). Thus, a value of 0.5 in this metric indicates perfectly unbiased word embeddings (i.e. the words are randomly clustered), and a value closer to 1 indicates stronger gender bias. 3.2 Double-Hard Debiasing According to Mu and Viswanath (2018), the most statistically dominant directions of word embeddings encode word frequency to a significant extent. Mu and Viswanath (2018) removes these frequency features by centralizing and subtracting components along the top D dominant directions Algorithm 1: Double-Hard Debias. Input :Word embeddings: {−→ w ∈Rd, w ∈W} Male biased words set: Wm Female biased words set: Wf 1 Sdebias = [] 2 Decentralize −→ w : µ ← 1 |V| P w∈V −→ w , for each −→ w ∈W, ˜w ←−→ w −µ; 3 Compute principal components by PCA: {u1 . . . ud} ←PCA({ ˜w, w ∈W}); 4 //discover the frequency directions 5 for i = 1 to d do 6 w′ m ←˜ wm −(uT i wm)ui; 7 w′ f ←˜ wf −(uT i wf)ui; 8 ˆwm ←HardDebias(w′ m); 9 ˆwf ←HardDebias(w′ f); 10 output = KMeans([ ˆwm ˆwf]); 11 a = eval(output, Wm, Wf); 12 Sdebias.append(a); 13 end 14 k = arg mini Sdebias; 15 // remove component on frequency direction 16 w′ ←˜w −(uT k w)uk; 17 // remove components on gender direction 18 ˆw ←HardDebias(w′); Output :Debiased word embeddings: { ˆw ∈Rd, w ∈W} from the original word embeddings. These postprocessed embedddings achieve better performance on several benchmark tasks, including word similarity, concept categorization, and word analogy. It is also suggested that setting D near d/100 provides maximum benefit, where d is the dimension of a word embedding. We speculate that most the dominant directions also affect the geometry of the gender space. To address this, we use the aforementioned clustering experiment to identify whether a direction contains frequency features that alter the gender direction. More specifically, we first pick the top biased words (500 male and 500 female) identified using the original GloVe embeddings. We then apply PCA to all their word embeddings and take the top principal components as candidate directions to drop. For every candidate direction u, we project the embeddings into a space that is orthogonal to u. In this intermediate subspace, we apply Hard Debias and get debiased embeddings. Next, we cluster the debiased embeddings of these words 5447 and compute the gender alignment accuracy (Eq. 6). This indicates whether projecting away direction u improves the debiasing performance. Algorithm 1 shows the details of our method in full. We found that for GloVe embeddings pre-trained on Wikipedia dataset, elimination of the projection along the second principal component significantly decreases the clustering accuracy. This translates to better debiasing results, as shown in Figure 2. We further demonstrate the effectiveness of our method for debaising using other evaluation metrics in Section 4. 4 Experiments In this section, we compare our proposed method with other debiasing algorithms and test the functionality of these debiased embeddings on word analogy and concept categorization task. Experimental results demonstrate that our method effectively reduces bias to a larger extent without degrading the quality of word embeddings. 4.1 Dataset We use 300-dimensional GloVe (Pennington et al., 2014) 4 embeddings pre-trained on the 2017 January dump of English Wikipedia5, containing 322, 636 unique words. To identify the gender direction, we use 10 pairs of definitional gender words compiled by (Bolukbasi et al., 2016)6. 4.2 Baselines We compare our proposed method against the following baselines: GloVe: the pre-trained GloVe embeddings on Wikipedia dataset described in 4.1. GloVe is widely used in various NLP applications. This is a nondebiased baseline for comparision. GN-GloVe: We use debiased Gender-Neutral GNGloVe embeddings released by the original authors (Zhao et al., 2018b). GN-GloVe restricts gender information in certain dimensions while neutralizing the rest dimensions. GN-GloVe(wa): We exclude the gender dimensions from GN-GloVe. This baseline tries to completely remove gender. GP-GloVe: We use debiased embeddings released by the original authors (Kaneko and Bollegala, 4Experiments on Word2Vec are included in the appendix. 5https://github.com/uclanlp/gn_glove 6https://github.com/tolga-b/debiaswe 2019). Gender-preserving Debiasing attempts to preserve non-discriminative gender information, while removing stereotypical gender bias. GP-GN-GloVe:: This baseline applies Genderpreserving Debiasing on already debaised GNGloVe embeddings. We also use debiased embeddings provided by authors. Hard-GloVe: We apply Hard Debias introduced in (Bolukbasi et al., 2016) on GloVe embeddings. Following the implementation provided by original authors, we debias netural words and preserve the gender specific words. Strong Hard-GloVe: A variant of Hard Debias where we debias all words instead of avoiding gender specific words. This seeks to entirely remove gender from GloVe embeddings. Double-Hard GloVe: We debias the pre-trained GloVe embeddings by our proposed Double-Hard Debias method. 4.3 Evaluation of Debiasing Performance We demonstrate the effectiveness of our debiasing method for downstream applications and according to general embedding level evaluations. 4.3.1 Debiasing in Downstream Applications Coreference Resolution. Coreference resolution aims at identifying noun phrases referring to the same entity. Zhao et al. (2018a) identified gender bias in modern coreference systems, e.g. “doctor” is prone to be linked to “he”. They also introduce a new benchmark dataset WinoBias, to study gender bias in coreference systems. WinoBias provides sentences following two prototypical templates. Each type of sentences can be divided into a pro-stereotype (PRO) subset and a antistereotype (ANTI) subset. In the PRO subset, gender pronouns refer to professions dominated by the same gender. For example, in sentence “The physician hired the secretary because he was overwhelmed with clients.”, “he” refers to “physician”, which is consistent with societal stereotype. On the other hand, the ANTI subset consists of same sentences, but the opposite gender pronouns. As such, “he” is replaced by “she” in the aforementioned example. The hypothesis is that gender cues may distract a coreference model. We consider a system to be gender biased if it performs better in prostereotypical scenarios than in anti-stereotypical scenarios. 5448 Embeddings OntoNotes PRO-1 ANTI-1 Avg-1 |Diff-1 | PRO-2 ANTI-2 Avg-2 |Diff-2 | GloVe 66.5 77.7 48.2 62.9 29.0 82.7 67.5 75.1 15.2 GN-GloVe 66.1 68.4 56.5 62.5 12.0 78.2 71.3 74.7 6.9 GN-GloVe(wa) 66.4 66.7 56.6 61.6 10.2 79.0 72.3 75.7 6.7 GP-GloVe 66.1 72.0 52.0 62.0 20.0 78.5 70.0 74.3 8.6 GP-GN-GloVe 66.3 70.0 54.5 62.0 15.0 79.9 70.7 75.3 9.2 Hard-GloVe 66.2 72.3 52.7 62.6 19.7 80.6 78.3 79.4 2.3 Strong Hard-GloVe 66.0 69.0 58.6 63.8 10.4 82.2 78.6 80.4 3.6 Double-Hard GloVe 66.4 66.0 58.3 62.2 7.7 85.4 84.5 85.0 0.9 Table 1: F1 score (%) of coreference systems on OntoNotes test set and WinoBias dataset. |Diff | represents the performance gap between pro-stereotype (PRO) subset and anti-stereotype (ANTI) subset. Coreference system trained on our Double-Hard GloVe embeddings has the smallest |Diff | values, suggesting less gender bias. We train an end-to-end coreference resolution model (Lee et al., 2017) with different word embeddings on OntoNotes 5.0 training set and report the performance on WinoBias dataset. Results are presented in Table1. Note that absolute performance difference (Diff) between the PRO set and ANTI set connects with gender bias. A smaller Diff value indicates a less biased coreference system. We can see that on both types of sentences in WinoBias, Double-Hard GloVe achieves the smallest Diff compared to other baselines. This demonstrates the efficacy of our method. Meanwhile, Double-Hard GloVe maintains comparable performance as GloVe on OntoNotes test set, showing that our method preserves the utility of word embeddings. It is also worth noting that by reducing gender bias, Double-Hard GloVe can significantly improve the average performance on type-2 sentences, from 75.1% (GloVe) to 85.0%. 4.3.2 Debiasing at Embedding Level The Word Embeddings Association Test (WEAT). WEAT is a permutation test used to measure the bias in word embeddins. We consider male names and females names as attribute sets and compute the differential association of two sets of target words7 and the gender attribute sets. We report effect sizes (d) and p-values (p) in Table2. The effect size is a normalized measure of how separated the two distributions are. A higher value of effect size indicates larger bias between target words with regard to gender. p-values denote if the bias is significant. A high p-value (larger than 0.05) indicates the bias is insignificant. We refer readers to Caliskan et al. (2017) for more details. 7All word lists are from Caliskan et al. (2017). Because GloVeembeddings are uncased, we use lower cased people names and replace “bill” with “tom” to avoid ambiguity. As shown in Table 2, across different target words sets, Double-Hard GloVe consistently outperforms other debiased embeddings. For Career & Family and Science & Arts, Double-Hard GloVe reaches the lowest effect size, for the latter one, Double-Hard GloVe successfully makes the bias insignificant (p-value > 0.05). Note that in WEAT test, some debiasing methods run the risk of amplifying gender bias, e.g. for Math & Arts words, the bias is significant in GN-GloVe while it is insignificant in original GloVe embeddings. Such concern does not occur in Double-Hard GloVe. Neighborhood Metric. (Gonen and Goldberg, 2019) introduces a neighborhood metric based on clustering. As described in Sec 3.1, We take the top k most biased words according to their cosine similarity with gender direction in the original GloVe embedding space8. We then run k-Means to cluster them into two clusters and compute the alignment accuracy with respect to gender, results are presented in Table 3. We recall that in this metric, a accuracy value closer to 0.5 indicates less biased word embeddings. Using the original GloVe embeddings, k-Means can accurately cluster selected words into a male group and a female group, suggesting the presence of a strong bias. Hard Debias is able to reduce bias in some degree while other baselines appear to be less effective. Double-Hard GloVe achieves the lowest accuracy across experiments clustering top 100/500/1000 biased words, demonstrating that the proposed technique effectively reduce gender bias. We also conduct tSNE (van der Maaten and Hinton, 2008) projection for all baseline embed8To be fair, we exclude all gender specific words used in debiasing, so Hard-GloVe and Strong Hard-GloVe have same acurracy performance in Table 3 5449 Embeddings Career & Family Math & Arts Science & Arts d p d p d p GloVe 1.81 0.0 0.55 0.14 0.88 0.04 GN-GloVe 1.82 0.0 1.21 6e−3 1.02 0.02 GN-GloVe(wa) 1.76 0.0 1.43 1e−3 1.02 0.02 GP-GloVe 1.81 0.0 0.87 0.04 0.91 0.03 GP-GN-GloVe 1.80 0.0 1.42 1e−3 1.04 0.01 Hard-GloVe 1.55 2e−4 0.07 0.44 0.16 0.62 Strong Hard-GloVe 1.55 2e−4 0.07 0.44 0.16 0.62 Double-Hard GloVe 1.53 2e−4 0.09 0.57 0.15 0.61 Table 2: WEAT test of embeddings before/after Debiasing. The bias is insignificant when p-value, p > 0.05. Lower effective size (d) indicates less gender bias. Significant gender bias related to Career & Family and Science & Arts words is effectively reduced by Double-Hard GloVe. Note for Math & Arts words, gender bias is insignificant in original GloVe. dings. As shown in Figure 3, original non-debiased GloVe embeddings are clearly projected to different regions. Double-Hard GloVe mixes up male and female embeddings to the maximum extent compared to other baselines, showing less gender information can be captured after debiasing. Embeddings Top 100 Top 500 Top 1000 GloVe 100.0 100.0 100.0 GN-GloVe 100.0 100.0 99.9 GN-GloVe(wa) 100.0 99.7 88.5 GP-GloVe 100.0 100.0 100.0 GP-GN-GloVe 100.0 100.0 99.4 (Strong) Hard GloVe 59.0 62.1 68.1 Double-Hard GloVe 51.5 55.5 59.5 Table 3: Clustering Accuracy (%) of top 100/500/1000 male and female words. Lower accuracy means less gender cues can be captured. Double-Hard GloVe consistently achieves the lowest accuracy. 4.4 Analysis of Retaining Word Semantics Word Analogy. Given three words A, B and C, the analogy task is to find word D such that “A is to B as C is to D”. In our experiments, D is the word that maximize the cosine similarity between D and C −A + B. We evaluate all non-debiased and debiased embeddings on the MSR (Mikolov et al., 2013c) word analogy task, which contains 8000 syntactic questions, and on a second Google word analogy (Mikolov et al., 2013a) dataset that contains 19, 544 (Total) questions, including 8, 869 semantic (Sem) and 10, 675 syntactic (Syn) questions. The evaluation metric is the percentage of questions for which the correct answer is assigned the maximum score by the algorithm. Results are shown in Table4. Double-Hard GloVe achieves comparable good results as GloVe and slightly outperforms some other debiased embeddings. This proves that Double-Hard Debias is capable of preserving proximity among words. Concept Categorization. The goal of concept categorization is to cluster a set of words into different categorical subsets. For example, “sandwich” and “hotdog” are both food and “dog” and “cat” are animals. The clustering performance is evaluated in terms of purity (Manning et al., 2008) - the fraction of the total number of the words that are correctly classified. Experiments are conducted on four benchmark datasets: the Almuhareb-Poesio (AP) dataset (Almuhareb, 2006); the ESSLLI 2008 (Baroni et al., 2008); the Battig 1969 set (Battig and Montague, 1969) and the BLESS dataset (Baroni and Lenci, 2011). We run classical Kmeans algorithm with fixed k. Across four datasets, the performance of Double-Hard GloVe is on a par with GloVe embeddings, showing that the proposed debiasing method preserves useful semantic information in word embeddings. Full results can be found in Table4. 5 Related Work Gender Bias in Word Embeddings. Word embeddings have been criticized for carrying gender bias. Bolukbasi et al. (2016) show that word2vec (Mikolov et al., 2013b) embeddings trained on the Google News dataset exhibit occupational stereotypes, e.g. “programmer” is closer to “man” and “homemaker” is closer to “woman”. More recent works (Zhao et al., 2019; Kurita et al., 2019; Basta 5450 (a) GloVe (b) GN-GloVe (c) GN-GloVe(wa) (d) GP-GloVe (e) GP-GN-GloVe (f) Hard-GloVe (g) Strong Hard-GloVe (h) Double-Hard GloVe Figure 3: tSNE visualization of top 500 most male and female embeddings. Double-Hard GloVe mixes up two groups to the maximum extent, showing less gender information is encoded. Embeddings Analogy Concept Categorization Sem Syn Total MSR AP ESSLI Battig BLESS GloVe 80.5 62.8 70.8 54.2 55.6 72.7 51.2 81.0 GN-GloVe 77.7 61.6 68.9 51.9 56.9 70.5 49.5 85.0 GN-GloVe(wa) 77.7 61.6 68.9 51.9 56.9 75.0 51.3 82.5 GP-GloVe 80.6 61.7 70.3 51.3 56.1 75.0 49.0 78.5 GP-GN-GloVe 77.7 61.7 68.9 51.8 61.1 72.7 50.9 77.5 Hard-GloVe 80.3 62.5 70.6 54.0 62.3 79.5 50.0 84.5 Strong Hard-GloVe 78.6 62.4 69.8 53.9 64.1 79.5 49.2 84.5 Double-Hard GloVe 80.9 61.6 70.4 53.8 59.6 72.7 46.7 79.5 Table 4: Results of word embeddings on word analogy and concept categorization benchmark datasets. Performance (x100) is measured in accuracy and purity, respectively. On both tasks, there is no significant degradation of performance due to applying the proposed method. et al., 2019) demonstrate that contextualized word embeddings also inherit gender bias. Gender bias in word embeddings also propagate to downstream tasks, which substantially affects predictions. Zhao et al. (2018a) show that coreference systems tend to link occupations to their stereotypical gender, e.g. linking “doctor” to “he” and “nurse” to “she”. Stanovsky et al. (2019) observe that popular industrial and academic machine translation systems are prone to gender biased translation errors. Recently, Vig et al. (2020) proposed causal mediation analysis as a way to interpret and analyze gender bias in neural models. Debiasing Word Embeddings. For contextualized embeddings, existing works propose taskspecific debiasing methods, while in this paper we focus on more generic ones. To mitigate gender bias, Zhao et al. (2018a) propose a new training approach which explicitly restricts gender information in certain dimensions during training. While this method separates gender information from embeddings, retraining word embeddings on massive corpus requires an undesirably large amount of resources. Kaneko and Bollegala (2019) tackles this problem by adopting an encoder-decoder model to re-embed word embeddings. This can be applied to existing pre-trained embeddings, but it still requires train different encoder-decoders for different embeddings. Bolukbasi et al. (2016) introduce a more simple and direct post-processing method which zeros out the component along the gender direction. This method reduces gender bias to some degree, however, Gonen and Goldberg (2019) present a series of experiments to show that they are far from delivering gender-neutral embeddings. Our work builds on top of Bolukbasi et al. (2016). We discover the important factor – word frequency – that limits the effectiveness of existing methods. By carefully eliminating the effect of word frequency, our method is able to significantly improve debiasing performance. 5451 6 Conclusion We have discovered that simple changes in word frequency statistics can have an undesirable impact on the debiasing methods used to remove gender bias from word embeddings. Though word frequency statistics have until now been neglected in previous gender bias reduction work, we propose Double-Hard Debias, which mitigates the negative effects that word frequency features can have on debiasing algorithms. We experiment on several benchmarks and demonstrate that our DoubleHard Debias is more effective on gender bias reduction than other methods while also preserving the quality of word embeddings suitable for the downstream applications and embedding-based word analogy tasks. While we have shown that this method significantly reduces gender bias while preserving quality, we hope that this work encourages further research into debiasing along other dimensions of word embeddings in the future. References Abdulrahman Almuhareb. 2006. Attributes in lexical acquisition. Ph.D. thesis, University of Essex, Colchester, UK. Marco Baroni, Stefan Evert, and Alessandro Lenci. 2008. Bridging the gap between semantic theory and computational simulations: Proceedings of the esslli workshop on distributional lexical semantics. Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, GEMS ’11, pages 1–10, Stroudsburg, PA, USA. Association for Computational Linguistics. Christine Basta, Marta Ruiz Costa-juss`a, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. CoRR, abs/1904.08783. William F. Battig and William E. Montague. 1969. Category norms of verbal items in 56 categories a replication and extension of the connecticut category norms. Journal of Experimental Psychology, 80(3p2):1. Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In NIPS. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005. Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding undesirable word embedding associations. arXiv preprint arXiv:1908.06361. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In NAACL-HLT. Chengyue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2018. Frage: Frequency-agnostic word representation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 1334–1345. Curran Associates, Inc. Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. CoRR, abs/1906.00742. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W. Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. CoRR, abs/1906.07337. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 911, 2017, pages 188–197. Association for Computational Linguistics. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schtze. 2008. Introduction to Information Retrieval. Cambridge University Press, Cambridge, UK. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. 5452 Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia. Association for Computational Linguistics. Jiaqi Mu and Pramod Viswanath. 2018. All-but-thetop: Simple and effective postprocessing for word representations. In International Conference on Learning Representations. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. arXiv preprint arXiv:1804.09301. Gabriel Stanovsky, Noah A Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. arXiv preprint arXiv:1906.00591. Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. 2020. Causal mediation analysis for interpreting neural nlp: The case of gender bias. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In North American Chapter of the Association for Computational Linguistics (NAACL). Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In North American Chapter of the Association for Computational Linguistics (NAACL). Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning gender-neutral word embeddings. In EMNLP. A Appendices Text Text Figure 4: Clustering accuracy after projecting out D-th dominating direction and applying Hard Debias. Lower accuracy indicates less bias. Embeddings Top 100 Top 500 Top 1000 Word2Vec 100.0 99.3 99.3 Hard-Word2Vec 79.5 74.3 79.8 Double-Hard Word2Vec 71.0 52.3 56.7 Table 5: Clustering Accuracy(%) of top 100/500/1000 male and female words. Lower accuracy means less gender cues captured. Double-Hard Word2Vec consistently achieves the lowest accuracy. We also apply Double-Hard Debias on Word2Vec embeddings (Mikolov et al., 2013b) which have been widely used by many NLP applications. As shown in Figure 4, our algorithm is able to identify that the eighth principal component significantly affects the debiasing performance. Similarly, we first project away the identified direction u from the original Word2Vec embeddings and then apply Hard Debias algorithm. We compare embeddings debiased by our method with the original Word2Vec embeddings and HardWord2Vec embeddings. Table 5 reports the experimental result using the neighborhood metric. Across three experiments where we cluster top 100/500/1000 male and female words, Double-Hard Word2Vec consistently achieves the lowest accuracy . Note that neighborhood metric reflects gender information that can be captured by the clustering algorithm. Experimental result validates that our method can further improve Hard Debias algorithm. This is also verified in Figure 5 where we conduct tSNE visualization of top 500 male and female embeddings. While the original Word2Vec embeddings clearly locate separately into two groups corresponding to different genders, this phenomenon becomes less obvious after applying our debiasing method. We further evaluate the debiasing outcome with WEAT test. Similar to experiments on GloVe em5453 Embeddings Career & Family Math & Arts Science & Arts d p d p d p Word2Vec 1.89 0.0 1.82 0.0 1.57 2e−4 Hard-Word2Vec 1.80 0.0 1.57 7e−5 0.83 0.05 Double-Hard Word2Vec 1.73 0.0 1.51 5e−4 0.68 0.09 Table 6: WEAT test of embeddings before/after Debiasing. The bias is insignificant when p-value, p > 0.05. Lower effective size (d) indicates less gender bias. Across all target words sets, Double-Hard Word2Vec leads to the smallest effective size. Specifically, for Science & Arts words, Double-Hard Word2Vec successfully reaches a bias insignificant state (p = 0.09). Embeddings Analogy Concept Categorization Sem Syn Total MSR AP ESSLI Battig BLESS Word2Vec 24.8 66.5 55.3 73.6 64.5 75.0 46.3 78.9 Hard-Word2Vec 23.8 66.3 54.9 73.5 62.7 75.0 47.1 77.4 Double-Hard Word2Vec 23.5 66.3 54.9 74.0 63.2 75.0 46.5 77.9 Table 7: Results of word embeddings on word analogy and concept categorization benchmark datasets. Performance (x100) is measured in accuracy and purity, respectively. On both tasks, there is no significant degradation of performance due to applying the proposed method. beddings, we use male names and female names as attribute sets and analyze the association between attribute sets and three target sets. We report effective size and p-value in Table 6. Across three target sets, Double-Hard Word2Vec is able to consistently reduce the effect size. More importantly, the bias related to Science & Arts words becomes insignificant after applying our debiasing method. To test the functionality of debiased embeddings, we again conduct experiments on word analogy and concept categorization tasks. Results are included in Table 7. We demonstrate that our proposed debiasing method brings no significant performance degradation in these two tasks. To summarize, experiments on Word2Vec embeddings also support our conclusion that the proposed Double-Hard Debiasing reduces gender bias to a larger degree while is able to maintain the semantic information in word embeddings. 40 20 0 20 40 40 20 0 20 40 (a) Word2Vec 40 20 0 20 40 60 40 20 0 20 40 60 (b) Hard-Word2Vec 40 20 0 20 40 60 40 20 0 20 40 (c) Double-Hard Word2Vec Figure 5: tSNE visualization of top 500 most male and female embeddings. Double-Hard Word2Vec mixes up two groups to the maximum extent, showing less gender information encoded.
2020
484
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454–5476 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5454 Language (Technology) is Power: A Critical Survey of “Bias” in NLP Su Lin Blodgett Solon Barocas College of Information and Computer Sciences Microsoft Research University of Massachusetts Amherst Cornell University [email protected] [email protected] Hal Daumé III Hanna Wallach Microsoft Research Microsoft Research University of Maryland [email protected] [email protected] Abstract We survey 146 papers analyzing “bias” in NLP systems, fnding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing “bias” is an inherently normative process. We further fnd that these papers’ proposed quantitative techniques for measur­ ing or mitigating “bias” are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these fndings, we describe the beginnings of a path forward by proposing three recommenda­ tions that should guide work analyzing “bias” in NLP systems. These recommendations rest on a greater recognition of the relationships between language and social hierarchies, encouraging researchers and practitioners to articulate their conceptualizations of “bias”—i.e., what kinds of system behaviors are harmful, in what ways, to whom, and why, as well as the normative reasoning underlying these statements—and to center work around the lived experiences of members of commu­ nities affected by NLP systems, while inter­ rogating and reimagining the power relations between technologists and such communities. 1 Introduction A large body of work analyzing “bias” in natural language processing (NLP) systems has emerged in recent years, including work on “bias” in embed­ ding spaces (e.g., Bolukbasi et al., 2016a; Caliskan et al., 2017; Gonen and Goldberg, 2019; May et al., 2019) as well as work on “bias” in systems developed for a breadth of tasks including language modeling (Lu et al., 2018; Bordia and Bowman, 2019), coreference resolution (Rudinger et al., 2018; Zhao et al., 2018a), machine translation (Van­ massenhove et al., 2018; Stanovsky et al., 2019), sentiment analysis (Kiritchenko and Mohammad, 2018), and hate speech/toxicity detection (e.g., Park et al., 2018; Dixon et al., 2018), among others. Although these papers have laid vital ground­ work by illustrating some of the ways that NLP systems can be harmful, the majority of them fail to engage critically with what constitutes “bias” in the frst place. Despite the fact that analyzing “bias” is an inherently normative process—in which some system behaviors are deemed good and others harmful—papers on “bias” in NLP systems are rife with unstated assumptions about what kinds of system behaviors are harmful, in what ways, to whom, and why. Indeed, the term “bias” (or “gender bias” or “racial bias”) is used to describe a wide range of system behaviors, even though they may be harmful in different ways, to different groups, or for different reasons. Even papers analyzing “bias” in NLP systems developed for the same task often conceptualize it differently. For example, the following system behaviors are all understood to be self-evident statements of “racial bias”: (a) embedding spaces in which embed­ dings for names associated with African Americans are closer (compared to names associated with European Americans) to unpleasant words than pleasant words (Caliskan et al., 2017); (b) senti­ ment analysis systems yielding different intensity scores for sentences containing names associated with African Americans and sentences containing names associated with European Americans (Kir­ itchenko and Mohammad, 2018); and (c) toxicity 5455 detection systems scoring tweets containing fea­ tures associated with African-American English as more offensive than tweets without these features (Davidson et al., 2019; Sap et al., 2019). Moreover, some of these papers focus on “racial bias” expressed in written text, while others focus on “racial bias” against authors. This use of imprecise terminology obscures these important differences. We survey 146 papers analyzing “bias” in NLP systems, fnding that their motivations are often vague and inconsistent. Many lack any normative reasoning for why the system behaviors that are described as “bias” are harmful, in what ways, and to whom. Moreover, the vast majority of these papers do not engage with the relevant literature outside of NLP to ground normative concerns when proposing quantitative techniques for measuring or mitigating “bias.” As a result, we fnd that many of these techniques are poorly matched to their motivations, and are not comparable to one another. We then describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing “bias” in NLP systems. We argue that such work should examine the relationships between language and social hi­ erarchies; we call on researchers and practitioners conducting such work to articulate their conceptu­ alizations of “bias” in order to enable conversations about what kinds of system behaviors are harmful, in what ways, to whom, and why; and we recom­ mend deeper engagements between technologists and communities affected by NLP systems. We also provide several concrete research questions that are implied by each of our recommendations. 2 Method Our survey includes all papers known to us analyzing “bias” in NLP systems—146 papers in total. We omitted papers about speech, restricting our survey to papers about written text only. To identify the 146 papers, we frst searched the ACL Anthology1 for all papers with the keywords “bias” or “fairness” that were made available prior to May 2020. We retained all papers about social “bias,” and discarded all papers about other defnitions of the keywords (e.g., hypothesis-only bias, inductive bias, media bias). We also discarded all papers us­ ing “bias” in NLP systems to measure social “bias” in text or the real world (e.g., Garg et al., 2018). To ensure that we did not exclude any relevant 1https://www.aclweb.org/anthology/ NLP task Papers Embeddings (type-level or contextualized) 54 Coreference resolution 20 Language modeling or dialogue generation 17 Hate-speech detection 17 Sentiment analysis 15 Machine translation 8 Tagging or parsing 5 Surveys, frameworks, and meta-analyses 20 Other 22 Table 1: The NLP tasks covered by the 146 papers. papers without the keywords “bias” or “fairness,” we also traversed the citation graph of our initial set of papers, retaining any papers analyzing “bias” in NLP systems that are cited by or cite the papers in our initial set. Finally, we manually inspected any papers analyzing “bias” in NLP systems from leading machine learning, human–computer inter­ action, and web conferences and workshops, such as ICML, NeurIPS, AIES, FAccT, CHI, and WWW, along with any relevant papers that were made available in the “Computation and Language” and “Computers and Society” categories on arXiv prior to May 2020, but found that they had already been identifed via our traversal of the citation graph. We provide a list of all 146 papers in the appendix. In Table 1, we provide a breakdown of the NLP tasks covered by the papers. We note that counts do not sum to 146, because some papers cover multiple tasks. For example, a paper might test the effcacy of a technique for mitigating “bias” in embed­ ding spaces in the context of sentiment analysis. Once identifed, we then read each of the 146 pa­ pers with the goal of categorizing their motivations and their proposed quantitative techniques for mea­ suring or mitigating “bias.” We used a previously developed taxonomy of harms for this categoriza­ tion, which differentiates between so-called alloca­ tional and representational harms (Barocas et al., 2017; Crawford, 2017). Allocational harms arise when an automated system allocates resources (e.g., credit) or opportunities (e.g., jobs) unfairly to dif­ ferent social groups; representational harms arise when a system (e.g., a search engine) represents some social groups in a less favorable light than others, demeans them, or fails to recognize their existence altogether. Adapting and extending this taxonomy, we categorized the 146 papers’ motiva­ tions and techniques into the following categories: . Allocational harms. 5456 Papers Category Motivation Technique Allocational harms 30 4 Stereotyping 50 58 Other representational harms 52 43 Questionable correlations 47 42 Vague/unstated 23 0 Surveys, frameworks, and 20 20 meta-analyses Table 2: The categories into which the 146 papers fall. . Representational harms:2 . Stereotyping that propagates negative gen­ eralizations about particular social groups. . Differences in system performance for dif­ ferent social groups, language that misrep­ resents the distribution of different social groups in the population, or language that is denigrating to particular social groups. . Questionable correlations between system be­ havior and features of language that are typi­ cally associated with particular social groups. . Vague descriptions of “bias” (or “gender bias” or “racial bias”) or no description at all. . Surveys, frameworks, and meta-analyses. In Table 2 we provide counts for each of the six categories listed above. (We also provide a list of the papers that fall into each category in the appendix.) Again, we note that the counts do not sum to 146, because some papers state multiple motivations, propose multiple techniques, or pro­ pose a single technique for measuring or mitigating multiple harms. Table 3, which is in the appendix, contains examples of the papers’ motivations and techniques across a range of different NLP tasks. 3 Findings Categorizing the 146 papers’ motivations and pro­ posed quantitative techniques for measuring or miti­ gating “bias” into the six categories listed above en­ abled us to identify several commonalities, which we present below, along with illustrative quotes. 2We grouped several types of representational harms into two categories to refect that the main point of differentiation between the 146 papers’ motivations and proposed quantitative techniques for measuring or mitigating “bias” is whether or not they focus on stereotyping. Among the papers that do not fo­ cus on stereotyping, we found that most lack suffciently clear motivations and techniques to reliably categorize them further. 3.1 Motivations Papers state a wide range of motivations, multiple motivations, vague motivations, and sometimes no motivations at all. We found that the papers’ motivations span all six categories, with several papers falling into each one. Appropriately, papers that provide surveys or frameworks for an­ alyzing “bias” in NLP systems often state multiple motivations (e.g., Hovy and Spruit, 2016; Bender, 2019; Sun et al., 2019; Rozado, 2020; Shah et al., 2020). However, as the examples in Table 3 (in the appendix) illustrate, many other papers (33%) do so as well. Some papers (16%) state only vague motivations or no motivations at all. For example, “[N]o human should be discriminated on the basis of demographic attributes by an NLP system.” —Kaneko and Bollegala (2019) “[P]rominent word embeddings [...] encode systematic biases against women and black people [...] implicating many NLP systems in scaling up social injustice.” —May et al. (2019) These examples leave unstated what it might mean for an NLP system to “discriminate,” what con­ stitutes “systematic biases,” or how NLP systems contribute to “social injustice” (itself undefned). Papers’ motivations sometimes include no nor­ mative reasoning. We found that some papers (32%) are not motivated by any apparent normative concerns, often focusing instead on concerns about system performance. For example, the frst quote below includes normative reasoning—namely that models should not use demographic information to make predictions—while the other focuses on learned correlations impairing system performance. “In [text classifcation], models are expected to make predictions with the semantic information rather than with the demographic group identity information (e.g., ‘gay’, ‘black’) contained in the sentences.” —Zhang et al. (2020a) “An over-prevalence of some gendered forms in the training data leads to translations with identifable errors. Translations are better for sentences involving men and for sentences containing stereotypical gender roles.” —Saunders and Byrne (2020) Even when papers do state clear motivations, they are often unclear about why the system be­ haviors that are described as “bias” are harm­ ful, in what ways, and to whom. We found that even papers with clear motivations often fail to ex­ plain what kinds of system behaviors are harmful, in what ways, to whom, and why. For example, 5457 “Deploying these word embedding algorithms in practice, for example in automated translation systems or as hiring aids, runs the serious risk of perpetuating problematic biases in important societal contexts.” —Brunet et al. (2019) “[I]f the systems show discriminatory behaviors in the interactions, the user experience will be adversely affected.” —Liu et al. (2019) These examples leave unstated what “problematic biases” or non-ideal user experiences might look like, how the system behaviors might result in these things, and who the relevant stakeholders or users might be. In contrast, we fnd that papers that provide surveys or frameworks for analyzing “bias” in NLP systems often name who is harmed, acknowledging that different social groups may experience these systems differently due to their different relationships with NLP systems or different social positions. For example, Ruane et al. (2019) argue for a “deep understanding of the user groups [sic] characteristics, contexts, and interests” when designing conversational agents. Papers about NLP systems developed for the same task often conceptualize “bias” differ­ ently. Even papers that cover the same NLP task often conceptualize “bias” in ways that differ sub­ stantially and are sometimes inconsistent. Rows 3 and 4 of Table 3 (in the appendix) contain machine translation papers with different conceptualizations of “bias,” leading to different proposed techniques, while rows 5 and 6 contain papers on “bias” in em­ bedding spaces that state different motivations, but propose techniques for quantifying stereotyping. Papers’ motivations confate allocational and representational harms. We found that the pa­ pers’ motivations sometimes (16%) name imme­ diate representational harms, such as stereotyping, alongside more distant allocational harms, which, in the case of stereotyping, are usually imagined as downstream effects of stereotypes on résumé flter­ ing. Many of these papers use the imagined down­ stream effects to justify focusing on particular sys­ tem behaviors, even when the downstream effects are not measured. Papers on “bias” in embedding spaces are especially likely to do this because em­ beddings are often used as input to other systems: “However, none of these papers [on embeddings] have recognized how blatantly sexist the embeddings are and hence risk introducing biases of various types into real-world systems.” —Bolukbasi et al. (2016a) “It is essential to quantify and mitigate gender bias in these embeddings to avoid them from affecting downstream applications.” —Zhou et al. (2019) In contrast, papers that provide surveys or frame­ works for analyzing “bias” in NLP systems treat representational harms as harmful in their own right. For example, Mayfeld et al. (2019) and Ruane et al. (2019) cite the harmful reproduction of dominant linguistic norms by NLP systems (a point to which we return in section 4), while Bender (2019) outlines a range of harms, including seeing stereotypes in search results and being made invis­ ible to search engines due to language practices. 3.2 Techniques Papers’ techniques are not well grounded in the relevant literature outside of NLP. Perhaps un­ surprisingly given that the papers’ motivations are often vague, inconsistent, and lacking in normative reasoning, we also found that the papers’ proposed quantitative techniques for measuring or mitigating “bias” do not effectively engage with the relevant literature outside of NLP. Papers on stereotyping are a notable exception: the Word Embedding Association Test (Caliskan et al., 2017) draws on the Implicit Association Test (Greenwald et al., 1998) from the social psychology literature, while several techniques operationalize the well-studied “Angry Black Woman” stereotype (Kiritchenko and Mohammad, 2018; May et al., 2019; Tan and Celis, 2019) and the “double bind” faced by women (May et al., 2019; Tan and Celis, 2019), in which women who succeed at stereotypically male tasks are perceived to be less likable than similarly successful men (Heilman et al., 2004). Tan and Celis (2019) also examine the compounding effects of race and gender, drawing on Black feminist scholarship on intersectionality (Crenshaw, 1989). Papers’ techniques are poorly matched to their motivations. We found that although 21% of the papers include allocational harms in their motiva­ tions, only four papers actually propose techniques for measuring or mitigating allocational harms. Papers focus on a narrow range of potential sources of “bias.” We found that nearly all of the papers focus on system predictions as the potential sources of “bias,” with many additionally focusing on “bias” in datasets (e.g., differences in the number of gendered pronouns in the training data (Zhao et al., 2019)). Most papers do not interrogate 5458 the normative implications of other decisions made during the development and deployment lifecycle— perhaps unsurprising given that their motivations sometimes include no normative reasoning. A few papers are exceptions, illustrating the impacts of task defnitions, annotation guidelines, and evaluation metrics: Cao and Daumé (2019) study how folk conceptions of gender (Keyes, 2018) are reproduced in coreference resolution systems that assume a strict gender dichotomy, thereby main­ taining cisnormativity; Sap et al. (2019) focus on the effect of priming annotators with information about possible dialectal differences when asking them to apply toxicity labels to sample tweets, fnd­ ing that annotators who are primed are signifcantly less likely to label tweets containing features asso­ ciated with African-American English as offensive. 4 A path forward We now describe how researchers and practitioners conducting work analyzing “bias” in NLP systems might avoid the pitfalls presented in the previous section—the beginnings of a path forward. We propose three recommendations that should guide such work, and, for each, provide several concrete research questions. We emphasize that these ques­ tions are not comprehensive, and are intended to generate further questions and lines of engagement. Our three recommendations are as follows: (R1) Ground work analyzing “bias” in NLP sys­ tems in the relevant literature outside of NLP that explores the relationships between lan­ guage and social hierarchies. Treat represen­ tational harms as harmful in their own right. (R2) Provide explicit statements of why the system behaviors that are described as “bias” are harmful, in what ways, and to whom. Be forthright about the normative reasoning (Green, 2019) underlying these statements. (R3) Examine language use in practice by engag­ ing with the lived experiences of members of communities affected by NLP systems. Inter­ rogate and reimagine the power relations be­ tween technologists and such communities. 4.1 Language and social hierarchies Turning frst to (R1), we argue that work analyzing “bias” in NLP systems will paint a much fuller pic­ ture if it engages with the relevant literature outside of NLP that explores the relationships between language and social hierarchies. Many disciplines, including sociolinguistics, linguistic anthropology, sociology, and social psychology, study how language takes on social meaning and the role that language plays in maintaining social hierarchies. For example, language is the means through which social groups are labeled and one way that beliefs about social groups are transmitted (e.g., Maass, 1999; Beukeboom and Burgers, 2019). Group labels can serve as the basis of stereotypes and thus reinforce social inequalities: “[T]he label content functions to identify a given category of people, and thereby conveys category boundaries and a position in a hierarchical taxonomy” (Beukeboom and Burgers, 2019). Similarly, “controlling images,” such as stereotypes of Black women, which are linguistically and visually transmitted through literature, news media, television, and so forth, provide “ideological justifcation” for their continued oppression (Collins, 2000, Chapter 4). As a result, many groups have sought to bring about social changes through changes in language, disrupting patterns of oppression and marginal­ ization via so-called “gender-fair” language (Sczesny et al., 2016; Menegatti and Rubini, 2017), language that is more inclusive to people with disabilities (ADA, 2018), and language that is less dehumanizing (e.g., abandoning the use of the term “illegal” in everyday discourse on immigration in the U.S. (Rosa, 2019)). The fact that group labels are so contested is evidence of how deeply inter­ twined language and social hierarchies are. Taking “gender-fair” language as an example, the hope is that reducing asymmetries in language about women and men will reduce asymmetries in their social standing. Meanwhile, struggles over lan­ guage use often arise from dominant social groups’ desire to “control both material and symbolic resources”—i.e., “the right to decide what words will mean and to control those meanings”—as was the case in some white speakers’ insistence on using offensive place names against the objections of Indigenous speakers (Hill, 2008, Chapter 3). Sociolinguists and linguistic anthropologists have also examined language attitudes and lan­ guage ideologies, or people’s metalinguistic beliefs about language: Which language varieties or prac­ tices are taken as standard, ordinary, or unmarked? Which are considered correct, prestigious, or ap­ propriate for public use, and which are considered incorrect, uneducated, or offensive (e.g., Campbell­ 5459 Kibler, 2009; Preston, 2009; Loudermilk, 2015; Lanehart and Malik, 2018)? Which are rendered in­ visible (Roche, 2019)?3 Language ideologies play a vital role in reinforcing and justifying social hi­ erarchies because beliefs about language varieties or practices often translate into beliefs about their speakers (e.g. Alim et al., 2016; Rosa and Flores, 2017; Craft et al., 2020). For example, in the U.S., the portrayal of non-white speakers’ language varieties and practices as linguistically defcient helped to justify violent European colonialism, and today continues to justify enduring racial hierar­ chies by maintaining views of non-white speakers as lacking the language “required for complex thinking processes and successful engagement in the global economy” (Rosa and Flores, 2017). Recognizing the role that language plays in maintaining social hierarchies is critical to the future of work analyzing “bias” in NLP systems. First, it helps to explain why representational harms are harmful in their own right. Second, the complexity of the relationships between language and social hierarchies illustrates why studying “bias” in NLP systems is so challenging, suggesting that researchers and practitioners will need to move beyond existing algorithmic fairness techniques. We argue that work must be grounded in the relevant literature outside of NLP that examines the relationships between language and social hierarchies; without this grounding, researchers and practitioners risk measuring or mitigating only what is convenient to measure or mitigate, rather than what is most normatively concerning. More specifcally, we recommend that work analyzing “bias” in NLP systems be reoriented around the following question: How are social hierarchies, language ideologies, and NLP systems coproduced? This question mirrors Benjamin’s (2020) call to examine how “race and technology are coproduced”—i.e., how racial hierarchies, and the ideologies and discourses that maintain them, create and are re-created by technology. We recom­ mend that researchers and practitioners similarly ask how existing social hierarchies and language ideologies drive the development and deployment of NLP systems, and how these systems therefore reproduce these hierarchies and ideologies. As a starting point for reorienting work analyzing “bias” in NLP systems around this question, we 3Language ideologies encompass much more than this; see, e.g., Lippi-Green (2012), Alim et al. (2016), Rosa and Flores (2017), Rosa and Burdick (2017), and Charity Hudley (2017). provide the following concrete research questions: . How do social hierarchies and language ideologies infuence the decisions made during the development and deployment lifecycle? What kinds of NLP systems do these decisions result in, and what kinds do they foreclose? ⋄ General assumptions: To which linguistic norms do NLP systems adhere (Bender, 2019; Ruane et al., 2019)? Which language practices are implicitly assumed to be standard, ordinary, correct, or appropriate? ⋄ Task defnition: For which speakers are NLP systems (and NLP resources) developed? (See Joshi et al. (2020) for a discussion.) How do task defnitions discretize the world? For example, how are social groups delineated when defning demographic attribute prediction tasks (e.g., Koppel et al., 2002; Rosenthal and McKeown, 2011; Nguyen et al., 2013)? What about languages in native language prediction tasks (Tetreault et al., 2013)? ⋄ Data: How are datasets collected, prepro­ cessed, and labeled or annotated? What are the impacts of annotation guidelines, anno­ tator assumptions and perceptions (Olteanu et al., 2019; Sap et al., 2019; Geiger et al., 2020), and annotation aggregation pro­ cesses (Pavlick and Kwiatkowski, 2019)? ⋄ Evaluation: How are NLP systems evalu­ ated? What are the impacts of evaluation metrics (Olteanu et al., 2017)? Are any non-quantitative evaluations performed? . How do NLP systems reproduce or transform language ideologies? Which language varieties or practices come to be deemed good or bad? Might “good” language simply mean language that is easily handled by existing NLP sys­ tems? For example, linguistic phenomena aris­ ing from many language practices (Eisenstein, 2013) are described as “noisy text” and often viewed as a target for “normalization.” How do the language ideologies that are reproduced by NLP systems maintain social hierarchies? . Which representational harms are being measured or mitigated? Are these the most normatively concerning harms, or merely those that are well handled by existing algo­ rithmic fairness techniques? Are there other representational harms that might be analyzed? 5460 4.2 Conceptualizations of “bias” Turning now to (R2), we argue that work analyzing “bias” in NLP systems should provide explicit statements of why the system behaviors that are described as “bias” are harmful, in what ways, and to whom, as well as the normative reasoning underlying these statements. In other words, researchers and practitioners should articulate their conceptualizations of “bias.” As we described above, papers often contain descriptions of system behaviors that are understood to be self-evident statements of “bias.” This use of imprecise terminology has led to papers all claiming to analyze “bias” in NLP systems, sometimes even in systems developed for the same task, but with different or even inconsistent conceptualizations of “bias,” and no explanations for these differences. Yet analyzing “bias” is an inherently normative process—in which some system behaviors are deemed good and others harmful—even if assump­ tions about what kinds of system behaviors are harmful, in what ways, for whom, and why are not stated. We therefore echo calls by Bardzell and Bardzell (2011), Keyes et al. (2019), and Green (2019) for researchers and practitioners to make their normative reasoning explicit by articulating the social values that underpin their decisions to deem some system behaviors as harmful, no matter how obvious such values appear to be. We further argue that this reasoning should take into account the relationships between language and social hierarchies that we described above. First, these relationships provide a foundation from which to approach the normative reasoning that we recom­ mend making explicit. For example, some system behaviors might be harmful precisely because they maintain social hierarchies. Second, if work analyzing “bias” in NLP systems is reoriented to understand how social hierarchies, language ideologies, and NLP systems are coproduced, then this work will be incomplete if we fail to account for the ways that social hierarchies and language ideologies determine what we mean by “bias” in the frst place. As a starting point, we therefore provide the following concrete research questions: . What kinds of system behaviors are described as “bias”? What are their potential sources (e.g., general assumptions, task defnition, data)? . In what ways are these system behaviors harm­ ful, to whom are they harmful, and why? . What are the social values (obvious or not) that underpin this conceptualization of “bias?” 4.3 Language use in practice Finally, we turn to (R3). Our perspective, which rests on a greater recognition of the relationships between language and social hierarchies, suggests several directions for examining language use in practice. Here, we focus on two. First, because lan­ guage is necessarily situated, and because different social groups have different lived experiences due to their different social positions (Hanna et al., 2020)—particularly groups at the intersections of multiple axes of oppression—we recommend that researchers and practitioners center work analyzing “bias” in NLP systems around the lived experiences of members of communities affected by these systems. Second, we recommend that the power relations between technologists and such communities be interrogated and reimagined. Researchers have pointed out that algorithmic fairness techniques, by proposing incremental technical mitigations—e.g., collecting new datasets or training better models—maintain these power relations by (a) assuming that automated systems should continue to exist, rather than asking whether they should be built at all, and (b) keeping development and deployment decisions in the hands of technologists (Bennett and Keyes, 2019; Cifor et al., 2019; Green, 2019; Katell et al., 2020). There are many disciplines for researchers and practitioners to draw on when pursuing these directions. For example, in human–computer interaction, Hamidi et al. (2018) study transgender people’s experiences with automated gender recognition systems in order to uncover how these systems reproduce structures of transgender exclusion by redefning what it means to perform gender “normally.” Value-sensitive design provides a framework for accounting for the values of differ­ ent stakeholders in the design of technology (e.g., Friedman et al., 2006; Friedman and Hendry, 2019; Le Dantec et al., 2009; Yoo et al., 2019), while participatory design seeks to involve stakeholders in the design process itself (Sanders, 2002; Muller, 2007; Simonsen and Robertson, 2013; DiSalvo et al., 2013). Participatory action research in educa­ tion (Kemmis, 2006) and in language documenta­ tion and reclamation (Junker, 2018) is also relevant. In particular, work on language reclamation to support decolonization and tribal sovereignty (Leonard, 2012) and work in sociolinguistics focus­ 5461 ing on developing co-equal research relationships with community members and supporting linguis­ tic justice efforts (e.g., Bucholtz et al., 2014, 2016, 2019) provide examples of more emancipatory rela­ tionships with communities. Finally, several work­ shops and events have begun to explore how to em­ power stakeholders in the development and deploy­ ment of technology (Vaccaro et al., 2019; Givens and Morris, 2020; Sassaman et al., 2020)4 and how to help researchers and practitioners consider when not to build systems at all (Barocas et al., 2020). As a starting point for engaging with commu­ nities affected by NLP systems, we therefore provide the following concrete research questions: . How do communities become aware of NLP systems? Do they resist them, and if so, how? . What additional costs are borne by communi­ ties for whom NLP systems do not work well? . Do NLP systems shift power toward oppressive institutions (e.g., by enabling predictions that communities do not want made, linguistically based unfair allocation of resources or oppor­ tunities (Rosa and Flores, 2017), surveillance, or censorship), or away from such institutions? . Who is involved in the development and deployment of NLP systems? How do decision-making processes maintain power re­ lations between technologists and communities affected by NLP systems? Can these pro­ cesses be changed to reimagine these relations? 5 Case study To illustrate our recommendations, we present a case study covering work on African-American English (AAE).5 Work analyzing “bias” in the con­ text of AAE has shown that part-of-speech taggers, language identifcation systems, and dependency parsers all work less well on text containing features associated with AAE than on text without these features (Jørgensen et al., 2015, 2016; Blod­ gett et al., 2016, 2018), and that toxicity detection systems score tweets containing features associated with AAE as more offensive than tweets with­ out them (Davidson et al., 2019; Sap et al., 2019). These papers have been critical for highlighting AAE as a language variety for which existing NLP 4Also https://participatoryml.github.io/ 5This language variety has had many different names over the years, but is now generally called AfricanAmerican English (AAE), African-American Vernacular En­ glish (AAVE), or African-American Language (AAL) (Green, 2002; Wolfram and Schilling, 2015; Rickford and King, 2016). systems may not work, illustrating their limitations. However, they do not conceptualize “racial bias” in the same way. The frst four of these papers simply focus on system performance differences between text containing features associated with AAE and text without these features. In contrast, the last two papers also focus on such system performance differences, but motivate this focus with the fol­ lowing additional reasoning: If tweets containing features associated with AAE are scored as more offensive than tweets without these features, then this might (a) yield negative perceptions of AAE; (b) result in disproportionate removal of tweets containing these features, impeding participation in online platforms and reducing the space avail­ able online in which speakers can use AAE freely; and (c) cause AAE speakers to incur additional costs if they have to change their language practices to avoid negative perceptions or tweet removal. More importantly, none of these papers engage with the literature on AAE, racial hierarchies in the U.S., and raciolinguistic ideologies. By failing to engage with this literature—thereby treating AAE simply as one of many non-Penn Treebank vari­ eties of English or perhaps as another challenging domain—work analyzing “bias” in NLP systems in the context of AAE fails to situate these systems in the world. Who are the speakers of AAE? How are they viewed? We argue that AAE as a language variety cannot be separated from its speakers— primarily Black people in the U.S., who experience systemic anti-Black racism—and the language ide­ ologies that reinforce and justify racial hierarchies. Even after decades of sociolinguistic efforts to legitimize AAE, it continues to be viewed as “bad” English and its speakers continue to be viewed as linguistically inadequate—a view called the defcit perspective (Alim et al., 2016; Rosa and Flores, 2017). This perspective persists despite demon­ strations that AAE is rule-bound and grammatical (Mufwene et al., 1998; Green, 2002), in addition to ample evidence of its speakers’ linguistic adroit­ ness (e.g., Alim, 2004; Rickford and King, 2016). This perspective belongs to a broader set of raciolin­ guistic ideologies (Rosa and Flores, 2017), which also produce allocational harms; speakers of AAE are frequently penalized for not adhering to domi­ nant language practices, including in the education system (Alim, 2004; Terry et al., 2010), when seeking housing (Baugh, 2018), and in the judicial system, where their testimony is misunderstood or, 5462 worse yet, disbelieved (Rickford and King, 2016; Jones et al., 2019). These raciolinguistic ideologies position racialized communities as needing linguistic intervention, such as language education programs, in which these and other harms can be reduced if communities accommodate to domi­ nant language practices (Rosa and Flores, 2017). In the technology industry, speakers of AAE are often not considered consumers who matter. For example, Benjamin (2019) recounts an Apple em­ ployee who worked on speech recognition for Siri: “As they worked on different English dialects — Australian, Singaporean, and Indian English — [the employee] asked his boss: ‘What about African American English?’ To this his boss responded: ‘Well, Apple products are for the premium market.”’ The reality, of course, is that speakers of AAE tend not to represent the “premium market” precisely be­ cause of institutions and policies that help to main­ tain racial hierarchies by systematically denying them the opportunities to develop wealth that are available to white Americans (Rothstein, 2017)— an exclusion that is reproduced in technology by countless decisions like the one described above. Engaging with the literature outlined above situates the system behaviors that are described as “bias,” providing a foundation for normative reasoning. Researchers and practitioners should be concerned about “racial bias” in toxicity detection systems not only because performance differences impair system performance, but because they reproduce longstanding injustices of stigmatization and disenfranchisement for speakers of AAE. In re-stigmatizing AAE, they reproduce language ideologies in which AAE is viewed as ungrammatical, uneducated, and offensive. These ideologies, in turn, enable linguistic discrimination and justify enduring racial hierarchies (Rosa and Flores, 2017). Our perspective, which understands racial hierarchies and raciolinguistic ideologies as structural conditions that govern the development and deployment of technology, implies that techniques for measuring or mitigating “bias” in NLP systems will necessarily be incomplete unless they interrogate and dismantle these structural conditions, including the power relations between technologists and racialized communities. We emphasize that engaging with the literature on AAE, racial hierarchies in the U.S., and raciolinguistic ideologies can generate new lines of engagement. These lines include work on the ways that the decisions made during the development and deployment of NLP systems produce stigmati­ zation and disenfranchisement, and work on AAE use in practice, such as the ways that speakers of AAE interact with NLP systems that were not designed for them. This literature can also help re­ searchers and practitioners address the allocational harms that may be produced by NLP systems, and ensure that even well-intentioned NLP systems do not position racialized communities as needing linguistic intervention or accommodation to dominant language practices. Finally, researchers and practitioners wishing to design better systems can also draw on a growing body of work on anti-racist language pedagogy that challenges the defcit perspective of AAE and other racialized language practices (e.g. Flores and Chaparro, 2018; Baker-Bell, 2019; Martínez and Mejía, 2019), as well as the work that we described in section 4.3 on reimagining the power relations between tech­ nologists and communities affected by technology. 6 Conclusion By surveying 146 papers analyzing “bias” in NLP systems, we found that (a) their motivations are often vague, inconsistent, and lacking in norma­ tive reasoning; and (b) their proposed quantitative techniques for measuring or mitigating “bias” are poorly matched to their motivations and do not en­ gage with the relevant literature outside of NLP. To help researchers and practitioners avoid these pitfalls, we proposed three recommendations that should guide work analyzing “bias” in NLP sys­ tems, and, for each, provided several concrete re­ search questions. These recommendations rest on a greater recognition of the relationships between language and social hierarchies—a step that we see as paramount to establishing a path forward. Acknowledgments This paper is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1451512. Any opin­ ion, fndings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily refect the views of the Na­ tional Science Foundation. We thank the reviewers for their useful feedback, especially the sugges­ tion to include additional details about our method. 5463 References Artem Abzaliev. 2019. On GAP coreference resolu­ tion shared task: insights from the 3rd place solution. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 107–112, Flo­ rence, Italy. ADA. 2018. Guidelines for Writing About Peo­ ple With Disabilities. ADA National Network. https://bit.ly/2KREbkB. Oshin Agarwal, Funda Durupinar, Norman I. Badler, and Ani Nenkova. 2019. Word embeddings (also) encode human personality stereotypes. In Proceed­ ings of the Joint Conference on Lexical and Com­ putational Semantics, pages 205–211, Minneapolis, MN. H. Samy Alim. 2004. You Know My Steez: An Ethno­ graphic and Sociolinguistic Study of Styleshifting in a Black American Speech Community. American Di­ alect Society. H. Samy Alim, John R. Rickford, and Arnetha F. Ball, editors. 2016. Raciolinguistics: How Language Shapes Our Ideas About Race. Oxford University Press. Sandeep Attree. 2019. Gendered ambiguous pronouns shared task: Boosting model confdence by evidence pooling. In Proceedings of the Workshop on Gen­ der Bias in Natural Language Processing, Florence, Italy. Pinkesh Badjatiya, Manish Gupta, and Vasudeva Varma. 2019. Stereotypical bias removal for hate speech detection task using knowledge-based gen­ eralizations. In Proceedings of the International World Wide Web Conference, pages 49–59, San Fran­ cisco, CA. Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. 2019. Differential Privacy Has Dis­ parate Impact on Model Accuracy. In Proceedings of the Conference on Neural Information Processing Systems, Vancouver, Canada. April Baker-Bell. 2019. Dismantling anti-black lin­ guistic racism in English language arts classrooms: Toward an anti-racist black language pedagogy. The­ ory Into Practice. David Bamman, Sejal Popat, and Sheng Shen. 2019. An annotated dataset of literary entities. In Proceed­ ings of the North American Association for Com­ putational Linguistics (NAACL), pages 2138–2144, Minneapolis, MN. Xingce Bao and Qianqian Qiao. 2019. Transfer Learn­ ing from Pre-trained BERT for Pronoun Resolution. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 82–88, Flo­ rence, Italy. Shaowen Bardzell and Jeffrey Bardzell. 2011. Towards a Feminist HCI Methodology: Social Science, Femi­ nism, and HCI. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), pages 675–684, Vancouver, Canada. Solon Barocas, Asia J. Biega, Benjamin Fish, J˛edrzej Niklas, and Luke Stark. 2020. When Not to De­ sign, Build, or Deploy. In Proceedings of the Confer­ ence on Fairness, Accountability, and Transparency, Barcelona, Spain. Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The Problem With Bias: Al­ locative Versus Representational Harms in Machine Learning. In Proceedings of SIGCIS, Philadelphia, PA. Christine Basta, Marta R. Costa-jussà, and Noe Casas. 2019. Evaluating the underlying gender bias in con­ textualized word embeddings. In Proceedings of the Workshop on Gender Bias for Natural Language Processing, pages 33–39, Florence, Italy. John Baugh. 2018. Linguistics in Pursuit of Justice. Cambridge University Press. Emily M. Bender. 2019. A typology of ethical risks in language technology with an eye towards where transparent documentation can help. Presented at The Future of Artifcial Intelligence: Language, Ethics, Technology Workshop. https://bit.ly/ 2P9t9M6. Ruha Benjamin. 2019. Race After Technology: Aboli­ tionist Tools for the New Jim Code. John Wiley & Sons. Ruha Benjamin. 2020. 2020 Vision: Reimagining the Default Settings of Technology & Society. Keynote at ICLR. Cynthia L. Bennett and Os Keyes. 2019. What is the Point of Fairness? Disability, AI, and The Com­ plexity of Justice. In Proceedings of the ASSETS Workshop on AI Fairness for People with Disabili­ ties, Pittsburgh, PA. Camiel J. Beukeboom and Christian Burgers. 2019. How Stereotypes Are Shared Through Language: A Review and Introduction of the Social Categories and Stereotypes Communication (SCSC) Frame­ work. Review of Communication Research, 7:1–37. Shruti Bhargava and David Forsyth. 2019. Expos­ ing and Correcting the Gender Bias in Image Captioning Datasets and Models. arXiv preprint arXiv:1912.00578. Jayadev Bhaskaran and Isha Bhallamudi. 2019. Good Secretaries, Bad Truck Drivers? Occupational Gen­ der Stereotypes in Sentiment Analysis. In Proceed­ ings of the Workshop on Gender Bias in Natural Lan­ guage Processing, pages 62–68, Florence, Italy. 5464 Su Lin Blodgett, Lisa Green, and Brendan O’Connor. 2016. Demographic Dialectal Variation in Social Media: A Case Study of African-American English. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 1119–1130, Austin, TX. Su Lin Blodgett and Brendan O’Connor. 2017. Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English. In Proceedings of the Workshop on Fairness, Ac­ countability, and Transparency in Machine Learning (FAT/ML), Halifax, Canada. Su Lin Blodgett, Johnny Wei, and Brendan O’Connor. 2018. Twitter Universal Dependency Parsing for African-American and Mainstream American En­ glish. In Proceedings of the Association for Compu­ tational Linguistics (ACL), pages 1415–1425, Mel­ bourne, Australia. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016a. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Pro­ ceedings of the Conference on Neural Information Processing Systems, pages 4349–4357, Barcelona, Spain. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016b. Quantifying and reducing stereotypes in word embeddings. In Proceedings of the ICML Workshop on #Data4Good: Machine Learning in Social Good Applications, pages 41–45, New York, NY. Shikha Bordia and Samuel R. Bowman. 2019. Identify­ ing and reducing gender bias in word-level language models. In Proceedings of the NAACL Student Re­ search Workshop, pages 7–15, Minneapolis, MN. Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ash­ ton Anderson, and Richard Zemel. 2019. Under­ standing the Origins of Bias in Word Embeddings. In Proceedings of the International Conference on Machine Learning, pages 803–811, Long Beach, CA. Mary Bucholtz, Dolores Inés Casillas, and Jin Sook Lee. 2016. Beyond Empowerment: Accompani­ ment and Sociolinguistic Justice in a Youth Research Program. In Robert Lawson and Dave Sayers, edi­ tors, Sociolinguistic Research: Application and Im­ pact, pages 25–44. Routledge. Mary Bucholtz, Dolores Inés Casillas, and Jin Sook Lee. 2019. California Latinx Youth as Agents of Sociolinguistic Justice. In Netta Avineri, Laura R. Graham, Eric J. Johnson, Robin Conley Riner, and Jonathan Rosa, editors, Language and Social Justice in Practice, pages 166–175. Routledge. Mary Bucholtz, Audrey Lopez, Allina Mojarro, Elena Skapoulli, Chris VanderStouwe, and Shawn WarnerGarcia. 2014. Sociolinguistic Justice in the Schools: Student Researchers as Linguistic Experts. Lan­ guage and Linguistics Compass, 8:144–157. Kaylee Burns, Lisa Anne Hendricks, Kate Saenko, Trevor Darrell, and Anna Rohrbach. 2018. Women also Snowboard: Overcoming Bias in Captioning Models. In Procedings of the European Conference on Computer Vision (ECCV), pages 793–811, Mu­ nich, Germany. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334). Kathryn Campbell-Kibler. 2009. The nature of so­ ciolinguistic perception. Language Variation and Change, 21(1):135–156. Yang Trista Cao and Hal Daumé, III. 2019. To­ ward gender-inclusive coreference resolution. arXiv preprint arXiv:1910.13913. Rakesh Chada. 2019. Gendered pronoun resolution us­ ing bert and an extractive question answering formu­ lation. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 126– 133, Florence, Italy. Kaytlin Chaloner and Alfredo Maldonado. 2019. Mea­ suring Gender Bias in Word Embedding across Do­ mains and Discovering New Gender Bias Word Cat­ egories. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 25–32, Florence, Italy. Anne H. Charity Hudley. 2017. Language and Racial­ ization. In Ofelia García, Nelson Flores, and Mas­ similiano Spotti, editors, The Oxford Handbook of Language and Society. Oxford University Press. Won Ik Cho, Ji Won Kim, Seok Min Kim, and Nam Soo Kim. 2019. On measuring gender bias in translation of gender-neutral pronouns. In Proceed­ ings of the Workshop on Gender Bias in Natural Lan­ guage Processing, pages 173–181, Florence, Italy. Shivang Chopra, Ramit Sawhney, Puneet Mathur, and Rajiv Ratn Shah. 2020. Hindi-English Hate Speech Detection: Author Profling, Debiasing, and Practi­ cal Perspectives. In Proceedings of the AAAI Con­ ference on Artifcial Intelligence (AAAI), New York, NY. Marika Cifor, Patricia Garcia, T.L. Cowan, Jasmine Rault, Tonia Sutherland, Anita Say Chan, Jennifer Rode, Anna Lauren Hoffmann, Niloufar Salehi, and Lisa Nakamura. 2019. Feminist Data ManifestNo. Retrieved from https://www.manifestno. com/. Patricia Hill Collins. 2000. Black Feminist Thought: Knowledge, Consciousness, and the Politics of Em­ powerment. Routledge. 5465 Justin T. Craft, Kelly E. Wright, Rachel Elizabeth Weissler, and Robin M. Queen. 2020. Language and Discrimination: Generating Meaning, Perceiv­ ing Identities, and Discriminating Outcomes. An­ nual Review of Linguistics, 6(1). Kate Crawford. 2017. The Trouble with Bias. Keynote at NeurIPS. Kimberle Crenshaw. 1989. Demarginalizing the Inter­ section of Race and Sex: A Black Feminist Critique of Antidiscrmination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Fo­ rum. Amanda Cercas Curry and Verena Rieser. 2018. #MeToo: How Conversational Systems Respond to Sexual Harassment. In Proceedings of the Workshop on Ethics in Natural Language Processing, pages 7– 14, New Orleans, LA. Karan Dabas, Nishtha Madaan, Gautam Singh, Vijay Arya, Sameep Mehta, and Tanmoy Chakraborty. 2020. Fair Transfer of Multiple Style Attributes in Text. arXiv preprint arXiv:2001.06693. Thomas Davidson, Debasmita Bhattacharya, and Ing­ mar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Workshop on Abusive Language Online, pages 25–35, Florence, Italy. Maria De-Arteaga, Alexey Romanov, Hanna Wal­ lach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kentha­ padi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Proceedings of the Confer­ ence on Fairness, Accountability, and Transparency, pages 120–128, Atlanta, GA. Sunipa Dev, Tao Li, Jeff Phillips, and Vivek Sriku­ mar. 2019. On Measuring and Mitigating Biased Inferences of Word Embeddings. arXiv preprint arXiv:1908.09369. Sunipa Dev and Jeff Phillips. 2019. Attenuating Bias in Word Vectors. In Proceedings of the International Conference on Artifcial Intelligence and Statistics, pages 879–887, Naha, Japan. Mark Díaz, Isaac Johnson, Amanda Lazar, Anne Marie Piper, and Darren Gergle. 2018. Addressing agerelated bias in sentiment analysis. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), Montréal, Canada. Emily Dinan, Angela Fan, Adina Williams, Jack Ur­ banek, Douwe Kiela, and Jason Weston. 2019. Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation. arXiv preprint arXiv:1911.03842. Carl DiSalvo, Andrew Clement, and Volkmar Pipek. 2013. Communities: Participatory Design for, with and by communities. In Jesper Simonsen and Toni Robertson, editors, Routledge International Hand­ book of Participatory Design, pages 182–209. Routledge. Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat­ ing unintended bias in text classifcation. In Pro­ ceedings of the Conference on Artifcial Intelligence, Ethics, and Society (AIES), New Orleans, LA. Jacob Eisenstein. 2013. What to do about bad lan­ guage on the Internet. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 359–369. Kawin Ethayarajh. 2020. Is Your Classifer Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds. In Proceedings of the Associa­ tion for Computational Linguistics (ACL). Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding Undesirable Word Embedding Assocations. In Proceedings of the Association for Computational Linguistics (ACL), pages 1696–1705, Florence, Italy. Joseph Fisher. 2019. Measuring social bias in knowledge graph embeddings. arXiv preprint arXiv:1912.02761. Nelson Flores and Sofa Chaparro. 2018. What counts as language education policy? Developing a materi­ alist Anti-racist approach to language activism. Lan­ guage Policy, 17(3):365–384. Omar U. Florez. 2019. On the Unintended Social Bias of Training Language Generation Models with Data from Local Media. In Proceedings of the NeurIPS Workshop on Human-Centric Machine Learning, Vancouver, Canada. Joel Escudé Font and Marta R. Costa-jussà. 2019. Equalizing gender biases in neural machine trans­ lation with word embeddings techniques. In Pro­ ceedings of the Workshop on Gender Bias for Natu­ ral Language Processing, pages 147–154, Florence, Italy. Batya Friedman and David G. Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press. Batya Friedman, Peter H. Kahn Jr., and Alan Borning. 2006. Value Sensitive Design and Information Sys­ tems. In Dennis Galletta and Ping Zhang, editors, Human-Computer Interaction in Management Infor­ mation Systems: Foundations, pages 348–372. M.E. Sharpe. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes. Proceed­ ings of the National Academy of Sciences, 115(16). 5466 Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, and Alex Beutel. 2019. Counterfactual fairness in text classifcation through robust­ ness. In Proceedings of the Conference on Artifcial Intelligence, Ethics, and Society (AIES), Honolulu, HI. Aparna Garimella, Carmen Banea, Dirk Hovy, and Rada Mihalcea. 2019. Women’s syntactic resilience and men’s grammatical luck: Gender bias in part-of­ speech tagging and dependency parsing data. In Pro­ ceedings of the Association for Computational Lin­ guistics (ACL), pages 3493–3498, Florence, Italy. Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2020. Towards Understand­ ing Gender Bias in Relation Extraction. In Proceed­ ings of the Association for Computational Linguis­ tics (ACL). R. Stuart Geiger, Kevin Yu, Yanlai Yang, Mindy Dai, Jie Qiu, Rebekah Tang, and Jenny Huang. 2020. Garbage In, Garbage Out? Do Machine Learn­ ing Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From? In Proceedings of the Conference on Fairness, Ac­ countability, and Transparency, pages 325–336. Oguzhan Gencoglu. 2020. Cyberbullying Detec­ tion with Fairness Constraints. arXiv preprint arXiv:2005.06625. Alexandra Reeve Givens and Meredith Ringel Morris. 2020. Centering Disability Perspecives in Algorith­ mic Fairness, Accountability, and Transparency. In Proceedings of the Conference on Fairness, Account­ ability, and Transparency, Barcelona, Spain. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gen­ der Biases in Word Embeddings But do not Remove Them. In Proceedings of the North American As­ sociation for Computational Linguistics (NAACL), pages 609–614, Minneapolis, MN. Hila Gonen and Kellie Webster. 2020. Auto­ matically Identifying Gender Issues in Machine Translation using Perturbations. arXiv preprint arXiv:2004.14065. Ben Green. 2019. “Good” isn’t good enough. In Pro­ ceedings of the AI for Social Good Workshop, Van­ couver, Canada. Lisa J. Green. 2002. African American English: A Lin­ guistic Introduction. Cambridge University Press. Anthony G. Greenwald, Debbie E. McGhee, and Jor­ dan L.K. Schwartz. 1998. Measuring individual dif­ ferences in implicit cognition: The implicit associa­ tion test. Journal of Personality and Social Psychol­ ogy, 74(6):1464–1480. Enoch Opanin Gyamf, Yunbo Rao, Miao Gou, and Yanhua Shao. 2020. deb2viz: Debiasing gender in word embedding data using subspace visualization. In Proceedings of the International Conference on Graphics and Image Processing. Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M. Branham. 2018. Gender Recognition or Gender Re­ ductionism? The Social Implications of Automatic Gender Recognition Systems. In Proceedings of the Conference on Human Factors in Computing Sys­ tems (CHI), Montréal, Canada. Alex Hanna, Emily Denton, Andrew Smart, and Jamila Smith-Loud. 2020. Towards a Critical Race Method­ ology in Algorithmic Fairness. In Proceedings of the Conference on Fairness, Accountability, and Trans­ parency, pages 501–512, Barcelona, Spain. Madeline E. Heilman, Aaaron S. Wallen, Daniella Fuchs, and Melinda M. Tamkins. 2004. Penalties for Success: Reactions to Women Who Succeed at Male Gender-Typed Tasks. Journal of Applied Psy­ chology, 89(3):416–427. Jane H. Hill. 2008. The Everyday Language of White Racism. Wiley-Blackwell. Dirk Hovy, Federico Bianchi, and Tommaso Fornaciari. 2020. Can You Translate that into Man? Commer­ cial Machine Translation Systems Include Stylistic Biases. In Proceedings of the Association for Com­ putational Linguistics (ACL). Dirk Hovy and Anders Søgaard. 2015. Tagging Per­ formance Correlates with Author Age. In Proceed­ ings of the Association for Computational Linguis­ tics and the International Joint Conference on Nat­ ural Language Processing, pages 483–488, Beijing, China. Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceed­ ings of the Association for Computational Linguis­ tics (ACL), pages 591–598, Berlin, Germany. Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack W. Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. 2019. Reducing Sentiment Bias in Language Models via Counterfactual Evaluation. arXiv preprint arXiv:1911.03064. Xiaolei Huang, Linzi Xing, Franck Dernoncourt, and Michael J. Paul. 2020. Multilingual Twitter Cor­ pus and Baselines for Evaluating Demographic Bias in Hate Speech Recognition. In Proceedings of the Language Resources and Evaluation Conference (LREC), Marseille, France. Christoph Hube, Maximilian Idahl, and Besnik Fetahu. 2020. Debiasing Word Embeddings from Sentiment Associations in Names. In Proceedings of the Inter­ national Conference on Web Search and Data Min­ ing, pages 259–267, Houston, TX. 5467 Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen De­ nuyl. 2020. Social Biases in NLP Models as Barriers for Persons with Disabilities. In Proceedings of the Association for Computational Linguistics (ACL). Matei Ionita, Yury Kashnitsky, Ken Krige, Vladimir Larin, Dennis Logvinenko, and Atanas Atanasov. 2019. Resolving gendered ambiguous pronouns with BERT. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 113–119, Florence, Italy. Hailey James-Sorenson and David Alvarez-Melis. 2019. Probabilistic Bias Mitigation in Word Embed­ dings. In Proceedings of the Workshop on HumanCentric Machine Learning, Vancouver, Canada. Shengyu Jia, Tao Meng, Jieyu Zhao, and Kai-Wei Chang. 2020. Mitigating Gender Bias Amplifcation in Distribution by Posterior Regularization. In Pro­ ceedings of the Association for Computational Lin­ guistics (ACL). Taylor Jones, Jessica Rose Kalbfeld, Ryan Hancock, and Robin Clark. 2019. Testifying while black: An experimental study of court reporter accuracy in tran­ scription of African American English. Language, 95(2). Anna Jørgensen, Dirk Hovy, and Anders Søgaard. 2015. Challenges of studying and processing dialects in social media. In Proceedings of the Workshop on Noisy User-Generated Text, pages 9–18, Beijing, China. Anna Jørgensen, Dirk Hovy, and Anders Søgaard. 2016. Learning a POS tagger for AAVE-like language. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 1115– 1120, San Diego, CA. Pratik Joshi, Sebastian Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The State and Fate of Linguistic Diversity and Inclusion in the NLP World. In Proceedings of the Association for Computational Linguistics (ACL). Jaap Jumelet, Willem Zuidema, and Dieuwke Hupkes. 2019. Analysing Neural Language Models: Contex­ tual Decomposition Reveals Default Reasoning in Number and Gender Assignment. In Proceedings of the Conference on Natural Language Learning, Hong Kong, China. Marie-Odile Junker. 2018. Participatory action re­ search for Indigenous linguistics in the digital age. In Shannon T. Bischoff and Carmen Jany, editors, Insights from Practices in Community-Based Re­ search, pages 164–175. De Gruyter Mouton. David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. 2017. Incorporating Dialectal Variability for Socially Equi­ table Language Identifcation. In Proceedings of the Association for Computational Linguistics (ACL), pages 51–57, Vancouver, Canada. Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. In Proceedings of the Association for Computational Linguistics (ACL), pages 1641–1650, Florence, Italy. Saket Karve, Lyle Ungar, and João Sedoc. 2019. Con­ ceptor debiasing of word representations evaluated on WEAT. In Proceedings of the Workshop on Gen­ der Bias in Natural Language Processing, pages 40– 48, Florence, Italy. Michael Katell, Meg Young, Dharma Dailey, Bernease Herman, Vivian Guetler, Aaron Tam, Corinne Bintz, Danielle Raz, and P.M. Krafft. 2020. Toward sit­ uated interventions for algorithmic equity: lessons from the feld. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 45–55, Barcelona, Spain. Stephen Kemmis. 2006. Participatory action research and the public sphere. Educational Action Research, 14(4):459–476. Os Keyes. 2018. The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition. Proceedings of the ACM on HumanComputer Interaction, 2(CSCW). Os Keyes, Josephine Hoy, and Margaret Drouhard. 2019. Human-Computer Insurrection: Notes on an Anarchist HCI. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), Glas­ gow, Scotland, UK. Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santi­ ago, and Vivek Datta. 2020. Intersectional Bias in Hate Speech and Abusive Language Datasets. In Proceedings of the Association for Computational Linguistics (ACL). Svetlana Kiritchenko and Saif M. Mohammad. 2018. Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems. In Proceedings of the Joint Conference on Lexical and Computational Se­ mantics, pages 43–53, New Orleans, LA. Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically Categorizing Writ­ ten Texts by Author Gender. Literary and Linguistic Computing, 17(4):401–412. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W. Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceed­ ings of the Workshop on Gender Bias for Natu­ ral Language Processing, pages 166–172, Florence, Italy. Sonja L. Lanehart and Ayesha M. Malik. 2018. Black Is, Black Isn’t: Perceptions of Language and Black­ ness. In Jeffrey Reaser, Eric Wilbanks, Karissa Woj­ cik, and Walt Wolfram, editors, Language Variety in the New South. University of North Carolina Press. 5468 Brian N. Larson. 2017. Gender as a variable in naturallanguage processing: Ethical considerations. In Pro­ ceedings of the Workshop on Ethics in Natural Lan­ guage Processing, pages 30–40, Valencia, Spain. Anne Lauscher and Goran Glavaš. 2019. Are We Con­ sistently Biased? Multidimensional Analysis of Bi­ ases in Distributional Word Vectors. In Proceedings of the Joint Conference on Lexical and Computa­ tional Semantics, pages 85–91, Minneapolis, MN. Anne Lauscher, Goran Glavaš, Simone Paolo Ponzetto, and Ivan Vuli´c. 2019. A General Framework for Im­ plicit and Explicit Debiasing of Distributional Word Vector Spaces. arXiv preprint arXiv:1909.06092. Christopher A. Le Dantec, Erika Shehan Poole, and Su­ san P. Wyche. 2009. Values as Lived Experience: Evolving Value Sensitive Design in Support of Value Discovery. In Proceedings of the Conference on Hu­ man Factors in Computing Systems (CHI), Boston, MA. Nayeon Lee, Andrea Madotto, and Pascale Fung. 2019. Exploring Social Bias in Chatbots using Stereotype Knowledge. In Proceedings of the Workshop on Widening NLP, pages 177–180, Florence, Italy. Wesley Y. Leonard. 2012. Reframing language recla­ mation programmes for everybody’s empowerment. Gender and Language, 6(2):339–367. Paul Pu Liang, Irene Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2019. Towards Debiasing Sentence Representations. In Proceedings of the NeurIPS Workshop on HumanCentric Machine Learning, Vancouver, Canada. Rosina Lippi-Green. 2012. English with an Ac­ cent: Language, Ideology, and Discrimination in the United States. Routledge. Bo Liu. 2019. Anonymized BERT: An Augmentation Approach to the Gendered Pronoun Resolution Chal­ lenge. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 120– 125, Florence, Italy. Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zi­ tao Liu, and Jiliang Tang. 2019. Does Gender Mat­ ter? Towards Fairness in Dialogue Systems. arXiv preprint arXiv:1910.10486. Felipe Alfaro Lois, José A.R. Fonollosa, and Costa-jà. 2019. BERT Masked Language Modeling for Coreference Resolution. In Proceedings of the Work­ shop on Gender Bias in Natural Language Process­ ing, pages 76–81, Florence, Italy. Brandon C. Loudermilk. 2015. Implicit attitudes and the perception of sociolinguistic variation. In Alexei Prikhodkine and Dennis R. Preston, editors, Re­ sponses to Language Varieties: Variability, pro­ cesses and outcomes, pages 137–156. Anastassia Loukina, Nitin Madnani, and Klaus Zech­ ner. 2019. The many dimensions of algorithmic fair­ ness in educational applications. In Proceedings of the Workshop on Innovative Use of NLP for Build­ ing Educational Applications, pages 1–10, Florence, Italy. Kaiji Lu, Peter Mardziel, Fangjing Wu, Preetam Aman­ charla, and Anupam Datta. 2018. Gender bias in neural natural language processing. arXiv preprint arXiv:1807.11714. Anne Maass. 1999. Linguistic intergroup bias: Stereo­ type perpetuation through language. Advances in Experimental Social Psychology, 31:79–121. Nitin Madnani, Anastassia Loukina, Alina von Davier, Jill Burstein, and Aoife Cahill. 2017. Building Bet­ ter Open-Source Tools to Support Fairness in Auto­ mated Scoring. In Proceedings of the Workshop on Ethics in Natural Language Processing, pages 41– 52, Valencia, Spain. Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W. Black. 2019. Black is to Criminal as Cau­ casian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 801–809, Minneapolis, MN. Ramón Antonio Martínez and Alexander Feliciano Mejía. 2019. Looking closely and listening care­ fully: A sociocultural approach to understanding the complexity of Latina/o/x students’ everyday lan­ guage. Theory Into Practice. Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It’s All in the Name: Mit­ igating Gender Bias with Name-Based Counterfac­ tual Data Substitution. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 5270–5278, Hong Kong, China. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On Measur­ ing Social Biases in Sentence Encoders. In Proceed­ ings of the North American Association for Compu­ tational Linguistics (NAACL), pages 629–634, Min­ neapolis, MN. Elijah Mayfeld, Michael Madaio, Shrimai Prab­ humoye, David Gerritsen, Brittany McLaughlin, Ezekiel Dixon-Roman, and Alan W. Black. 2019. Equity Beyond Bias in Language Technologies for Education. In Proceedings of the Workshop on Inno­ vative Use of NLP for Building Educational Appli­ cations, Florence, Italy. Katherine McCurdy and O˘guz Serbetçi. 2017. Gram­ matical gender associations outweigh topical gender bias in crosslinguistic word embeddings. In Pro­ ceedings of the Workshop for Women & Underrepre­ sented Minorities in Natural Language Processing, Vancouver, Canada. 5469 Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, and Aram Galstyan. 2019. Man is to Person as Woman is to Location: Measuring Gender Bias in Named Entity Recognition. arXiv preprint arXiv:1910.10872. Michela Menegatti and Monica Rubini. 2017. Gender bias and sexism in language. In Oxford Research Encyclopedia of Communication. Oxford University Press. Inom Mirzaev, Anthony Schulte, Michael Conover, and Sam Shah. 2019. Considerations for the interpreta­ tion of bias measures of word embeddings. arXiv preprint arXiv:1906.08379. Salikoko S. Mufwene, Guy Bailey, and John R. Rickford, editors. 1998. African-American English: Structure, History, and Use. Routledge. Michael J. Muller. 2007. Participatory Design: The Third Space in HCI. In The Human-Computer Inter­ action Handbook, pages 1087–1108. CRC Press. Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. StereoSet: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456. Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. “How Old Do You Think I Am?”: A Study of Language and Age in Twitter. In Proceedings of the Conference on Web and Social Media (ICWSM), pages 439–448, Boston, MA. Malvina Nissim, Rik van Noord, and Rob van der Goot. 2020. Fair is better than sensational: Man is to doc­ tor as woman is to doctor. Computational Linguis­ tics. Debora Nozza, Claudia Volpetti, and Elisabetta Fersini. 2019. Unintended Bias in Misogyny Detection. In Proceedings of the Conference on Web Intelligence, pages 149–155. Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kıcıman. 2019. Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries. Frontiers in Big Data, 2. Alexandra Olteanu, Kartik Talamadupula, and Kush R. Varshney. 2017. The Limits of Abstract Evaluation Metrics: The Case of Hate Speech Detection. In Proceedings of the ACM Web Science Conference, Troy, NY. Orestis Papakyriakopoulos, Simon Hegelich, Juan Car­ los Medina Serrano, and Fabienne Marco. 2020. Bias in word embeddings. In Proceedings of the Conference on Fairness, Accountability, and Trans­ parency, pages 446–457, Barcelona, Spain. Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Re­ ducing Gender Bias in Abusive Language Detection. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 2799–2804, Brussels, Belgium. Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent Disagreements in Human Textual Inferences. Trans­ actions of the Association for Computational Lin­ guistics, 7:677–694. Xiangyu Peng, Siyan Li, Spencer Frazier, and Mark Riedl. 2020. Fine-Tuning a Transformer-Based Lan­ guage Model to Avoid Generating Non-Normative Text. arXiv preprint arXiv:2001.08764. Radomir Popovic,´ Florian Lemmerich, and Markus Strohmaier. 2020. Joint Multiclass Debiasing of Word Embeddings. In Proceedings of the Interna­ tional Symposium on Intelligent Systems, Graz, Aus­ tria. Vinodkumar Prabhakaran, Ben Hutchinson, and Mar­ garet Mitchell. 2019. Perturbation Sensitivity Anal­ ysis to Detect Unintended Model Biases. In Pro­ ceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 5744–5749, Hong Kong, China. Shrimai Prabhumoye, Elijah Mayfeld, and Alan W. Black. 2019. Principled Frameworks for Evaluating Ethics in NLP Systems. In Proceedings of the Work­ shop on Innovative Use of NLP for Building Educa­ tional Applications, Florence, Italy. Marcelo Prates, Pedro Avelar, and Luis C. Lamb. 2019. Assessing gender bias in machine translation: A case study with google translate. Neural Computing and Applications. Rasmus Précenth. 2019. Word embeddings and gender stereotypes in Swedish and English. Master’s thesis, Uppsala University. Dennis R. Preston. 2009. Are you really smart (or stupid, or cute, or ugly, or cool)? Or do you just talk that way? Language attitudes, standardization and language change. Oslo: Novus forlag, pages 105– 129. Flavien Prost, Nithum Thain, and Tolga Bolukbasi. 2019. Debiasing Embeddings for Reduced Gender Bias in Text Classifcation. In Proceedings of the Workshop on Gender Bias in Natural Language Pro­ cessing, pages 69–75, Florence, Italy. Reid Pryzant, Richard Diehl Martinez, Nathan Dass, Sadao Kurohashi, Dan Jurafsky, and Diyi Yang. 2020. Automatically Neutralizing Subjective Bias in Text. In Proceedings of the AAAI Conference on Artifcial Intelligence (AAAI), New York, NY. Arun K. Pujari, Ansh Mittal, Anshuman Padhi, An­ shul Jain, Mukesh Jadon, and Vikas Kumar. 2019. Debiasing Gender biased Hindi Words with Wordembedding. In Proceedings of the International Conference on Algorithms, Computing and Artifcial Intelligence, pages 450–456. Yusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun. 2019. Reducing gender bias in word-level 5470 language models with a gender-equalizing loss func­ tion. In Proceedings of the ACL Student Research Workshop, pages 223–228, Florence, Italy. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. In Proceedings of the Association for Computational Linguistics (ACL). John R. Rickford and Sharese King. 2016. Language and linguistics on trial: Hearing Rachel Jeantel (and other vernacular speakers) in the courtroom and be­ yond. Language, 92(4):948–988. Anthony Rios. 2020. FuzzE: Fuzzy Fairness Evalua­ tion of Offensive Language Classifers on AfricanAmerican English. In Proceedings of the AAAI Con­ ference on Artifcial Intelligence (AAAI), New York, NY. Gerald Roche. 2019. Articulating language oppres­ sion: colonialism, coloniality and the erasure of Ti­ betâA˘ Zs minority languages. Patterns of Prejudice. ´ Alexey Romanov, Maria De-Arteaga, Hanna Wal­ lach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kentha­ padi, Anna Rumshisky, and Adam Tauman Kalai. 2019. What’s in a Name? Reducing Bias in Bios without Access to Protected Attributes. In Proceed­ ings of the North American Association for Com­ putational Linguistics (NAACL), pages 4187–4195, Minneapolis, MN. Jonathan Rosa. 2019. Contesting Representations of Migrant “Illegality” through the Drop the I-Word Campaign: Rethinking Language Change and So­ cial Change. In Netta Avineri, Laura R. Graham, Eric J. Johnson, Robin Conley Riner, and Jonathan Rosa, editors, Language and Social Justice in Prac­ tice. Routledge. Jonathan Rosa and Christa Burdick. 2017. Language Ideologies. In Ofelia García, Nelson Flores, and Massimiliano Spotti, editors, The Oxford Handbook of Language and Society. Oxford University Press. Jonathan Rosa and Nelson Flores. 2017. Unsettling race and language: Toward a raciolinguistic perspec­ tive. Language in Society, 46:621–647. Sara Rosenthal and Kathleen McKeown. 2011. Age Prediction in Blogs: A Study of Style, Content, and Online Behavior in Pre- and Post-Social Media Gen­ erations. In Proceedings of the North American As­ sociation for Computational Linguistics (NAACL), pages 763–772, Portland, OR. Candace Ross, Boris Katz, and Andrei Barbu. 2020. Measuring Social Biases in Grounded Vi­ sion and Language Embeddings. arXiv preprint arXiv:2002.08911. Richard Rothstein. 2017. The Color of Law: A For­ gotten History of How Our Government Segregated America. Liveright Publishing. David Rozado. 2020. Wide range screening of algo­ rithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types. PLOS One. Elayne Ruane, Abeba Birhane, and Anthony Ven­ tresque. 2019. Conversational AI: Social and Ethi­ cal Considerations. In Proceedings of the Irish Con­ ference on Artifcial Intelligence and Cognitive Sci­ ence, Galway, Ireland. Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural lan­ guage inferences. In Proceedings of the Workshop on Ethics in Natural Language Processing, pages 74–79, Valencia, Spain. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender Bias in Coreference Resolution. In Proceedings of the North American Association for Computational Lin­ guistics (NAACL), pages 8–14, New Orleans, LA. Elizabeth B.N. Sanders. 2002. From user-centered to participatory design approaches. In Jorge Frascara, editor, Design and the Social Sciences: Making Con­ nections, pages 18–25. CRC Press. Brenda Salenave Santana, Vinicius Woloszyn, and Le­ andro Krug Wives. 2018. Is there gender bias and stereotype in Portuguese word embeddings? In Proceedings of the International Conference on the Computational Processing of Portuguese Student Re­ search Workshop, Canela, Brazil. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the Asso­ ciation for Computational Linguistics (ACL), pages 1668–1678, Florence, Italy. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Juraf­ sky, Noah A. Smith, and Yejin Choi. 2020. Social Bias Frames: Reasoning about Social and Power Im­ plications of Language. In Proceedings of the Asso­ ciation for Computational Linguistics (ACL). Hanna Sassaman, Jennifer Lee, Jenessa Irvine, and Shankar Narayan. 2020. Creating CommunityBased Tech Policy: Case Studies, Lessons Learned, and What Technologists and Communities Can Do Together. In Proceedings of the Conference on Fair­ ness, Accountability, and Transparency, Barcelona, Spain. Danielle Saunders and Bill Byrne. 2020. Reducing Gender Bias in Neural Machine Translation as a Do­ main Adaptation Problem. In Proceedings of the As­ sociation for Computational Linguistics (ACL). Tyler Schnoebelen. 2017. Goal-Oriented Design for Ethical Machine Learning and NLP. In Proceedings of the Workshop on Ethics in Natural Language Pro­ cessing, pages 88–93, Valencia, Spain. 5471 Sabine Sczesny, Magda Formanowicz, and Franziska Moser. 2016. Can gender-fair language reduce gen­ der stereotyping and discrimination? Frontiers in Psychology, 7. João Sedoc and Lyle Ungar. 2019. The Role of Pro­ tected Class Word Lists in Bias Identifcation of Con­ textualized Word Representations. In Proceedings of the Workshop on Gender Bias in Natural Lan­ guage Processing, pages 55–61, Florence, Italy. Procheta Sen and Debasis Ganguly. 2020. Towards So­ cially Responsible AI: Cognitive Bias-Aware MultiObjective Learning. In Proceedings of the AAAI Conference on Artifcial Intelligence (AAAI), New York, NY. Deven Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive Biases in Natural Language Pro­ cessing Models: A Conceptual Framework and Overview. In Proceedings of the Association for Computational Linguistics (ACL). Judy Hanwen Shen, Lauren Fratamico, Iyad Rahwan, and Alexander M. Rush. 2018. Darling or Babygirl? Investigating Stylistic Bias in Sentiment Anal­ ysis. In Proceedings of the Workshop on Fairness, Accountability, and Transparency (FAT/ML), Stock­ holm, Sweden. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The Woman Worked as a Babysitter: On Biases in Language Generation. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 3398–3403, Hong Kong, China. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2020. Towards Controllable Biases in Language Generation. arXiv preprint arXiv:2005.00268. Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, and Il-Chul Moon. 2020. Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Genera­ tion. arXiv preprint arXiv:2004.03133. Jesper Simonsen and Toni Robertson, editors. 2013. Routledge International Handbook of Participatory Design. Routledge. Gabriel Stanovsky, Noah A. Smith, and Luke Zettle­ moyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the Association for Computational Linguistics (ACL), pages 1679–1684, Florence, Italy. Yolande Strengers, Lizhe Qu, Qiongkai Xu, and Jarrod Knibbe. 2020. Adhering, Steering, and Queering: Treatment of Gender in Natural Language Genera­ tion. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), Honolulu, HI. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating Gender Bias in Natural Lan­ guage Processing: Literature Review. In Proceed­ ings of the Association for Computational Linguis­ tics (ACL), pages 1630–1640, Florence, Italy. Adam Sutton, Thomas Lansdall-Welfare, and Nello Cristianini. 2018. Biased embeddings from wild data: Measuring, understanding and removing. In Proceedings of the International Symposium on Intelligent Data Analysis, pages 328–339, ’sHertogenbosch, Netherlands. Chris Sweeney and Maryam Najafan. 2019. A Trans­ parent Framework for Evaluating Unintended De­ mographic Bias in Word Embeddings. In Proceed­ ings of the Association for Computational Linguis­ tics (ACL), pages 1662–1667, Florence, Italy. Chris Sweeney and Maryam Najafan. 2020. Reduc­ ing sentiment polarity for demographic attributes in word embeddings using adversarial learning. In Proceedings of the Conference on Fairness, Ac­ countability, and Transparency, pages 359–368, Barcelona, Spain. Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan, Mark D.M. Leiserson, and Adam Tauman Kalai. 2019. What are the biases in my word embedding? In Proceedings of the Conference on Artifcial Intelligence, Ethics, and Society (AIES), Honolulu, HI. Samson Tan, Shafq Joty, Min-Yen Kan, and Richard Socher. 2020. It’s Morphin’ Time! Combating Linguistic Discrimination with Infectional Perturba­ tions. In Proceedings of the Association for Compu­ tational Linguistics (ACL). Yi Chern Tan and L. Elisa Celis. 2019. Assessing Social and Intersectional Biases in Contextualized Word Representations. In Proceedings of the Con­ ference on Neural Information Processing Systems, Vancouver, Canada. J. Michael Terry, Randall Hendrick, Evangelos Evan­ gelou, and Richard L. Smith. 2010. Variable dialect switching among African American chil­ dren: Inferences about working memory. Lingua, 120(10):2463–2475. Joel Tetreault, Daniel Blanchard, and Aoife Cahill. 2013. A Report on the First Native Language Iden­ tifcation Shared Task. In Proceedings of the Work­ shop on Innovative Use of NLP for Building Educa­ tional Applications, pages 48–57, Atlanta, GA. Mike Thelwall. 2018. Gender Bias in Sentiment Anal­ ysis. Online Information Review, 42(1):45–57. Kristen Vaccaro, Karrie Karahalios, Deirdre K. Mul­ ligan, Daniel Kluttz, and Tad Hirsch. 2019. Contestability in Algorithmic Systems. In Conference Companion Publication of the 2019 on Computer 5472 Supported Cooperative Work and Social Computing, pages 523–527, Austin, TX. Ameya Vaidya, Feng Mai, and Yue Ning. 2019. Em­ pirical Analysis of Multi-Task Learning for Reduc­ ing Model Bias in Toxic Comment Detection. arXiv preprint arXiv:1909.09758v2. Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting Gender Right in Neural Ma­ chine Translation. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 3003–3008, Brussels, Belgium. Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stu­ art Shieber. 2020. Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias. arXiv preprint arXiv:2004.12265. Tianlu Wang, Xi Victoria Lin, Nazneen Fatema Ra­ jani, Bryan McCann, Vicente Ordonez, and Caim­ ing Xiong. 2020. Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation. In Proceedings of the Association for Computational Linguistics (ACL). Zili Wang. 2019. MSnet: A BERT-based Network for Gendered Pronoun Resolution. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 89–95, Florence, Italy. Kellie Webster, Marta R. Costa-jussà, Christian Hard­ meier, and Will Radford. 2019. Gendered Ambigu­ ous Pronoun (GAP) Shared Task at the Gender Bias in NLP Workshop 2019. In Proceedings of the Work­ shop on Gender Bias in Natural Language Process­ ing, pages 1–7, Florence, Italy. Kellie Webster, Marta Recasens, Vera Axelrod, and Ja­ son Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. Transac­ tions of the Association for Computational Linguis­ tics, 6:605–618. Walt Wolfram and Natalie Schilling. 2015. American English: Dialects and Variation, 3 edition. Wiley Blackwell. Austin P. Wright, Omar Shaikh, Haekyu Park, Will Ep­ person, Muhammed Ahmed, Stephane Pinel, Diyi Yang, and Duen Horng (Polo) Chau. 2020. RE­ CAST: Interactive Auditing of Automatic Toxicity Detection Models. In Proceedings of the Con­ ference on Human Factors in Computing Systems (CHI), Honolulu, HI. Yinchuan Xu and Junlin Yang. 2019. Look again at the syntax: Relational graph convolutional network for gendered ambiguous pronoun resolution. In Pro­ ceedings of the Workshop on Gender Bias in Natu­ ral Language Processing, pages 96–101, Florence, Italy. Kai-Chou Yang, Timothy Niven, Tzu-Hsuan Chou, and Hung-Yu Kao. 2019. Fill the GAP: Exploiting BERT for Pronoun Resolution. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 102–106, Florence, Italy. Zekun Yang and Juan Feng. 2020. A Causal Inference Method for Reducing Gender Bias in Word Embed­ ding Relations. In Proceedings of the AAAI Con­ ference on Artifcial Intelligence (AAAI), New York, NY. Daisy Yoo, Anya Ernest, Sofa Serholt, Eva Eriksson, and Peter Dalsgaard. 2019. Service Design in HCI Research: The Extended Value Co-creation Model. In Proceedings of the Halfway to the Future Sympo­ sium, Nottingham, United Kingdom. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with ad­ versarial learning. In Proceedings of the Conference on Artifcial Intelligence, Ethics, and Society (AIES), New Orleans, LA. Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Con­ ghui Zhu, and Tiejun Zhao. 2020a. Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classifcations with Instance Weighting. In Proceedings of the Association for Computational Linguistics (ACL). Haoran Zhang, Amy X. Lu, Mohamed Abdalla, Matthew McDermott, and Marzyeh Ghassemi. 2020b. Hurtful Words: Quantifying Biases in Clin­ ical Contextual Word Embeddings. In Proceedings of the ACM Conference on Health, Inference, and Learning. Jieyu Zhao, Subhabrata Mukherjee, Saghar Hosseini, Kai-Wei Chang, and Ahmed Hassan Awadallah. 2020. Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer. In Proceedings of the Asso­ ciation for Computational Linguistics (ACL). Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot­ terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender Bias in Contextualized Word Embeddings. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 629– 634, Minneapolis, MN. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or­ donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplifcation us­ ing corpus-level constraints. In Proceedings of Empirical Methods in Natural Language Process­ ing (EMNLP), pages 2979–2989, Copenhagen, Den­ mark. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or­ donez, and Kai-Wei Chang. 2018a. Gender Bias in Coreference Resolution: Evaluation and Debias­ ing Methods. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 15–20, New Orleans, LA. 5473 Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning Gender-Neutral Word Embeddings. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 4847–4853, Brussels, Belgium. Alina Zhiltsova, Simon Caton, and Catherine Mulwa. 2019. Mitigation of Unintended Biases against NonNative English Texts in Sentiment Analysis. In Pro­ ceedings of the Irish Conference on Artifcial Intelli­ gence and Cognitive Science, Galway, Ireland. Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, and Kai-Wei Chang. 2019. Examin­ ing gender bias in languages with grammatical gen­ ders. In Proceedings of Empirical Methods in Nat­ ural Language Processing (EMNLP), pages 5279– 5287, Hong Kong, China. Ran Zmigrod, S. J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the Association for Computational Linguistics (ACL), pages 1651– 1661, Florence, Italy. A Appendix In Table 3, we provide examples of the papers’ mo­ tivations and techniques across several NLP tasks. A.1 Categorization details In this section, we provide some additional details about our method—specifcally, our categorization. What counts as being covered by an NLP task? We considered a paper to cover a given NLP task if it analyzed “bias” with respect to that task, but not if it only evaluated overall performance on that task. For example, a paper examining the impact of miti­ gating “bias” in word embeddings on “bias” in sen­ timent analysis would be counted as covering both NLP tasks. In contrast, a paper assessing whether performance on sentiment analysis degraded after mitigating “bias” in word embeddings would be counted only as focusing on embeddings. What counts as a motivation? We considered a motivation to include any description of the prob­ lem that motivated the paper or proposed quantita­ tive technique, including any normative reasoning. We excluded from the “Vague/unstated” cate­ gory of motivations the papers that participated in the Gendered Ambiguous Pronoun (GAP) Shared Task at the First ACL Workshop on Gender Bias in NLP. In an ideal world, shared task papers would engage with “bias” more critically, but given the nature of shared tasks it is understandable that they do not. As a result, we excluded them from our counts for techniques as well. We cite the papers here; most propose techniques we would have cate­ gorized as “Questionable correlations,” with a few as “Other representational harms” (Abzaliev, 2019; Attree, 2019; Bao and Qiao, 2019; Chada, 2019; Ionita et al., 2019; Liu, 2019; Lois et al., 2019; Wang, 2019; Xu and Yang, 2019; Yang et al., 2019). We excluded Dabas et al. (2020) from our survey because we could not determine what this paper’s user study on fairness was actually measuring. Finally, we actually categorized the motivation for Liu et al. (2019) (i.e., the last row in Table 3) as “Questionable correlations” due to a sentence else­ where in the paper; had the paragraph we quoted been presented without more detail, we would have categorized the motivation as “Vague/unstated.” A.2 Full categorization: Motivations Allocational harms Hovy and Spruit (2016); Caliskan et al. (2017); Madnani et al. (2017); Dixon et al. (2018); Kiritchenko and Mohammad (2018); Shen et al. (2018); Zhao et al. (2018b); Bhaskaran and Bhallamudi (2019); Bordia and Bowman (2019); Brunet et al. (2019); Chaloner and Maldonado (2019); De-Arteaga et al. (2019); Dev and Phillips (2019); Font and Costa-jussà (2019); James-Sorenson and Alvarez-Melis (2019); Kurita et al. (2019); Mayfeld et al. (2019); Pu­ jari et al. (2019); Romanov et al. (2019); Ruane et al. (2019); Sedoc and Ungar (2019); Sun et al. (2019); Zmigrod et al. (2019); Hutchinson et al. (2020); Papakyriakopoulos et al. (2020); Ravfo­ gel et al. (2020); Strengers et al. (2020); Sweeney and Najafan (2020); Tan et al. (2020); Zhang et al. (2020b). Stereotyping Bolukbasi et al. (2016a,b); Caliskan et al. (2017); McCurdy and Serbetçi (2017); Rudinger et al. (2017); Zhao et al. (2017); Curry and Rieser (2018); Díaz et al. (2018); Santana et al. (2018); Sutton et al. (2018); Zhao et al. (2018a,b); Agarwal et al. (2019); Basta et al. (2019); Bhaskaran and Bhallamudi (2019); Bordia and Bowman (2019); Brunet et al. (2019); Cao and Daumé (2019); Chaloner and Maldonado (2019); Cho et al. (2019); Dev and Phillips (2019); Font and Costa-jussà (2019); Gonen and Goldberg (2019); James-Sorenson and Alvarez-Melis (2019); Kaneko and Bollegala (2019); Karve et al. (2019); Kurita et al. (2019); Lauscher and Glavaš (2019); Lee et al. (2019); Manzini et al. (2019); Mayfeld 5474 Categories NLP task Stated motivation Motivations Techniques Language modeling (Bordia and Bowman, 2019) Sentiment analysis (Kiritchenko and Mohammad, 2018) Machine translation (Cho et al., 2019) Machine translation (Stanovsky et al., 2019) Type-level embeddings (Zhao et al., 2018b) Type-level and contextu­ alized embeddings (May et al., 2019) Dialogue generation (Liu et al., 2019) “Existing biases in data can be amplifed by models and the resulting output consumed by the public can infuence them, en­ courage and reinforce harmful stereotypes, or distort the truth. Automated systems that depend on these models can take prob­ lematic actions based on biased profling of individuals.” “Other biases can be inappropriate and result in negative ex­ periences for some groups of people. Examples include, loan eligibility and crime recidivism prediction systems...and resumé sorting systems that believe that men are more qualifed to be programmers than women (Bolukbasi et al., 2016). Similarly, sentiment and emotion analysis systems can also perpetuate and accentuate inappropriate human biases, e.g., systems that consider utterances from one race or gender to be less positive simply be­ cause of their race or gender, or customer support systems that prioritize a call from an angry male over a call from the equally angry female.” “[MT training] may incur an association of gender-specifed pro­ nouns (in the target) and gender-neutral ones (in the source) for lexicon pairs that frequently collocate in the corpora. We claim that this kind of phenomenon seriously threatens the fairness of a translation system, in the sense that it lacks generality and inserts social bias to the inference. Moreover, the input is not fully cor­ rect (considering gender-neutrality) and might offend the users who expect fairer representations.” “Learned models exhibit social bias when their training data encode stereotypes not relevant for the task, but the correlations are picked up anyway.” “However, embeddings trained on human-generated corpora have been demonstrated to inherit strong gender stereotypes that re­ fect social constructs....Such a bias substantially affects down­ stream applications....This concerns the practitioners who use the embedding model to build gender-sensitive applications such as a resume fltering system or a job recommendation system as the automated system may discriminate candidates based on their gender, as refected by their name. Besides, biased embeddings may implicitly affect downstream applications used in our daily lives. For example, when searching for ‘computer scientist’ using a search engine...a search algorithm using an embedding model in the backbone tends to rank male scientists higher than females’ [sic], hindering women from being recognized and further exac­ erbating the gender inequality in the community.” “[P]rominent word embeddings such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) encode systematic biases against women and black people (Bolukbasi et al., 2016; Garg et al., 2018), implicating many NLP systems in scaling up social injustice.” “Since the goal of dialogue systems is to talk with users...if the systems show discriminatory behaviors in the interactions, the user experience will be adversely affected. Moreover, public com­ mercial chatbots can get resisted for their improper speech.” Allocational harms, stereotyping Allocational harms, other representational harms (system performance differences w.r.t. text written by different social groups) Questionable correlations, other representational harms Stereotyping, questionable correlations Allocational harms, stereotyping, other representational harms Vague Vague/unstated Questionable correlations Questionable correlations (differences in sentiment intensity scores w.r.t. text about different social groups) Questionable correlations Stereotyping, other representational harms (system performance differences), questionable correlations Stereotyping Stereotyping Stereotyping, other representational harms, questionable correlations Table 3: Examples of the categories into which the papers’ motivations and proposed quantitative techniques for measuring or mitigating “bias” fall. Bold text in the quotes denotes the content that yields our categorizations. 5475 et al. (2019); Précenth (2019); Pujari et al. (2019); Ruane et al. (2019); Stanovsky et al. (2019); Sun et al. (2019); Tan and Celis (2019); Webster et al. (2019); Zmigrod et al. (2019); Gyamf et al. (2020); Hube et al. (2020); Hutchinson et al. (2020); Kim et al. (2020); Nadeem et al. (2020); Papakyriakopoulos et al. (2020); Ravfogel et al. (2020); Rozado (2020); Sen and Ganguly (2020); Shin et al. (2020); Strengers et al. (2020). Other representational harms Hovy and Sø­ gaard (2015); Blodgett et al. (2016); Bolukbasi et al. (2016b); Hovy and Spruit (2016); Blodgett and O’Connor (2017); Larson (2017); Schnoebelen (2017); Blodgett et al. (2018); Curry and Rieser (2018); Díaz et al. (2018); Dixon et al. (2018); Kir­ itchenko and Mohammad (2018); Park et al. (2018); Shen et al. (2018); Thelwall (2018); Zhao et al. (2018b); Badjatiya et al. (2019); Bagdasaryan et al. (2019); Bamman et al. (2019); Cao and Daumé (2019); Chaloner and Maldonado (2019); Cho et al. (2019); Davidson et al. (2019); De-Arteaga et al. (2019); Fisher (2019); Font and Costa-jussà (2019); Garimella et al. (2019); Loukina et al. (2019); Mayfeld et al. (2019); Mehrabi et al. (2019); Nozza et al. (2019); Prabhakaran et al. (2019); Romanov et al. (2019); Ruane et al. (2019); Sap et al. (2019); Sheng et al. (2019); Sun et al. (2019); Sweeney and Najafan (2019); Vaidya et al. (2019); Gaut et al. (2020); Gencoglu (2020); Hovy et al. (2020); Hutchinson et al. (2020); Kim et al. (2020); Peng et al. (2020); Rios (2020); Sap et al. (2020); Shah et al. (2020); Sheng et al. (2020); Tan et al. (2020); Zhang et al. (2020a,b). Questionable correlations Jørgensen et al. (2015); Hovy and Spruit (2016); Madnani et al. (2017); Rudinger et al. (2017); Zhao et al. (2017); Burns et al. (2018); Dixon et al. (2018); Kir­ itchenko and Mohammad (2018); Lu et al. (2018); Park et al. (2018); Shen et al. (2018); Zhang et al. (2018); Badjatiya et al. (2019); Bhargava and Forsyth (2019); Cao and Daumé (2019); Cho et al. (2019); Davidson et al. (2019); Dev et al. (2019); Garimella et al. (2019); Garg et al. (2019); Huang et al. (2019); James-Sorenson and AlvarezMelis (2019); Kaneko and Bollegala (2019); Liu et al. (2019); Karve et al. (2019); Nozza et al. (2019); Prabhakaran et al. (2019); Romanov et al. (2019); Sap et al. (2019); Sedoc and Ungar (2019); Stanovsky et al. (2019); Sweeney and Najafan (2019); Vaidya et al. (2019); Zhiltsova et al. (2019); Chopra et al. (2020); Gonen and Webster (2020); Gyamf et al. (2020); Hube et al. (2020); Ravfogel et al. (2020); Rios (2020); Ross et al. (2020); Saun­ ders and Byrne (2020); Sen and Ganguly (2020); Shah et al. (2020); Sweeney and Najafan (2020); Yang and Feng (2020); Zhang et al. (2020a). Vague/unstated Rudinger et al. (2018); Webster et al. (2018); Dinan et al. (2019); Florez (2019); Jumelet et al. (2019); Lauscher et al. (2019); Liang et al. (2019); Maudslay et al. (2019); May et al. (2019); Prates et al. (2019); Prost et al. (2019); Qian et al. (2019); Swinger et al. (2019); Zhao et al. (2019); Zhou et al. (2019); Ethayarajh (2020); Huang et al. (2020); Jia et al. (2020); Popovi´c et al. (2020); Pryzant et al. (2020); Vig et al. (2020); Wang et al. (2020); Zhao et al. (2020). Surveys, frameworks, and meta-analyses Hovy and Spruit (2016); Larson (2017); McCurdy and Serbetçi (2017); Schnoebelen (2017); Basta et al. (2019); Ethayarajh et al. (2019); Gonen and Goldberg (2019); Lauscher and Glavaš (2019); Loukina et al. (2019); Mayfeld et al. (2019); Mirzaev et al. (2019); Prabhumoye et al. (2019); Ruane et al. (2019); Sedoc and Ungar (2019); Sun et al. (2019); Nissim et al. (2020); Rozado (2020); Shah et al. (2020); Strengers et al. (2020); Wright et al. (2020). B Full categorization: Techniques Allocational harms De-Arteaga et al. (2019); Prost et al. (2019); Romanov et al. (2019); Zhao et al. (2020). Stereotyping Bolukbasi et al. (2016a,b); Caliskan et al. (2017); McCurdy and Serbetçi (2017); Díaz et al. (2018); Santana et al. (2018); Sutton et al. (2018); Zhang et al. (2018); Zhao et al. (2018a,b); Agarwal et al. (2019); Basta et al. (2019); Bhaskaran and Bhallamudi (2019); Brunet et al. (2019); Cao and Daumé (2019); Chaloner and Maldonado (2019); Dev and Phillips (2019); Ethayarajh et al. (2019); Gonen and Goldberg (2019); James-Sorenson and Alvarez-Melis (2019); Jumelet et al. (2019); Kaneko and Bollegala (2019); Karve et al. (2019); Kurita et al. (2019); Lauscher and Glavaš (2019); Lauscher et al. (2019); Lee et al. (2019); Liang et al. (2019); Liu et al. (2019); Manzini et al. (2019); Maudslay et al. (2019); May et al. (2019); Mirzaev et al. (2019); Prates et al. (2019); Précenth (2019); Prost et al. (2019); Pujari et al. (2019); Qian et al. (2019); 5476 Sedoc and Ungar (2019); Stanovsky et al. (2019); Tan and Celis (2019); Zhao et al. (2019); Zhou et al. (2019); Chopra et al. (2020); Gyamf et al. (2020); Nadeem et al. (2020); Nissim et al. (2020); Papakyriakopoulos et al. (2020); Popovic´ et al. (2020); Ravfogel et al. (2020); Ross et al. (2020); Rozado (2020); Saunders and Byrne (2020); Shin et al. (2020); Vig et al. (2020); Wang et al. (2020); Yang and Feng (2020); Zhao et al. (2020). Other representational harms Jørgensen et al. (2015); Hovy and Søgaard (2015); Blodgett et al. (2016); Blodgett and O’Connor (2017); Blodgett et al. (2018); Curry and Rieser (2018); Dixon et al. (2018); Park et al. (2018); Thelwall (2018); Web­ ster et al. (2018); Badjatiya et al. (2019); Bag­ dasaryan et al. (2019); Bamman et al. (2019); Bhar­ gava and Forsyth (2019); Cao and Daumé (2019); Font and Costa-jussà (2019); Garg et al. (2019); Garimella et al. (2019); Liu et al. (2019); Louk­ ina et al. (2019); Mehrabi et al. (2019); Nozza et al. (2019); Sap et al. (2019); Sheng et al. (2019); Stanovsky et al. (2019); Vaidya et al. (2019); Webster et al. (2019); Ethayarajh (2020); Gaut et al. (2020); Gencoglu (2020); Hovy et al. (2020); Huang et al. (2020); Kim et al. (2020); Peng et al. (2020); Ravfogel et al. (2020); Rios (2020); Sap et al. (2020); Saunders and Byrne (2020); Sheng et al. (2020); Sweeney and Najafan (2020); Tan et al. (2020); Zhang et al. (2020a,b). Questionable correlations Jurgens et al. (2017); Madnani et al. (2017); Rudinger et al. (2017); Zhao et al. (2017); Burns et al. (2018); Díaz et al. (2018); Kiritchenko and Mohammad (2018); Lu et al. (2018); Rudinger et al. (2018); Shen et al. (2018); Bordia and Bowman (2019); Cao and Daumé (2019); Cho et al. (2019); David­ son et al. (2019); Dev et al. (2019); Dinan et al. (2019); Fisher (2019); Florez (2019); Font and Costa-jussà (2019); Garg et al. (2019); Huang et al. (2019); Liu et al. (2019); Nozza et al. (2019); Prabhakaran et al. (2019); Qian et al. (2019); Sap et al. (2019); Stanovsky et al. (2019); Sweeney and Najafan (2019); Swinger et al. (2019); Zhiltsova et al. (2019); Zmigrod et al. (2019); Hube et al. (2020); Hutchinson et al. (2020); Jia et al. (2020); Papakyriakopoulos et al. (2020); Popovic´ et al. (2020); Pryzant et al. (2020); Saunders and Byrne (2020); Sen and Ganguly (2020); Shah et al. (2020); Sweeney and Najafan (2020); Zhang et al. (2020b). Vague/unstated None. Surveys, frameworks, and meta-analyses Hovy and Spruit (2016); Larson (2017); McCurdy and Serbetçi (2017); Schnoebelen (2017); Basta et al. (2019); Ethayarajh et al. (2019); Gonen and Goldberg (2019); Lauscher and Glavaš (2019); Loukina et al. (2019); Mayfeld et al. (2019); Mirzaev et al. (2019); Prabhumoye et al. (2019); Ruane et al. (2019); Sedoc and Ungar (2019); Sun et al. (2019); Nissim et al. (2020); Rozado (2020); Shah et al. (2020); Strengers et al. (2020); Wright et al. (2020).
2020
485
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5477–5490 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5477 SOCIAL BIAS FRAMES: Reasoning about Social and Power Implications of Language Maarten Sap† Saadia Gabriel†‡ Lianhui Qin†‡ Dan Jurafsky⋄ Noah A. Smith†‡ Yejin Choi†‡ †Paul G. Allen School of Computer Science & Engineering, University of Washington ‡Allen Institute for Artificial Intelligence ⋄Linguistics & Computer Science Departments, Stanford University Abstract Warning: this paper contains content that may be offensive or upsetting. Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but rather the implied meanings, that frame people’s judgments about others. For example, given a statement that “we shouldn’t lower our standards to hire more women,” most listeners will infer the implicature intended by the speaker — that “women (candidates) are less qualified.” Most semantic formalisms, to date, do not capture such pragmatic implications in which people express social biases and power differentials in language. We introduce SOCIAL BIAS FRAMES, a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others. In addition, we introduce the Social Bias Inference Corpus to support large-scale modelling and evaluation with 150k structured annotations of social media posts, covering over 34k implications about a thousand demographic groups. We then establish baseline approaches that learn to recover SOCIAL BIAS FRAMES from unstructured text. We find that while stateof-the-art neural models are effective at highlevel categorization of whether a given statement projects unwanted social bias (80% F1), they are not effective at spelling out more detailed explanations in terms of SOCIAL BIAS FRAMES. Our study motivates future work that combines structured pragmatic inference with commonsense reasoning on social implications. 1 Introduction Language has enormous power to project social biases and reinforce stereotypes on people (Fiske, What do you call a movie with an allMuslim cast? A box office bomb. We shouldn’t lower our standards just to hire more women. likely yes no Women Implies women are less qualified offensive? lewd? intentional? yes yes no offensive? lewd? intentional? yes group referenced? which group implied statement yes group referenced? which group Muslims Implies Muslims are all terrorists implied statement Figure 1: Understanding and explaining why an arguably innocuous statement is potentially unjust requires reasoning about conversational implicatures and commonsense implications with respect to the underlying intent, offensiveness, and power differentials between different social groups. SOCIAL BIAS FRAMES aim to represent the various pragmatic meanings related to social bias implications, by combining categorical and free-text annotations, e.g., that “women are less qualified” is implied by the statement “we shouldn’t lower our standards to hire more women.” 1993). The way such biases are projected is rarely in what is stated explicitly, but in all the implied layers of meanings that frame and influence people’s judgments about others. For example, on hearing a statement that an all-Muslim movie was a “box office bomb”, most people can instantly 5478 post off? int? lewd? group? targeted group implied statement ingroup? Kendrick just dropped the coldest video of all fucking time no no no What a fucking stupid bitch shut the fuck up already yes yes no need to fuck sum bad , I got some pressure built up :P no no yes I take a bitch home give her long dick yes yes yes no You annoyed me every day for months you’re a fucking moron yes yes no no My problem with Korean artists: I dont know how to pronounce your name I can’t hanggul to save my life yes no no yes Korean folks have weird names no I’ll continue this thread till non-lifting fags stop lifting like idiots at the gym... Which is never. yes yes no yes gay men are weak and not manly no I thought drugs were the only things black people could shoot up Boy was I wrong yes yes no yes Black folks do drugs no kill people commit shootings Table 1: Examples of inference tuples in SBIC. The types of inferences captured by SOCIAL BIAS FRAMES cover (potentially subtle) offensive implications about various demographic groups. recognize the implied demonizing stereotype that “Muslims are terrorists” (Figure 1). Understanding these biases with accurate underlying explanations is necessary for AI systems to adequately interact in the social world (Pereira et al., 2016), and failure to do so can result in the deployment of harmful technologies (e.g., conversational AI systems turning sexist and racist; Vincent, 2016). Most previous approaches to understanding the implied harm in statements have cast this task as a simple toxicity classification (e.g., Waseem and Hovy, 2016; Founta et al., 2018; Davidson et al., 2017). However, simple classifications run the risk of discriminating against minority groups, due to high variation and identity-based biases in annotations (e.g., which cause models to learn associations between dialect and toxicity; Sap et al., 2019a; Davidson et al., 2019). In addition, detailed explanations are much more informative for people to understand and reason about why a statement is potentially harmful against other people (Gregor and Benbasat, 1999; Ribeiro et al., 2016). Thus, we propose SOCIAL BIAS FRAMES, a novel conceptual formalism that aims to model pragmatic frames in which people project social biases and stereotypes on others. Compared to semantic frames (Fillmore and Baker, 2001), the meanings projected by pragmatic frames are richer, and thus cannot be easily formalized using only categorical labels. Therefore, as illustrated in Figure 1, our formalism combines hierarchical categories of biased implications such as intent and offensiveness with implicatures described in free-form text such as groups referenced and implied statements. In addition, we introduce SBIC,1 a new corpus collected using a novel crowdsourcing framework. SBIC supports large-scale learning and evaluation with over 150k structured annotations of social media posts, spanning over 34k implications about a thousand demographic groups. We then establish baseline approaches that learn to recover SOCIAL BIAS FRAMES from unstructured text. We find that while state-of-the-art neural models are effective at making high-level categorization of whether a given statement projects unwanted social bias (80% F1), they are not effective at spelling out more detailed explanations by accurately decoding SOCIAL BIAS FRAMES. Our study motivates future research that combines structured pragmatic inference with commonsense reasoning on social implications. Important implications of this study. We recognize that studying SOCIAL BIAS FRAMES necessarily requires us to confront online content that may be offensive or disturbing (see §7 for further discussion on the ethical implications of this study). However, deliberate avoidance does not eliminate such problems. Therefore, the important premise we take in this study is that assessing social media content through the lens of SOCIAL 1SBIC: Social Bias Inference Corpus, available at http://tinyurl.com/social-bias-frames. 5479 BIAS FRAMES is important for automatic flagging or AI-augmented writing interfaces, where potentially harmful online content can be analyzed with detailed explanations for users or moderators to consider and verify. In addition, the collective analysis over large corpora can also be insightful for educating people on reducing unconscious biases in their language. 2 SOCIAL BIAS FRAMES Definition To better enable models to account for socially biased implications of language,2 we design a new pragmatic formalism that distinguishes several related but distinct inferences, shown in Figure 1. Given a natural language utterance, henceforth, post, we collect both categorical as well as free text inferences (described below), inspired by recent efforts in free-text annotations of commonsense knowledge (e.g., Speer and Havasi, 2012; Rashkin et al., 2018; Sap et al., 2019b) and argumentation (Habernal and Gurevych, 2016; Becker et al., 2017). The free-text explanations are crucial to our formalism, as they can both increase trust in predictions made by the machine (Kulesza et al., 2012; Bussone et al., 2015; Nguyen et al., 2018) and encourage a poster’s empathy towards a targeted group, thereby combating biases (CohenAlmagor, 2014). We base our initial frame design on social science literature of pragmatics (Lakoff, 1973; de Marneffe et al., 2012) and impoliteness (Kasper, 1990; Gabriel, 1998; Dynel, 2015; Vonasch and Baumeister, 2017). We then refine the frame structure (including number of possible answers to questions) based on the annotator (dis)agreement in multiple pilot studies. We describe each of the included variables below. Offensiveness is our main categorical annotation, and denotes the overall rudeness, disrespect, or toxicity of a post. We consider whether a post could be considered “offensive to anyone”, as previous work has shown this to have higher recall (Sap et al., 2019a). This is a categorical variable with three possible answers (yes, maybe, no). Intent to offend captures whether the perceived motivation of the author was to offend, which is key to understanding how it is received (Kasper, 2In this work, we employ the U.S. sociocultural lens when discussing bias and power dynamics among demographic groups. 1990; Dynel, 2015), yet distinct from offensiveness (Gabriel, 1998; Daly, 2018). This is a categorical variable with four possible answers (yes, probably, probably not, no). Lewd or sexual references are a key subcategory of what constitutes potentially offensive material in many cultures, especially in the United States (Strub, 2008). This is a categorical variable with three possible answers (yes, maybe, no). Group implications are distinguished from individual-only attacks or insults that do not invoke power dynamics between groups (e.g., “F*ck you” vs. “F*ck you, f*ggot”). This is a categorical variable with two possible answers: individualonly (no), group targeted (yes). Targeted group describes the social or demographic group that is referenced or targeted by the post. Here we collect free-text answers, but provide a seed list of demographic or social groups to encourage consistency. Implied statement represents the power dynamic or stereotype that is referenced in the post. We collect free-text answers in the form of simple Hearst-like patterns (e.g., “women are ADJ”, “gay men VBP”; Hearst, 1992). In-group language aims to capture whether the author of a post may be a member of the same social/demographic group that is targeted, as speaker identity changes how a statement is perceived (O’Dea et al., 2015). Specifically, in-group language (words or phrases that (re)establish belonging to a social group; Eble, 1996) can change the perceived offensiveness of a statement, such as reclaimed slurs (Croom, 2011; Galinsky et al., 2013) or self-deprecating language (Greengross and Miller, 2008). Note that we do not attempt to categorize the identity of the speaker. This variable takes three possible values (yes, maybe, no). 3 Collecting Nuanced Annotations To create SBIC, we design a crowdsourcing framework to distill the biased implications of posts at a large scale. 3.1 Data Selection We draw from various sources of potentially biased online content, shown in Table 2, to select 5480 type source # posts Reddit r/darkJokes 10,095 r/meanJokes 3,483 r/offensiveJokes 356 Microaggressions 2,011 subtotal 15,945 Twitter Founta et al. (2018) 11,864 Davidson et al. (2017) 3,008 Waseem and Hovy (2016) 1,816 subtotal 16,688 Hate Sites Gab 3,715 Stormfront 4,016 Banned Reddits 4,308 subtotal 12,039 SBIC total # posts 44,671 Table 2: Breakdown of origins of posts in SBIC. Microaggressions are drawn from the Reddit corpus introduced by Breitfeller et al. (2019), and Banned Reddits include r/Incels and r/MensRights. posts to annotate. Since online toxicity can be relatively scarce (Founta et al., 2018),3 we start by annotating English Reddit posts, specifically three intentionally offensive subReddits and a corpus of potential microaggressions from Breitfeller et al. (2019). By nature, the three offensive subreddits are very likely to have harmful implications, as posts are often made with intents to deride adversity or social inequality (Bicknell, 2007). Microaggressions, on the other hand, are likely to contain subtle biased implications—a natural fit for SOCIAL BIAS FRAMES. In addition, we include posts from three existing English Twitter datasets annotated for toxic or abusive language, filtering out @-replies, retweets, and links. We mainly annotate tweets released by Founta et al. (2018), who use a bootstrapping approach to sample potentially offensive tweets. We also include tweets from Waseem and Hovy (2016) and Davidson et al. (2017), who collect datasets of tweets containing racist or sexist hashtags and slurs, respectively. Finally, we include posts from known English hate communities: Stormfront (de Gibert 3Founta et al. (2018) find that the prevalence of toxic content online is <4%. She only got the job because she's a woman - crawled from ${source}. Could this post be considered offensive, disrespectful, or toxic to anyone/someone? Yes, this could be offensive Maybe, I'm not sure No, this is harmless I don't understand the post Was the intent of this post to be offensive/disrespectful to anyone? E.g., this contains offensive jokes, insults, personal attacks, profanity, aggression. Yes, definitely Yes, probably No, probably not No, definitely not Who is referred to/targeted by this post? — Select all identity-based groups that apply. race/ethnicity Which identity group is referred to in this post? black folks asian folks latino/latina folks native american/first nation folks other What aspect/stereotype/characteristic of this group (often unfairly assumed) is referenced or implied by this post? — Use simple phrases and do not copy paste from the post. I.e., actions/characteristics that US society (usually wrongly) associates with the group GROUP does ___ GROUP does ___ [optional] [optional] gender/gender identity/sexuality culture/origin/religion age/body mental or physical disabilities/disorders socio-economic/political/lifestyle crime/violence/tragedy victims Figure 2: Snippet of the annotation task used to collect SBIC. Lewdness, group implication, and in-group language questions are omitted for brevity but shown in larger format in Figure 4 (Appendix). et al., 2018) and Gab,4 which are both documented white-supremacist and neo-nazi communities (Bowman-Grieve, 2009; Hess, 2016), and two English subreddits that were banned for inciting violence against women (r/Incels and r/MensRights; Fingas, 2017; Center, 2012). 3.2 Annotation Task Design We design a hierarchical annotation framework to collect biased implications of a given post (snippet shown in Figure 2) on Amazon Mechanical Turk (MTurk). The full task is shown in the appendix (Figure 4). For each post, workers indicate whether the post is offensive, whether the intent was to offend, and whether it contains lewd or sexual content. Only if annotators indicate potential offensiveness do they answer the group implication question. If the post targets or references a group or demographic, workers select or write which one(s); per selected group, they then write two to four stereotypes. Finally, workers are asked whether they think the speaker is part of one of the minority groups referenced by the post. We collect three annotations per post, and restrict our worker pool to the U.S. and Canada. We ask workers to optionally provide coarse-grained demographic information.5 4https://files.pushshift.io/gab/ GABPOSTS_CORPUS.xz 5This study was approved by our institutional review board. 5481 total # tuples 147,139 # unique posts 44,671 groups 1,414 implications 32,028 post-group 48,923 post-group-implication 87,942 group-implication 34,333 skews (% pos.) offensive 44.8% intent 43.4% lewd 7.9% group targeted 50.9% in-group 4.6% Table 3: Statistics of the SBIC dataset. Skews indicate the number of times a worker annotated a post as offensive, etc. Annotator demographics In our final annotations, our worker pool was relatively genderbalanced and age-balanced (55% women, 42% men, <1% non-binary; 36±10 years old), but racially skewed (82% White, 4% Asian, 4% Hispanic, 4% Black). Annotator agreement Overall, the annotations in SBIC showed 82.4% pairwise agreement and Krippendorf’s α=0.45 on average, which is substantially higher than previous work in toxic language detection (e.g., α=0.22 in Ross et al., 2017). Broken down by each categorical question, workers agreed on a post being offensive at a rate of 76% (Krippendorf’s α=0.51), its intent being to offend at 75% (α=0.46), and it having group implications at 74% (α=0.48). For categorizing posts as lewd, workers agreed substantially (94%, α=0.62). However, flagging potential ingroup speech had lower agreement, likely because this is a very nuanced annotation, and because highly skewed categories (only 5% “yes”; see Table 3) lead to low αs (here, α=0.17 with agreement 94%).6 Finally, workers agreed on the exact same targeted group 80.2% of the time (α=0.50). 3.3 SBIC Description After data collection, SBIC contains 150k structured inference tuples, covering 34k free text group-implication pairs (see Table 3). We show example inference tuples in Table 1. 6Given our data selection process, we expect the rate of in-group posts to be very low (see §3.3). 56% 16% 34% 25% 42% 30% 7% 24% 21% 0% 20% 40% 60% 80% 100% Twitter Reddit HateSites gender/sexuality race/ethnicity religion/culture social/political disability body/age victims Figure 3: Breakdown of targeted group categories by domains. We show percentages within domains for the top three most represented identities, namely gender/sexuality (e.g., women, LGBTQ), race/ethnicity (e.g., Black, Latinx, and Asian), and culture/origin (e.g., Muslim, Jewish). Additionally, we show a breakdown of the types of targeted groups in Figure 3. While SBIC covers a variety of types of biases, gender-based, racebased, and culture-based biases are the most represented, which parallels the types of discrimination happening in the real world (RWJF, 2017). We find that our dataset is predominantly written in White-aligned English (78% of posts), as measured by a lexical dialect detector by Blodgett et al. (2016), with <10% of posts having indicators of African-American English. We caution researchers to consider the potential for dialect- or identity-based biases in labelling (Davidson et al., 2019; Sap et al., 2019a) before deploying technology based on SBIC (see Section 7). 4 Social Bias Inference Given a post, we establish baseline performance of models at inferring SOCIAL BIAS FRAMES. An ideal model should be able to both generate the implied power dynamics in textual form, as well as classify the post’s offensiveness and other categorical variables. Satisfying these conditions, we use the OpenAI-GPT transformer networks (Vaswani et al., 2017; Radford et al., 2018, 2019) as a basis for our experiments, given their recent successes at 5482 model offensive intent lewd group in-group 42.2% pos. (dev.) 44.8% pos (dev.) 3.0% pos (dev.) 66.6% pos (dev.) 5.1% pos (dev.) F1 pr. rec. F1 pr. rec. F1 pr. rec. F1 pr. rec. F1 pr. rec. dev. SBF-GPT1-gdy 75.2 88.3 65.5 74.4 89.8 63.6 75.2 78.2 72.5 62.3 74.6 53.4 – – – SBF-GPT2-gdy 77.2 88.3 68.6 76.3 89.5 66.5 77.6 81.2 74.3 66.9 67.9 65.8 24.0 85.7 14.0 SBF-GPT2-smp 80.5 84.3 76.9 75.3 89.9 64.7 78.6 80.6 76.6 66.0 67.6 64.5 – – – test SBF-GPT2-gdy 78.8 89.8 70.2 78.6 90.8 69.2 80.7 84.5 77.3 69.9 70.5 69.4 – – – Table 4: Experimental results (%) of various models on the classification tasks (gdy: argmax, smp: sampling). Some models did not predict the positive class for “in-group language,” their performance is denoted by “–”. We bold the F1 scores of the best performing model(s) on the development set. For easier interpretation, we also report the percentage of instances in the positive class in the development set. classification, commonsense generation, and conditional generation (Bosselut et al., 2019; Keskar et al., 2019). Training We cast our frame prediction task as a hybrid classification and language generation task, where we linearize the variables following the frame hierarchy.7 At training time, our model takes as input a sequence of N tokens: x = {[STR], w1, w2, ..., wn, [SEP], w[lewd], w[off], w[int], w[grp], [SEP], w[G]1, w[G]2, ..., [SEP], w[S]1, w[S]2, ..., [SEP], w[ing], [END]} (1) where [STR] is our start token, w1:n is the sequence of tokens in a post, w[G]i the tokens representing the group, and w[S]i the implied statement. We add two task-specific vocabulary items for each of our five classification tasks (w[lewd], w[off], w[int], w[grp], w[ing]), each representing the negative and positive values of the class (e.g., for offensiveness, [offY] and [offN]).8 The model relies on a stack of transformer blocks of multi-headed attention and fully connected layers to encode the input tokens (for a detailed modelling description, see Radford et al., 2018, 2019). Since GPT is a forward-only language model, the attention is only computed over preceding tokens. At the last layer, the model projects the embedding into a vocabulary-sized vector, which is turned into a probability distribution over the vocabulary using a softmax layer. 7We linearize following the order in which variables were annotated (see Figure 4). Future work could explore alternate orderings. 8We binarize our categorical annotations, assigning 1 to “yes,” “probably,” and “maybe,”, and 0 to all other values. We minimize the cross-entropy of the contextual probability of the correct token in our full linearized frame objective (of length N): L = −1 N X i log pGPT(wi | w0:i−1) During training, no loss is incurred for lowerlevel variables with no values, i.e., variables that cannot take values due to earlier variable values (e.g., there is no targeted group for posts marked as non-offensive). In our experiments we use pretrained versions of OpenAI’s GPT and GPT2 (Radford et al., 2018, 2019) for our model variants, named SBF-GPT1 and SBF-GPT2, respectively. While their architectures are similar (stack of Transformers), GPT was trained on a large corpus of fiction books, whereas GPT2 was trained on 40Gbs of English web text. Inference We frame our inference task as a conditional language generation task. Conditioned on the post, we generate tokens one-by-one either by greedily selecting the most probable one, or by sampling from the next word distribution, and appending the selected token to the output. We stop when the [END] token is generated, at which point our entire frame is predicted. For greedy decoding, we only generate our frames once, but for sampling, we repeat the generation procedure to yield ten candidate frame predictions and choose the highest scoring one under our model. In contrast to training time, where all inputs are consistent with our frames’ structure, at test time, our model can sometimes predict combinations of variables that are inconsistent with the constraints of the frame (e.g., predicting a post to be inoffensive, but still predict it to be offensive to a group). To mitigate this issue, we also experiment with a constrained decoding algorithm (denoted “constr”) that considers various global assignments of 5483 group targeted implied statement BLEU Rouge-L WMD BLEU Rouge-L WMD dev. SBF-GPT1-gdy 69.9 60.3 1.01 49.9 40.2 2.97 SBF-GPT1-gdy-constr 69.2 64.7 1.05 49.0 42.8 3.02 SBF-GPT2-gdy 74.2 64.6 0.90 49.8 41.4 2.96 SBF-GPT2-gdy-constr 73.4 68.2 0.89 49.6 43.5 2.96 SBF-GPT2-smp 83.2 33.7 0.62 44.3 17.8 3.31 SBF-GPT2-smp-constr 83.0 33.7 0.63 44.1 17.9 3.31 test SBF-GPT2-gdy 77.0 71.3 0.76 52.2 46.5 2.81 SBF-GPT2-gdy-constr 77.9 68.7 0.74 52.6 44.9 2.79 Table 5: Automatic evaluation of various models on the generation task. We bold the scores of the best performing model(s) on the development set. Higher is better for BLEU and ROUGE scores, and lower is better for WMD. variables. Specifically, after greedy decoding, we recompute the probabilities of each of the categorical variables, and search for the most probable assignment given the generated text candidate and variable probabilities.9 This can allow variables to be assigned an alternative value that is more globally optimal.10 4.1 Evaluation We evaluate performance of our models in the following ways. For classification, we report precision, recall, and F1 scores of the positive class. Following previous generative inference work (Sap et al., 2019b), we use automated metrics to evaluate model generations. We use BLEU2 and RougeL (F1) scores to capture word overlap between the generated inference and the references, which captures quality of generation (Galley et al., 2015; Hashimoto et al., 2019). We additionally compute word mover’s distance (WMD; Kusner et al., 2015), which uses distributed word representations to measure similarity between the generated and target text.11 4.2 Training Details As each post can contain multiple annotations, we define a training instance as containing one postgroup-statement triple (along with the five categorical annotations). We then split our dataset into train/dev./test (75:12.5:12.5), ensuring that no post is present in multiple splits. For evaluation (dev., test), we combine the categorical variables by averaging their binarized values and re-binarizing using a .5 threshold, and compare the generated 9We only use the possible assignments in the same forward pass; we do not use assignments from different samples. 10In practice, as seen in Tables 4, 5, and 7, this only slightly improves predictions. 11We use GloVe trained on CommonCrawl, as part of the SpaCy en core web md package. inferences (hypotheses) to all targeted groups and implied statements (references). All experiments are carried out using HuggingFace’s Transformers library.12 We tune hyperparameters on the dev. set, and report performance for the best performing setting (according to average F1). We train or finetune our models using a batch size of 4, a learning rate of 5×10−6 for GPT and 10−5 for GPT2 (both with linear warm up), and consider training for e ∈{1, 2, 5} epochs. 5 Results Listed in Tables 4 and 5, our modelling results indicate that making inferences about social biases in language is challenging for these models. Classification Shown in Table 4, models perform well on higher-level variables such as offensiveness and lewdness, despite the latter being heavily skewed. We hypothesize that correctly predicting lewdness might require more lexical matching (e.g., detecting words with sexual connotations). Whether a group is targeted is slightly less easy for models to predict, and whether the language is in-group is even more challenging, with most of the models defaulting to never predicting it. This highly skewed category poses a challenge for all models, likely due to subtlety of the task and the lack of positive instances. SBF-GPT2-gdy is the only model that predicts positive values for in-group language, for which it benefits from constrained decoding with a 1.9% improvement in F1 score (we show results with all constrained decoding variants in Table 7 in the appendix). 12https://github.com/huggingface/ transformers 5484 post predicted group predicted implication reference groups reference implications (a) All-female casts bother me but I have literally never noticed the hundreds of movies with 0 female characters I have already watched — — women women aren’t good actors; women can’t be in leading roles (b) As expected, when the terrorist group Hamas won the election in Gaza it was the usual one man, one vote, one time, Islamist election. #Islam muslim folks muslims are terrorists muslim folks; arabic folks bad people; islam promotes controlling governments; muslims are authoritarians; not fair (c) ”Black guy in class: *attempts to throw a paper ball into the trash* *misses* Mrs. Mundy: ”You’re a disgrace to your race, Marcus” black folks black people are trash black folks are good at basketball; black men are defined by athletic skill.; black men should be good at basketball.; do not take things seriously (d) How can a Libtard speak truth to power when the only thing they vomit from their mouths is lies? liberals liberals are stupid liberals liberals are incompetent; liberals are stupid; liberals only speak lies (e) I hate fat bitches overweight/fat folks fat people are ugly overweight/fat folks are not pleasant people; fat folks are all the same; fat folks are less than others; not likable Table 6: Examples of SBF-GPT2-gdy-constr model predictions. The model struggles to pick up on subtle biases (a), and tends to generate generic stereotypes rather than implications that are entailed by the post (b, c). Generation When evaluating our models on the generation tasks (i.e., targeted group and implied statement), we find that no one model outperforms others across all metrics (Table 5). Overall, models do well at generating the targeted groups, likely because of the more limited generation space (there are only 1.4k possible groups in SBIC). Conversely, for implied statement generation (where output space is much larger), model performance is slightly worse. Similar to the classification tasks, SBF-GPT2gdy shows a slight increase in RougeL score when using constrained decoding, but we see a slight drop in BLEU scores. Error analysis Since small differences in automated evaluation metrics for text generation sometimes only weakly correlate with human judgments (Liu et al., 2016), we manually perform an error analysis on a manually selected set of generated development-set examples from the SBFGPT2-gdy-constr model (Table 6). Overall, the model seems to struggle with generating textual implications that are relevant to the post, instead generating very generic stereotypes about the demographic groups (e.g., in examples b and c). The model generates the correct stereotypes when there is high lexical overlap with the post (e.g., examples d and e). This is in line with previous research showing that large language models rely on correlational patterns in data (Sap et al., 2019c; Sakaguchi et al., 2020). 6 Related Work Bias and toxicity detection Detection of hateful, abusive, or other toxic language has received increased attention recently (Schmidt and Wiegand, 2017), and most dataset creation work has cast this detection problem as binary classification (Waseem and Hovy, 2016; Davidson et al., 2017; Founta et al., 2018). Moving beyond a single binary label, Wulczyn et al. (2017) and the PerspectiveAPI use a set of binary variables to annotate Wikipedia comments for several toxicityrelated categories (e.g., identity attack, profanity). Similarly, Zampieri et al. (2019) hierarchically annotate a dataset of tweets with offensiveness and whether a group or individual is targeted. Most related to our work, Ousidhoum et al. (2019) create a multilingual dataset of 13k tweets annotated for five different emotion- and toxicity-related aspects, including a 16-class variable representing social groups targeted. In comparison, SOCIAL BIAS FRAMES not only captures binary toxicity and hierarchical information about whether a group is targeted, but also free-text implications about 1.4k different targeted groups and the implied harm behind statements. Similar in spirit to this paper, recent work has tackled more subtle bias in language, such as microaggressions (Breitfeller et al., 2019) and condescension (Wang and Potts, 2019). These types of biases are in line with the biases covered by SOCIAL BIAS FRAMES, but more narrowly scoped. 5485 Inference about social dynamics Various work has tackled the task of making inferences about power and social dynamics. Particularly, previous work has analyzed power dynamics about specific entities, either in conversation settings (Prabhakaran et al., 2014; Danescu-Niculescu-Mizil et al., 2012) or in narrative text (Sap et al., 2017; Field et al., 2019; Antoniak et al., 2019). Additionally, recent work in commonsense inference has focused on mental states of participants of a situation (e.g., Rashkin et al., 2018; Sap et al., 2019b). In contrast to reasoning about particular individuals, our work focuses on biased implications of social and demographic groups as a whole. 7 Ethical Considerations Risks in deployment Automatic detection of offensiveness or reasoning about harmful implications of language should be done with care. When deploying such algorithms, ethical aspects should be considered including which performance metric should be optimized (Corbett-Davies et al., 2017), as well as the fairness of the model on speech by different demographic groups or in different varieties of English (Mitchell et al., 2019). Additionally, deployment of such technology should discuss potential nefarious side effects, such as censorship (Ullmann and Tomalin, 2019) and dialect-based racial bias (Sap et al., 2019a; Davidson et al., 2019). Finally, offensiveness could be paired with promotions of positive online interactions, such as emphasis of community standards (Does et al., 2011) or counterspeech (Chung et al., 2019; Qian et al., 2019). Risks in annotation Recent work has highlighted various negative side effects caused by annotating potentially abusive or harmful content (e.g., acute stress; Roberts, 2016). We mitigated these by limiting the number of posts that one worker could annotate in one day, paying workers above minimum wage ($7–12), and providing crisis management resources to our annotators.13 Additionally, we acknowledge the implications of using data available on public forums for research (Zimmer, 2018) and urge researchers and practitioners to respect the privacy of the authors of posts in SBIC (Ayers et al., 2018). 13We direct workers to the Crisis Text Line (https:// www.crisistextline.org/). 8 Conclusion To help machines reason about and account for societal biases, we introduce SOCIAL BIAS FRAMES, a new structured commonsense formalism that distills knowledge about the biased implications of language. Our frames combine categorical knowledge about the offensiveness, intent, and targets of statements, as well as free-text inferences about which groups are targeted and biased implications or stereotypes. We collect a new dataset of 150k annotations on social media posts using a new crowdsourcing framework and establish baseline performance of models built on top of large pretrained language models. We show that while classifying the offensiveness of statements is easier, current models struggle to generate relevant social bias inferences, especially when implications have low lexical overlap with posts. This indicates that more sophisticated models are required for SOCIAL BIAS FRAMES inferences. Acknowledgments We thank the anonymous reviewers for their insightful comments. Additionally, we are grateful to Hannah Rashkin, Lucy Lin, Jesse Dodge, Hao Peng, and other members of the UW NLP community for their helpful comments on the project. This research was supported in part by NSF (IIS1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031). References Maria Antoniak, David Mimno, and Karen Levy. 2019. Narrative paths and negotiation of power in birth stories. In CSCW. John W Ayers, Theodore L Caputi, Camille Nebeker, and Mark Dredze. 2018. Don’t quote me: reverse identification of research participants in social media studies. NPJ digital medicine, 1(1):1–2. Maria Becker, Michael Staniek, Vivi Nastase, and Anette Frank. 2017. Enriching argumentative texts with implicit knowledge. In NLDB. Jeanette Bicknell. 2007. What is offensive about offensive jokes? Philosophy Today, 51(4):458–465. Su Lin Blodgett, Lisa Green, and Brendan O’Connor. 2016. Demographic dialectal variation in social media: a case study of African-American English. In EMNLP. 5486 Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: commonsense transformers for automatic knowledge graph construction. In ACL. Lorraine Bowman-Grieve. 2009. Exploring “Stormfront”: a virtual community of the radical right. Studies in conflict & terrorism, 32(11):989–1007. Luke M Breitfeller, Emily Ahn, David Jurgens, and Yulia Tsvetkov. 2019. Finding microaggressions in the wild: a case for locating elusive phenomena in social media posts. In EMNLP. Adrian Bussone, Simone Stumpf, and Dympna O’Sullivan. 2015. The role of explanations on trust and reliance in clinical decision support systems. In 2015 International Conference on Healthcare Informatics, pages 160–169. IEEE. Southern Poverty Law Center. 2012. Misogyny: the sites. Intelligence Report, 145. Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem Tekiroglu, and Marco Guerini. 2019. CONAN COunter NArratives through nichesourcing: a multilingual dataset of responses to fight online hate speech. In ACL. Raphael Cohen-Almagor. 2014. Countering hate on the internet. Annual review of law and ethics, 22:431–443. Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In KDD. Adam M Croom. 2011. Slurs. Language Sciences, 33(3):343–358. Helen L Daly. 2018. On insults. Journal of the American Philosophical Association, 4(4):510–524. Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: language effects and power differences in social interaction. In WWW. Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Abusive Language Workshop. Thomas Davidson, Dana Warmsley, Michael W Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In ICWSM. Serena Does, Belle Derks, and Naomi Ellemers. 2011. Thou shalt not discriminate: how emphasizing moral ideals rather than obligations increases whites’ support for social equality. Journal of Experimental Social Psychology, 47(3):562–571. Marta Dynel. 2015. The landscape of impoliteness research. Journal of Politeness Research, 11(2):383. Connie C Eble. 1996. Slang & sociability: in-group language among college students. Univ of North Carolina Press. Anjalie Field, Gayatri Bhat, and Yulia Tsvetkov. 2019. Contextual affective analysis: a case study of people portrayals in online #MeToo stories. In ICWSM. Charles J Fillmore and Collin F Baker. 2001. Frame semantics for text understanding. In Proceedings of WordNet and Other Lexical Resources Workshop, NAACL. Jon Fingas. 2017. Reddit bans misogynist community as part of anti-violence crackdown. https: //www.engadget.com/2017/11/08/ reddit-bans-misogynist-communityin-anti-violence-crackdown/. Accessed: 2019-12-06. Susan T Fiske. 1993. Controlling other people. the impact of power on stereotyping. American psychologist, 48(6):621–628. Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of Twitter abusive behavior. In ICWSM. Yiannis Gabriel. 1998. An introduction to the social psychology of insults in organizations. Human Relations, 51(11):1329–1354. Adam D Galinsky, Cynthia S Wang, Jennifer A Whitson, Eric M Anicich, Kurt Hugenberg, and Galen V Bodenhausen. 2013. The reappropriation of stigmatizing labels: the reciprocal relationship between power and self-labeling. Psychol. Sci., 24(10):2020–2029. Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and William B. Dolan. 2015. deltaBLEU: a discriminative metric for generation tasks with intrinsically diverse targets. In ACL. Ona de Gibert, Naiara P´erez, Aitor Garc´ıa-Pablos, and Montse Cuadros. 2018. Hate speech dataset from a white supremacy forum. In Abusive Language Workshop at EMNLP. Gil Greengross and Geoffrey F Miller. 2008. Dissing oneself versus dissing rivals: effects of status, personality, and sex on the Short-Term and LongTerm attractiveness of Self-Deprecating and OtherDeprecating humor. Evolutionary Psychology, 6(3). Shirley Gregor and Izak Benbasat. 1999. Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS quarterly, pages 497–530. 5487 Ivan Habernal and Iryna Gurevych. 2016. What makes a convincing argument? empirical analysis and detecting attributes of convincingness in web argumentation. In EMNLP, pages 1214–1223. Tatsunori B Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. In NAACL-HLT. Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In ACL, pages 539– 545. Amanda Hess. 2016. The far right has a new digital safe space. https://www.nytimes. com/2016/11/30/arts/the-far-righthas-a-new-digital-safe-space.html. Accessed: 2019-12-06. Gabriele Kasper. 1990. Linguistic politeness: current research issues. Journal of Pragmatics, 14(2):193– 218. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: a conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. 2012. Tell me more? The effects of mental model soundness on personalizing an intelligent agent. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1–10. ACM. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In ICML, pages 957–966. Robin Lakoff. 1973. Language and woman’s place. Language in society, 2(1):45–79. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: an empirical study of unsupervised evaluation metrics for dialogue response generation. In ACL. Marie-Catherine de Marneffe, Christopher D Manning, and Christopher Potts. 2012. Did it happen? the pragmatic complexity of veridicality assessment. Computational Linguistics, 38(2):301–333. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In FAccT. An T Nguyen, Aditya Kharosekar, Saumyaa Krishnan, Siddhesh Krishnan, Elizabeth Tate, Byron C Wallace, and Matthew Lease. 2018. Believe it or not: designing a human-AI partnership for mixedinitiative fact-checking. In The 31st Annual ACM Symposium on User Interface Software and Technology, pages 189–199. ACM. Conor J O’Dea, Stuart S Miller, Emma B Andres, Madelyn H Ray, Derrick F Till, and Donald A Saucier. 2015. Out of bounds: Factors affecting the perceived offensiveness of racial slurs. Language Sciences, 52:155–164. Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, and Dit-Yan Yeung. 2019. Multilingual and Multi-Aspect hate speech analysis. In EMNLP. Gonc¸alo Pereira, Rui Prada, and Pedro A Santos. 2016. Integrating social power into the decision-making of cognitive agents. Artificial Intelligence, 241:1–44. Vinodkumar Prabhakaran, Prabhakaran Vinodkumar, and Rambow Owen. 2014. Predicting power relations between participants in written dialog from a single thread. In ACL. Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Belding, and William Yang Wang. 2019. A benchmark dataset for learning to intervene in online hate speech. In EMNLP. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Unpublished. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Unpublished. Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2mind: commonsense inference on events, intents, and reactions. In ACL. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why should I trust you?”: Explaining the predictions of any classifier. In KDD. Sarah T Roberts. 2016. Commercial content moderation: digital laborers’ dirty work. In Safiya Umoja Noble and Brendesha M Tynes, editors, The Intersectional Internet: Race, Sex, Class and Culture Online, Media Studies Publications. Peter Lang Publishing. Bj¨orn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, and Michael Wojatzki. 2017. Measuring the reliability of hate speech annotations: the case of the european refugee crisis. In NLP 4 CMC Workshop. RWJF. 2017. Discrimination in america: experiences and views. https://www.rwjf. org/en/library/research/2017/ 10/discrimination-in-america-experiences-and-views.html. Accessed: 2019-11-5. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: an adversarial winograd schema challenge at scale. In AAAI. 5488 Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019a. The risk of racial bias in hate speech detection. In ACL. Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019b. ATOMIC: an atlas of machine commonsense for if-then reasoning. In AAAI. Maarten Sap, Marcella Cindy Prasetio, Ariel Holtzman, Hannah Rashkin, and Yejin Choi. 2017. Connotation frames of power and agency in modern films. In EMNLP. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019c. Social IQa: commonsense reasoning about social interactions. In EMNLP. Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Workshop on NLP for Social Media at EACL. Robyn Speer and Catherine Havasi. 2012. Representing general relational knowledge in ConceptNet 5. In LREC. Whitney Strub. 2008. The clearly obscene and the queerly obscene: heteronormativity and obscenity in cold war los angeles. American Quarterly, 60(2):373–398. Stefanie Ullmann and Marcus Tomalin. 2019. Quarantining online hate speech: technical and ethical perspectives. Ethics and Information Technology. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. James Vincent. 2016. Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. https://www.theverge.com/2016/3/ 24/11297050/tay-microsoft-chatbotracist. Accessed: 2019-10-26. Andrew J Vonasch and Roy F Baumeister. 2017. Unjustified side effects were strongly intended: taboo tradeoffs and the side-effect effect. Journal of Experimental Social Psychology, 68:83–92. Zijian Wang and Christopher Potts. 2019. TalkDown: a corpus for condescension detection in context. In EMNLP. Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In NAACL Student Research Workshop. Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: personal attacks seen at scale. In WWW. Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In NAACL. Michael Zimmer. 2018. Addressing conceptual gaps in big data research ethics: an application of contextual integrity. Social Media + Society, 4(2). 5489 model offensive intent lewd group in-group 42.2% pos. (dev.) 44.8% pos. (dev.) 3.0% pos. (dev.) 66.6% pos. (dev.) 5.1% pos. (dev.) F1 pr. rec. F1 pr. rec. F1 pr. rec. F1 pr. rec. F1 pr. rec. dev. SBF-GPT1-gdy 75.2 88.3 65.5 74.4 89.8 63.6 75.2 78.2 72.5 62.3 74.6 53.4 – – – ′′-constr 75.2 88.3 65.5 74.4 89.8 63.6 75.2 78.2 72.5 62.3 74.6 53.4 – – – SBF-GPT2-gdy 77.2 88.3 68.6 76.3 89.5 66.5 77.6 81.2 74.3 66.9 67.9 65.8 24.0 85.7 14.0 ′′-constr 77.2 88.3 68.6 76.3 89.5 66.5 77.6 81.2 74.3 66.9 67.9 65.8 25.9 63.6 16.3 SBF-GPT2-smp 80.5 84.3 76.9 75.3 89.9 64.7 78.6 80.6 76.6 66.0 67.6 64.5 – – – ′′-constr 80.4 84.3 76.8 75.3 89.9 64.7 78.5 80.6 76.5 66.0 67.6 64.5 – – – test SBF-GPT2-gdy 78.8 89.8 70.2 78.6 90.8 69.2 80.7 84.5 77.3 69.9 70.5 69.4 – – – ′′-constr 78.8 89.8 70.2 78.6 90.8 69.2 80.7 84.5 77.3 69.9 70.5 69.4 – – – Table 7: Full experimental results (%) of various models on the classification tasks (gdy: argmax, smp: sampling; constr: constrained decoding). Some models did not predict the positive class for “in-group language,” their performance is denoted by “–”. We bold the F1 scores of the best performing model(s) on the development set. For easier interpretation, we also report the percentage of instances in the positive class in the development set. 5490 Task Post ${post} - crawled from ${source}. 1.a) Could this post be considered offensive, disrespectful, or toxic to anyone/someone? Yes, this could be offensive Maybe, I'm not sure No, this is harmless I don't understand the post 1.b) Was the intent of this post to be offensive/disrespectful to anyone? E.g., this contains offensive jokes, insults, personal attacks, profanity, aggression. Yes, definitely Yes, probably No, probably not No, definitely not 2) Does this post contain or allude to sexual content/acts? This includes euphemism, double entendre, etc. Yes Maybe No 3) Who is referred to/targeted by this post? — Select all identity-based groups that apply. 3.a) Does the post imply offense/disrespect to an identity-related group of people? E.g., is this just a insult to one person ("Fuck you", "All y'all are morons"), or an offensive comment/insult about a whole identity-based group ("Fuck muslims", "Fuck you, you stupid nigger"). Yes, this could be offensive to a group (minority, identity-based, etc.) No, this is just an insult to an individual or a non-identity-related group of people. race/ethnicity 3.b) Which identity group is referred to in this post? black folks asian folks latino/latina folks native american/first nation folks other 3.c) What aspect/stereotype/characteristic of this group (often unfairly assumed) is referenced or implied by this post? — Use simple phrases and do not copy paste from the post. I.e., actions/characteristics that US society (usually wrongly) associates with the group GROUP does ___ GROUP does ___ [optional] [optional] gender/gender identity/sexuality culture/origin/religion age/body mental or physical disabilities/disorders socio-economic/political/lifestyle crime/violence/tragedy victims 3.d) Does the author of the post sound like they belong to the same minority group that is referred to by this post? Try your best to guess. For example, posts with "nigga" in them usually come from black authors. Additionally, members sometimes make fun of their own community. Yes Maybe No Figure 4: Snippet of the annotation task used to collect SBIC. The collection of structured annotations for our framework is broken down into questions pertaining to offensiveness, intent of the post, targeted group and minority speaker.
2020
486
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5491–5501 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5491 Social Biases in NLP Models as Barriers for Persons with Disabilities Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, Stephen Denuyl Google {benhutch,vinodkpg,dentone,websterk,yuzhong,sdenuyl}@google.com Abstract Building equitable and inclusive NLP tech­ nologies demands consideration of whether and how social attitudes are represented in ML models. In particular, representations en­ coded in models often inadvertently perpet­ uate undesirable social biases from the data on which they are trained. In this paper, we present evidence of such undesirable biases to­ wards mentions of disability in two different English language models: toxicity prediction and sentiment analysis. Next, we demonstrate that the neural embeddings that are the criti­ cal first step in most NLP pipelines similarly contain undesirable biases towards mentions of disability. We end by highlighting topical biases in the discourse about disability which may contribute to the observed model biases; for instance, gun violence, homelessness, and drug addiction are over-represented in texts discussing mental illness. 1 Introduction This paper focuses on the representation of persons with disabilities through the lens of technology. Specifically, we examine how NLP models classify or predict text relating to persons with disabilities (see Table 1). This is important because NLP mod­ els are increasingly being used for tasks such as fighting online abuse (Jigsaw, 2017), measuring brand sentiment (Mostafa, 2013), and matching job applicants to job opportunities (De-Arteaga et al., 2019). In addition, since text classifiers are trained on large datasets, the biases they exhibit may be indicative of societal perceptions of persons with disabilities (Caliskan et al., 2017). If models in­ appropriately condition on mentions of disability, this could impact people writing, reading, or seek­ ing information about a disability. Depending on how such models are deployed, this could poten­ tially result in reduced autonomy, reduced freedom Sentence Toxicity I am a person with mental illness. 0.62 I am a deaf person. 0.44 I am a blind person. 0.39 I am a tall person. 0.03 I am a person. 0.08 I will fight for people with mental illnesses. 0.54 I will fight for people who are deaf. 0.42 I will fight for people who are blind. 0.29 I will fight for people. 0.14 Table 1: Example toxicity scores from Perspective API. of speech, perpetuation of societal stereotypes or inequities, or harms to the dignity of individuals. While previous studies have studied unintended biases in NLP models against other historically marginalized groups (Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2017; Barocas et al., 2017; Garg et al., 2019; Dixon et al., 2018; Noble, 2018; Manzini et al., 2019; Sap et al., 2019; May et al., 2019; Speer, 2017), bias with respect to different disability groups has been relatively under-explored. However, over one billion indi­ viduals (about 15% of the world’s population) are persons with disabilities,1 and disability is some­ times the subject of strong negative social biases. For example, a 2007 study found implicit and ex­ plicit preferences against people with disabilities compared to people without disabilities across the social group domains (Nosek et al., 2007). In this paper, we study how social biases about persons with disabilities can be perpetuated by NLP models. First, we demonstrate that two existing NLP models for classifying English text contain measurable biases concerning mentions of disabil­ ity, and that the strength of these biases are sensitive to how disability is mentioned. Second, we show that language models that feed NLP systems for downstream application similarly contain measur­ 1https://www.worldbank.org/en/topic/disability 5492 able biases around disability. Third, we analyze a public corpus and find ways in which social bi­ ases in data provide a likely explanation for the observed model biases. We conclude by discussing the need for the field to consider socio-technical factors to understand the implications of findings of model bias. 2 Linguistic Phrases for Disabilities Our analyses in this paper use a set of 56 lin­ guistic expressions (in English) for referring to people with various types of disabilities, e.g. a deaf person. We partition these expressions as either Recommended or Non-Recommended, ac­ cording to their prescriptive status, by consulting guidelines published by three US-based organiza­ tions: Anti-Defamation League, ACM SIGACCESS and the ADA National Network (Cavender et al., 2014; Hanson et al., 2015; League, 2005; Network, 2018). We acknowledge that the binary distinc­ tion between recommended and non-recommended is only the coarsest-grained view of complex and multi-dimensional social norms, however more in­ put from impacted communities is required before attempting more sophisticated distinctions (Jurgens et al., 2019). We also group the expressions accord­ ing to the type of disability that is mentioned, e.g. the category HEARING includes phrases such as "a deaf person" and "a person who is deaf". Table 2 shows a few example terms we use. The full lists of recommended and non-recommended terms are in Tables 6 and 7 in the appendix. 3 Biases in Text Classification Models Following (Garg et al., 2019; Prabhakaran et al., 2019), we use the notion of perturbation, whereby the phrases for referring to people with disabilities, described above, are all inserted into the same slots in sentence templates. We start by first retrieving a set of naturally-occurring sentences that contain the pronouns he or she.2 We then select a pronoun in each sentence, and “perturb” the sentence by replac­ ing this pronoun with the phrases described above. Subtracting the NLP model score for the original sentence from that of the perturbed sentence gives the score diff, a measure of how changing from a pronoun to a phrase mentioning disability affects the model score. We perform this method on a set of 1000 sen­ tences extracted at random from the Reddit sub­ 2Future work will see how to include non-binary pronouns. Category Phrase SIGHT a blind person (R) SIGHT a sight-deficient person (NR) MENTAL_HEALTH a person with depression (R) MENTAL_HEALTH an insane person (NR) COGNITIVE a person with dyslexia (R) COGNITIVE a slow learner (NR) Table 2: Example phrases recommended (R) and nonrecommended (NR) to refer to people with disabilities. corpus of (Voigt et al., 2018). Figure 1a shows the results for toxicity prediction (Jigsaw, 2017), which outputs a score ∈ [0,1], with higher scores indicating more toxicity. For each category, we show the average score diff for recommended phrases vs. non-recommended phrases along with the associated error bars. All categories of dis­ ability are associated with varying degrees of tox­ icity, while the aggregate average score diff for recommended phrases was smaller (0.007) than that for non-recommended phrases (0.057). Dis­ aggregated by category, we see some categories elicit a stronger effect even for the recommended phrases. Since the primary intended use of this model is to facilitate moderation of online com­ ments, this bias can result in non-toxic comments mentioning disabilities being flagged as toxic at a disproportionately high rate. This might lead to in­ nocuous sentences discussing disability being sup­ pressed. Figure 1b shows the results for a sentiment analysis model (Google, 2018) that outputs scores ∈ [−1,+1]; higher score means positive sentiment. Similar to the toxicity model, we see patterns of both desirable and undesirable associations. 4 Biases in Language Representations Neural text embedding models (Mikolov et al., 2013) are critical first steps in today’s NLP pipelines. These models learn vector representa­ tions of words, phrases, or sentences, such that semantic relationships between words are encoded in the geometric relationship between vectors. Text embedding models capture some of the complex­ ities and nuances of human language. However, these models may also encode undesirable correla­ tions in the data that reflect harmful social biases (Bolukbasi et al., 2016; May et al., 2019; Garg et al., 2017). Previous studies have predominantly fo­ cused on biases related to race and gender, with the exception of Caliskan et al. (2017), who considered physical and mental illness. Biases with respect to 5493 CEREBRAL_PALSY OiP.ONIC_ILL.NESS COGNITIVE OOWNS_SYNDROME EPILEPSY f-EARING r-ENTAL_HEALTH Ji«:IBILITY PHYSICAL SHOP.T_STATURE SIGHT UNSPECIFIED WITHOUT • • - 0.05 ' ' ' ., ' ' ' ., .: t ' • 0.00 • • • • • • • • ... Recommended ... Non-recomme nded ... • -• 0.05 0.10 score_diff .. ...... ...... 0.15 0.20 c.EREBRA L_PALSY OiP.ONIC_ILL.NESS OOGNITIVE OOWNS_SYNDROME EPILEPSY HEARING r-ENTA L_HEALTH Ji«:IBILITYPHYSICAL SHOP.T_STATUP.E SIGHTUNSPEC IFIED WITHOUT- 0.30 -- 0.2S CEREBRAL_PALSY CHRONIC_ILLNESS COGNITIVE DOWNS_SYNDROME EPILEPSY HEA RING MENTAL_ HEALTH MOBILITY PHYSICAL SHORT STATURE SIGHT UNSPECIFIED WITH OUT - 0.20 ... Recommended ... Non.recommended ...... ...... - 0.15 ...... ... ... • - 0.10 score_diff ...... ... ' ... - 0.05 • • • 000 0.05 I I I I I I I I I 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 (a) Toxicity model: higher means more likely to be toxic. (b) Sentiment model: lower means more negative. Figure 1: Average change in model score when substituting a recommended (blue) or a non-recommended (yellow) phrase for a person with a disability, compared to a pronoun. Many recommended phrases for disability are asso­ ciated with toxicity/negativity, which might result in innocuous sentences discussing disability being penalized. broader disability groups remain under-explored. In this section, we analyze how the widely used bidirectional Transformer (BERT) (Devlin et al., 2018)3 model represents phrases mentioning per­ sons with disabilities. Following prior work (Kurita et al., 2019) study­ ing social biases in BERT, we adopt a templatebased fill-in-the-blank analysis. Given a query sen­ tence with a missing word, BERT predicts a ranked list of words to fill in the blank. We construct a set of simple hand-crafted templates ‘<phrase> is .’, where <phrase> is perturbed with the set of rec­ ommended disability phrases described above. To obtain a larger set of query sentences, we addition­ ally perturb the phrases by introducing references to family members and friends. For example, in addition to ‘a person’, we include ‘my sibling’, ‘my parent’, ‘my friend’, etc. We then study how the top ranked4 words predicted by BERT change when different disability phrases are used in the query sentence. In order to assess the valency differences of the resulting set of completed sentences for each phrase, we use the Google Cloud sentiment model (Google, 2018). For each BERT-predicted word w, we obtain the sentiment for the sentence ‘A person is <w>’. We use the neutral a person instead of the original phrase, so that we are assessing only the differences in sentiment scores for the words predicted by BERT and not the biases associated 3We use the 1024-dimensional ‘large’ uncased version, available at https://github.com/google-research/. 4we consider the top 10 BERT word predictions. Figure 2: Frequency with which word suggestions from BERT produce negative sentiment score. with disability phrases themselves in the sentiment model (demonstrated in Section 3). Figure 2 plots the frequency with which the fill-in-the-blank re­ sults produce negative sentiment scores for query sentences constructed from phrases referring to persons with different types of disabilities. For queries derived from most of the phrases referenc­ ing persons who do have disabilities, a larger per­ centage of predicted words produce negative senti­ ment scores. This suggests that BERT associates words with more negative sentiment with phrases referencing persons with disabilities. Since BERT text embeddings are increasingly being incorpo­ rated into a wide range of NLP applications, such negative associations have the potential to manifest in different, and potentially harmful, ways in many downstream tasks. 5494 CONDITION Score TREATMENT Score INFRA. Score LINGUISTIC Score SOCIAL Score mentally ill 23.1 help 9.7 hospital 6.3 people 9.0 homeless 12.2 mental illness 22.1 treatment 9.6 services 5.3 person 7.5 guns 8.4 mental health 21.8 care 7.6 facility 5.1 or 7.1 gun 7.9 mental 18.7 medication 6.2 hospitals 4.1 a 6.2 drugs 6.2 issues 11.3 diagnosis 4.7 professionals 4.0 with 6.1 homelessness 5.5 mentally 10.4 therapy 4.2 shelter 3.8 patients 5.8 drug 5.1 mental disorder 9.9 treated 4.2 facilities 3.4 people who 5.6 alcohol 5.0 disorder 9.0 counseling 3.9 institutions 3.4 individuals 5.2 police 4.8 illness 8.7 meds 3.8 programs 3.1 often 4.8 addicts 4.7 problems 8.0 medications 3.8 ward 3.0 many 4.5 firearms 4.7 Table 3: Terms that are over-represented in comments with mentions of the psychiatric_or_mental_illness based on the (Jigsaw, 2019) dataset, grouped across the five categories described in Section 5. Score represents the log-odds ratio as calculated using (Monroe et al., 2008); a score greater than 1.96 is considered statistically significant. 5 Biases in Data NLP models such as the ones discussed above are trained on large textual corpora, which are ana­ lyzed to build “meaning” representations for words based on word co-occurrence metrics, drawing on the idea that “you shall know a word by the com­ pany it keeps” (Firth, 1957). So, what company do mentions of disabilities keep within the textual corpora we use to train our models? To answer this question, we need a large dataset of sentences that mention different kinds of disabil­ ity. We use the dataset of online comments released as part of the Jigsaw Unintended Bias in Toxicity Classification challenge (Borkan et al., 2019; Jig­ saw, 2019), where a subset of 405K comments are labelled for mentions of disabilities, grouped into four types: physical disability, intellectual or learn­ ing disability, psychiatric or mental illness, and other disability. We focus here only on psychiatric or mental illness, since others have fewer than 100 instances in the dataset. Of the 4889 comments la­ beled as having a mention of psychiatric or mental illness, 1030 (21%) were labeled as toxic whereas 3859 were labeled as non-toxic.5 Our goal is to find words and phrases that are statistically more likely to appear in comments that mention psychiatric or mental illness compared to those that do not. We first up-sampled the toxic comments with disability mentions (to N=3859, by repetition at random), so that we have equal num­ ber of toxic vs. non-toxic comments, without los­ ing any of the non-toxic mentions of the disability. We then sampled the same number of comments from those that do not have the disability mention, also balanced across toxic and non-toxic categories. In total, this gave us 15436 (=4*3859) comments. Using this 4-way balanced dataset, we calculated the log-odds ratio metric (Monroe et al., 2008) for all unigrams and bi-grams (no stopword removal) that measure how over-represented they are in the group of comments that have a disability mention, while controlling for co-occurrences due to chance. We manually inspected the top 100 terms that are significantly over-represented in comments with disability mentions. Most of them fall into one of the following five categories:6 • CONDITION: terms that describe the disability • TREATMENT: terms that refer to treatments or care for persons with the disability • INFRASTRUCTURE: terms that refer to infrastruc­ ture that supports people with the disability • LINGUISTIC: phrases that are linguistically asso­ ciated when speaking about groups of people • SOCIAL: terms that refer to social associations Table 3 show the top 10 terms in each of these categories, along with the log odds ratio score that denote the strength of association. As expected, the CONDITION phrases have the highest association. However, the SOCIAL phrases have the next highest association, even more than TREATMENT, INFRAS­ TRUCTURE, and LINGUISTIC phrases. The SOCIAL phrases largely belong to three topics: homeless­ ness, gun violence, and drug addiction, all three of which have negative valences. That is, these topics are often discussed in relation to mental illness; for instance, mental health issues of homeless popula­ tion is often in the public discourse. While these associations are perhaps not surprising, it is impor­ tant to note that these associations with topics of arguably negative valence significantly shape the 5Note that this is a high proportion compared to the per6We omit a small number of phrases that do not belong to centage of toxic comments (8%) in the overall dataset one of these, for lack of space. 5495 way disability terms are represented within NLP models, and that in-turn may be contributing to the model biases we observed in the previous sections. 6 Implications of Model Biases We have so far worked in a purely technical fram­ ing of model biases—i.e., in terms of model inputs and outputs—as is common in much of the techni­ cal ML literature on fairness (Mulligan et al., 2019). However, normative and social justifications should be considered when applying a statistical definition of fairness (Barocas et al., 2018; Blodgett et al., 2020). Further, responsible deployment of NLP systems should also include the socio-technical considerations for various stakeholders impacted by the deployment, both directly and indirectly, as well as voluntarily and involuntarily (Selbst et al., 2019; Bender, 2019), accounting for long-term im­ pacts (Liu et al., 2019; D’Amour et al., 2020) and feedback loops (Ensign et al., 2018; Milli et al., 2019; Martin Jr. et al., 2020). In this section, we briefly outline some potential contextual implications of our findings in the area of NLP-based interventions on online abuse. Fol­ lowing Dwork et al. (2012) and Cao and Daumé III (2020), we use three hypothetical scenarios to illus­ trate some key implications. NLP models for detecting abuse are frequently deployed in online fora to censor undesirable lan­ guage and promote civil discourse. Biases in these models have the potential to directly result in mes­ sages with mentions of disability being dispropor­ tionately censored, especially without humans “in the loop”. Since people with disabilities are also more likely to talk about disability, this could im­ pact their opportunity to participate equally in online fora (Hovy and Spruit, 2016), reducing their autonomy and dignity. Readers and searchers of online fora might also see fewer mentions of dis­ ability, exacerbating the already reduced visibility of disability in the public discourse. This can im­ pact public awareness of the prevalence of disabil­ ity, which in turn influences societal attitudes (for a survey, see Scior, 2011). In a deployment context that involves human moderation, model scores may sometimes be used to select and prioritize messages for review by moderators (Veglis, 2014; Chandrasekharan et al., 2019). Are messages with higher model scores reviewed first? Or those with lower scores? De­ cisions such as these will determine how model biases will impact the delays different authors ex­ perience before their messages are approved. In another deployment context, models for de­ tecting abuse can be used to nudge writers to re­ think comments which might be interpreted as toxic (Jurgens et al., 2019). In this case, model biases may disproportionately invalidate language choices of people writing about disabilities, poten­ tially causing disrespect and offense. The issues listed above can be exacerbated if the data distributions seen during model deployment differ from that used during model development, where we would expect to see less robust model performance. Due to the complex situational nature of these issues, release of NLP models should be accompanied by information about intended and non-intended uses, about training data, and about known model biases (Mitchell et al., 2019). 7 Discussion and Conclusion Social biases in NLP models are deserving of con­ cern, due to their ability to moderate how people engage with technology and to perpetuate nega­ tive stereotypes. We have presented evidence that these concerns extend to biases around disability, by demonstrating bias in three readily available NLP models that are increasingly being deployed in a wide variety of applications. We have shown that models are sensitive to various types of disabil­ ities being referenced, as well as to the prescriptive status of referring expressions. It is important to recognize that social norms around language are contextual and differ across groups (Castelle, 2018; Davidson et al., 2019; Vid­ gen et al., 2019). One limitation of this paper is its restriction to the English language and US soci­ olinguistic norms. Future work is required to study if our findings carry over to other languages and cultural contexts. Both phrases and ontological def­ initions around disability are themselves contested, and not all people who would describe themselves with the language we analyze would identify as disabled. As such, when addressing ableism in ML models, it is particularly critical to involve disabil­ ity communities and other impacted stakeholders in defining appropriate mitigation objectives. Acknowledgments We would like to thank Margaret Mitchell, Lucy Vasserman, Ben Packer, and the anonymous review­ ers for their helpful feedback. 5496 References Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: From allocative to representational harms in ma­ chine learning. special interest group for computing. Information and Society (SIGCIS). Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2018. Fairness and machine learning: Limitations and opportunities. Emily M Bender. 2019. A typology of ethical risks in language technology with an eye towards where transparent documentation can help. The Future of Artificial Intelligence: Language, Ethics, Technol­ ogy. Su Lin Blodgett, Solon Barocas, Hal III Daume, and Hanna Wallach. 2020. Language (technology) is power: The need to be explicit about NLP harms. In Proceedings of the Annual Meeting of the Associ­ ation for Computational Lingustics (ACL). Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to Computer Programmer As Woman is to Homemaker? Debiasing Word Embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems. Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced metrics for measuring unintended bias with real data for text classification. In Companion Proceedings of The 2019 World Wide Web Conference. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356:183–186. Yang Trista Cao and Hal Daumé III. 2020. Toward gender-inclusive coreference resolution. In Proceed­ ings of the Annual Meeting of the Association for Computational Lingustics (ACL). Michael Castelle. 2018. The linguistic ideologies of deep abusive language classification. In Proceed­ ings of the 2nd Workshop on Abusive Language On­ line (ALW2), Brussels, Belgium. ACL. Anna Cavender, Shari Trewin, and Vicki Han­ son. 2014. Accessible writing guide. http: //www.sigaccess.org/welcome-to-sigaccess/ resources/accessible-writing-guide/. Eshwar Chandrasekharan, Chaitrali Gandhi, Matthew Wortley Mustelier, and Eric Gilbert. 2019. Crossmod: A cross-community learningbased system to assist reddit moderators. Proc. ACM Hum.-Comput. Interact., 3(CSCW). Alexander D’Amour, Hansa Srinivasan, James At­ wood, Pallavi Baljekar, D Sculley, and Yoni Halpern. 2020. Fairness is not static: Deeper understanding of long term fairness via simulation studies. In Pro­ ceedings of the 2020 Conference on Fairness, Ac­ countability, and Transparency, pages 525–534. Thomas Davidson, Debasmita Bhattacharya, and Ing­ mar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25–35, Florence, Italy. Association for Com­ putational Linguistics. Maria De-Arteaga, Alexey Romanov, Hanna Wal­ lach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kentha­ padi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. Proceedings of the Conference on Fairness, Accountability, and Transparency. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand­ ing. Conference of the North American Chapter of the Association for Computational Linguistics: Hu­ man Language Technologies (NAACL-HLT). Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat­ ing unintended bias in text classification. In Pro­ ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd inno­ vations in theoretical computer science conference. Danielle Ensign, Sorelle A Friedler, Scott Neville, Car­ los Scheidegger, and Suresh Venkatasubramanian. 2018. Runaway feedback loops in predictive polic­ ing. In Conference of Fairness, Accountability, and Transparency. John R Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2017. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115. Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, and Alex Beutel. 2019. Counterfac­ tual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’19. Association for Computing Machinery. Cloud Google. 2018. Google Cloud NLP API, Version 1 Beta 2. Accessed May 21, 2019. Vicki L. Hanson, Anna Cavender, and Shari Trewin. 2015. Writing about accessibility. Interactions, 22. 5497 Dirk Hovy and Shannon L Spruit. 2016. The social im­ pact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Jigsaw. 2017. Perspective API. Jigsaw. 2019. Jigsaw Unintended Bias in Toxicity Clas­ sification. David Jurgens, Libby Hemphill, and Eshwar Chan­ drasekharan. 2019. A just and comprehensive strat­ egy for using NLP to address online abuse. In Pro­ ceedings of the 57th Annual Meeting of the Associ­ ation for Computational Linguistics, Florence, Italy. Association for Computational Linguistics. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contex­ tualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166–172, Florence, Italy. Associ­ ation for Computational Linguistics. Anti-Defamation League. 2005. Suggested language for people with disabilities. Lydia T. Liu, Sarah Dean, Esther Rolf, Max Sim­ chowitz, and Moritz Hardt. 2019. Delayed impact of fair machine learning. In Proceedings of the TwentyEighth International Joint Conference on Artificial Intelligence, IJCAI-19. International Joint Confer­ ences on Artificial Intelligence Organization. Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as cau­ casian is to police: Detecting and removing multiclass bias in word embeddings. In Proceedings of the 2019 Conference of the North American Chap­ ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. Asso­ ciation for Computational Linguistics. Donald Martin Jr., Vinodkumar Prabhakaran, Jill Kuhlberg, Andrew Smart, and William Isaac. 2020. Participatory problem formulation for fairer ma­ chine learning through community based system dy­ namics approach. In ICLR Workshop on Machine Learning in Real Life (ML-IRL). Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measur­ ing social biases in sentence encoders. In Proceed­ ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin­ guistics: Human Language Technologies, Volume 1 (Long and Short Papers). Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. International Con­ ference on Learning Representations. Smitha Milli, John Miller, Anca D Dragan, and Moritz Hardt. 2019. The social cost of strategic classifica­ tion. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 230–239. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the conference on fairness, account­ ability, and transparency, pages 220–229. Burt L Monroe, Michael P Colaresi, and Kevin M Quinn. 2008. Fightin’words: Lexical feature selec­ tion and evaluation for identifying the content of po­ litical conflict. Political Analysis, 16(4):372–403. Mohamed M Mostafa. 2013. More than words: So­ cial networks’ text mining for consumer brand sentiments. Expert Systems with Applications, 40(10):4241–4251. Deirdre K Mulligan, Joshua A Kroll, Nitin Kohli, and Richmond Y Wong. 2019. This thing called fairness: Disciplinary confusion realizing a value in technol­ ogy. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):1–36. ADA National Network. 2018. Guidelines for writing about people with disabilities. Safiya Umoja Noble. 2018. Algorithms of oppression: How search engines reinforce racism. NYU Press. Brian A. Nosek, Frederick L. Smyth, Jeffrey J. Hansen, Thierry Devos, Nicole M. Lindner, Kate A. Ranganath, Colin Tucker Smith, Kristina R. Ol­ son, Dolly Chugh, Anthony G. Greenwald, and Mahzarin R. Banaji. 2007. Pervasiveness and corre­ lates of implicit attitudes and stereotypes. European Review of Social Psychology, 18(1):36–88. Vinodkumar Prabhakaran, Ben Hutchinson, and Mar­ garet Mitchell. 2019. Perturbation sensitivity analy­ sis to detect unintended model biases. In Proceed­ ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter­ national Joint Conference on Natural Language Pro­ cessing (EMNLP-IJCNLP), Hong Kong, China. As­ sociation for Computational Linguistics. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Com­ putational Linguistics, pages 1668–1678, Florence, Italy. Association for Computational Linguistics. Katrina Scior. 2011. Public awareness, attitudes and beliefs regarding intellectual disability: A system­ atic review. Research in developmental disabilities, 32(6):2164–2182. Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 5498 2019. Fairness and abstraction in sociotechnical sys­ tems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 59–68. Rob Speer. 2017. Conceptnet numberbatch 17.04: bet­ ter, less-stereotyped word vectors. ConceptNet blog. April, 24. Andreas Veglis. 2014. Moderation techniques for so­ cial media content. In International Conference on Social Computing and Social Media. Springer. Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detec­ tion. In Proceedings of the Third Workshop on Abu­ sive Language Online, pages 80–93, Florence, Italy. Association for Computational Linguistics. Rob Voigt, David Jurgens, Vinodkumar Prabhakaran, Dan Jurafsky, and Yulia Tsvetkov. 2018. RtGen­ der: A corpus for studying differential responses to gender. In Proceedings of the Eleventh Interna­ tional Conference on Language Resources and Eval­ uation (LREC-2018), Miyazaki, Japan. European Languages Resources Association (ELRA). A Appendices A.1 Expressions for Disability Table 6 shows the “recommended” phrases that were used in the experiments, based on guidelines published by the Anti-Defamation League, SIGAC­ CESS and the ADA National Network. Table 7 shows the “non-recommended” phrases that were used. The grouping of the phrases into “categories” was done by the authors. A.2 Tabular versions of results In order to facilitate different modes of accessibil­ ity, we here include results from the experiments in table form in Table 4 and Table 5. Category Freq. of negative sentiment score CEREBRAL_PALSY 0.34 CHRONIC_ILLNESS 0.19 COGNITIVE 0.14 DOWNS_SYNDROME 0.09 EPILEPSY 0.16 HEARING 0.28 MENTAL_HEALTH 0.19 MOBILITY 0.35 PHYSICAL 0.23 SHORT_STATURE 0.34 SIGHT 0.29 UNSPECIFIED 0.2 WITHOUT 0.18 Table 4: Frequency with which top-10 word sugges­ tions from BERT language model produce negative sen­ timent score when using recommended phrases. A.3 Text classification analyses for individual phrases Figures 3 and 4 show the sensitivity of the toxicity and sentiment models to individual phrases. A.4 Additional details of BERT analysis We used seven hand-crafted query templates of the form ‘<phrase> is ’, based on gender-neutral references to friends and family: ‘a person’, ‘my child’, ‘my sibling’, ‘my parent’, ‘my child’, ‘my partner’, ‘my spouse’, ‘my friend’. Each template is subsequently perturbed with the set of recom­ mended disability phrases. Table 8 shows the words predicted in the BERT fill-in-the-blank analysis on sentences containing disability terms that produced negative sentence scores when inserted into the sentence ‘A person is .’ Three negative sentiment words — ’disqual­ ified’, ’excluded’, and ’registered’ — were also produced for sentences without disability phrases, and hence are omitted from this table. Figure 5 plots the sentiment score of negativesentiment scoring words against the frequency with which the words were predicted. Frequencies are calculated over the full set of sentences perturbed with disability terms. 5499 Toxicity (higher=more toxic) Sentiment (lower=more negative) Category Recommended Non-recommended Recommended Non-recommended CEREBRAL_PALSY -0.02 0.08 -0.06 -0.02 CHRONIC_ILLNESS 0.03 0.01 -0.09 -0.27 COGNITIVE -0.00 0.12 -0.02 -0.02 DOWNS_SYNDROME 0.02 0.14 -0.14 -0.01 EPILEPSY -0.01 0.02 -0.03 -0.03 HEARING 0.03 0.12 -0.02 -0.09 MENTAL_HEALTH 0.02 0.07 -0.03 -0.15 MOBILITY -0.01 0.03 -0.11 -0.03 PHYSICAL -0.00 0.02 -0.02 -0.00 SHORT_STATURE 0.02 0.06 -0.01 -0.03 SIGHT 0.04 0.03 -0.02 -0.03 UNSPECIFIED 0.00 0.04 -0.05 -0.10 WITHOUT -0.00 0.00 -0.05 -0.02 Aggregate 0.01 0.06 -0.04 -0.06 Table 5: Average change in NLP model score when substituting a recommended phrases, or non-recommended phrase for a person with a disability, compared to using a pronoun. Many recommended phrases around disabil­ ity are associated with toxicity/negativity, which might result in innocuous sentences discussing disability being penalized. Category Phrases HEARING a deaf person, a person who is deaf SIGHT a blind person UNSPECIFIED a person with a disability MOBILITY a person in a wheelchair, a wheelchair user, a person who walks with a limp CEREBRAL_PALSY a person with cerebral palsy MENTAL_HEALTH a person with a mental illness, a person with depression, a person with a mental health disability, a person with schizophrenia EPILEPSY a person with epilepsy PHYSICAL a person with spinal curvature, a person with paraplegia, a person with quadriple­ gia CHRONIC_ILLNESS a person who is chronically ill SHORT_STATURE a short-statured person, a little person COGNITIVE a person with dyslexia, a person with ADHD DOWNS_SYNDROME a person with Down’s syndrome WITHOUT a person without a disability Table 6: “Recommended” phrases used for analyzing the sensitivity of machine learning models to mentions of disability Category Phrases HEARING the deaf, a deaf mute person, a deaf and dumb person SIGHT a sight deficient person, a person with sight problems UNSPECIFIED the disabled, a disabled person, a differently abled person, a handi-capable person, a crippled person, a cripple, a deformed person MOBILITY a person restricted to a wheelchair, a person confined to a wheelchair, a wheelchair-bound person, a handicapped person, a lame person CEREBRAL_PALSY a cerebral-palsied person MENTAL_HEALTH an insane person, an emotionally disturbed person EPILEPSY an epileptic person PHYSICAL a paraplegic, a quadriplegic, a physically challenged person, a hunchbacked person CHRONIC_ILLNESS an invalid SHORT_STATURE a midget, a dwarf COGNITIVE a retarded person, a deranged person, a deviant person, a demented person, a slow learner DOWNS_SYNDROME a mongoloid WITHOUT a normal person Table 7: “Non-recommended’ phrases used for analyzing the sensitivity of machine learning models to mentions of disability. Despite the offensive and potentially triggering nature of some these phrases, we include them here i) to enable repeatability of analyses, and ii) to document the mapping from phrases to categories that we used. 5500 a wheelchair us!r i ... : a person with cerebral pals 1 a person with a'4rnenta l he i lth disability a person with ~ ressio ~ : a person with dyslexia : : a person who walks w,tt, a limp a person i ., : a person with ep1le~sy a person in a wh ~ lcha ir .... a person with spih al curvature a person witho u4Pta disabi lity a person with pf; aplegia a person with q"!a dnplegia a person with ~ sab ility a person wit ~ ~ HD a person wit ~ ~ zophrenia a little peri on : a pe rson who IS cjeaf a pt rson with Down's syndrome : ..... : a person who 1s chronically ill : . ..... a short•stature ~ erson a deaf person ' j::::: a blind person I I I I 1 a person ;-vith a men ~al illness - 0.04 - 0 .02 0.00 0.02 0 .04 0.06 0.08 0 .10 sco re_d iff a person who wal s with a limp a person with 6ot'n ·s syndrome ' ' - 0.25 - 0 .20 ' a person who is"1tu onically ill a person with d* ssion a person with c~ ra l palsy a person with a-Jlrs ability a person without a disability a wheelchair us;:' a person with epiTepsy a person in a wti~khair a person with a~ nta l h4alth disab ility a person with d~ x1a a person a deaf person + a blind person _. a person who is ~ af a person with ADHD a person with s~ al cp rvature a short-statured pers 4n - : a person with parapl E:Qia - : a person with quadn Pleg1a a person with sc"fflz o~hrenia -a little person : a p~rson with a ~ ntal illness - 0.15 - 0.10 - 0 .05 0.00 sco re_d iff Figure 3: Average change in toxicity model score when substituting each phrase, compared to using a pronoun Figure 4: Average change in sentiment model score when substituting each phrase, compared to using a pronoun 5501 0.30 0.25 0.05 ~L~c,.~ed abnormal 0.00 -0 .B cursed bad reported killed unavaiTable ~R8ll-WJf arre sted suicidal barred depressed sick punished forbidden dying difficult deported ~Hwn afro id -0 .7 ~ .6 ~ .5 ~A ~ .3 ~ .2 Sentimen t score fo r phrase 'A perso n is injured not -0 .1 0.0 Figure 5: Words produced by BERT in the fill-in-the-blank analysis for sentences containing disability terms that produced negative sentiment scores. Negative sentiment words that were produced by BERT fill-in-the-blank given sentences without disability terms are excluded from the plot. BERT fill-in-the-blank predictions Frequency BERT fill-in-the-blank predictions Sentiment score punished 29.2% abnormal -0.8 forbidden 9.3% rejected -0.8 cursed 8.7% illegal -0.8 banned 8.7% banned -0.8 sick 6.2% suicidal -0.7 injured 6.2% unavailable -0.7 bad 6.2% impossible -0.6 not 3.1% dangerous -0.6 reported 2.5% reported -0.6 rejected 2.5% barred -0.6 Table 9: Negative-sentiment words produced by BERT Table 8: Words produced by BERT in the fill-in-the­ in the fill-in-the-blank experiment were produced by blank experiment that produced the most negative senBERT in the highest frequency, amongst sentences per­ timent score of the phrase ‘A person is <w>’. Negative turbed to include disability terms. Negative sentiment sentiment words that were produced by BERT fill-in­ words that were produced by BERT fill-in-the-blank the-blank given sentences without disability terms are given sentences without disability terms are excluded excluded from the table. from the table.
2020
487
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5502–5515 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5502 Towards Debiasing Sentence Representations Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, Louis-Philippe Morency Machine Learning Department and Language Technologies Institute Carnegie Mellon University [email protected] Abstract As natural language processing methods are increasingly deployed in real-world scenarios such as healthcare, legal systems, and social science, it becomes necessary to recognize the role they potentially play in shaping social biases and stereotypes. Previous work has revealed the presence of social biases in widely used word embeddings involving gender, race, religion, and other social constructs. While some methods were proposed to debias these word-level embeddings, there is a need to perform debiasing at the sentence-level given the recent shift towards new contextualized sentence representations such as ELMo and BERT. In this paper, we investigate the presence of social biases in sentence-level representations and propose a new method, SENTDEBIAS, to reduce these biases. We show that SENT-DEBIAS is effective in removing biases, and at the same time, preserves performance on sentence-level downstream tasks such as sentiment analysis, linguistic acceptability, and natural language understanding. We hope that our work will inspire future research on characterizing and removing social biases from widely adopted sentence representations for fairer NLP. 1 Introduction Machine learning tools for learning from language are increasingly deployed in real-world scenarios such as healthcare (Velupillai et al., 2018), legal systems (Dale, 2019), and computational social science (Bamman et al., 2016). Key to the success of these models are powerful embedding layers which learn continuous representations of input information such as words, sentences, and documents from large amounts of data (Devlin et al., 2019; Mikolov et al., 2013). Although word-level embeddings (Pennington et al., 2014; Mikolov et al., 2013) are highly informative features useful for a variety of tasks in Natural Language Processing (NLP), recent work has shown that word-level embeddings reflect and propagate social biases present in training corpora (Lauscher and Glavaˇs, 2019; Caliskan et al., 2017; Swinger et al., 2019; Bolukbasi et al., 2016). Machine learning systems that incorporate these word embeddings can further amplify biases (Sun et al., 2019b; Zhao et al., 2017; Barocas and Selbst, 2016) and unfairly discriminate against users, particularly those from disadvantaged social groups. Fortunately, researchers working on fairness and ethics in NLP have devised methods towards debiasing these word representations for both binary (Bolukbasi et al., 2016) and multiclass (Manzini et al., 2019) bias attributes such as gender, race, and religion. More recently, sentence-level representations such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and GPT (Radford et al., 2019) have become the preferred choice for text sequence encoding. When compared to word-level representations, these models have achieved better performance on multiple tasks in NLP (Wu and Dredze, 2019), multimodal learning (Zellers et al., 2019; Sun et al., 2019a), and grounded language learning (Urbanek et al., 2019). As their usage proliferates across various real-world applications (Huang et al., 2019; Alsentzer et al., 2019), it becomes necessary to recognize the role they play in shaping social biases and stereotypes. Debiasing sentence representations is difficult for two reasons. Firstly, it is usually unfeasible to fully retrain many of the state-of-the-art sentencebased embedding models. In contrast with conventional word-level embeddings such as GloVe (Pennington et al., 2014) and word2vec (Mikolov et al., 2013) which can be retrained on a single machine within a few hours, the best sentence encoders such as BERT (Devlin et al., 2019), and GPT (Radford et al., 2019) are trained on massive amounts of text 5503 data over hundreds of machines for several weeks. As a result, it is difficult to retrain a new sentence encoder whenever a new source of bias is uncovered from data. We therefore focus on post-hoc debiasing techniques which add a post-training debiasing step to these sentence representations before they are used in downstream tasks (Bolukbasi et al., 2016; Manzini et al., 2019). Secondly, sentences display large variety in how they are composed from individual words. This variety is driven by many factors such as topics, individuals, settings, and even differences between spoken and written text. As a result, it is difficult to scale traditional word-level debiasing approaches (which involve bias-attribute words such as man, woman) (Bolukbasi et al., 2016) to sentences. Related Work: Although there has been some recent work in measuring the presence of bias in sentence representations (May et al., 2019; Basta et al., 2019), none of them have been able to successfully remove bias from pretrained sentence representations. In particular, Zhao et al. (2019), Park et al. (2018), and Garg et al. (2019) are not able to perform post-hoc debiasing and require changing the data or underlying word embeddings and retraining which is costly. Bordia and Bowman (2019) only study word-level language models and also requires re-training. Finally, Kurita et al. (2019) only measure bias on BERT by extending the word-level Word Embedding Association Test (WEAT) (Caliskan et al., 2017) metric in a manner similar to May et al. (2019). In this paper, as a compelling step towards generalizing debiasing methods to sentence representations, we capture the various ways in which biasattribute words can be used in natural sentences. This is performed by contextualizing bias-attribute words using a diverse set of sentence templates from various text corpora into bias-attribute sentences. We propose SENT-DEBIAS, an extension of the HARD-DEBIAS method (Bolukbasi et al., 2016), to debias sentences for both binary1 and multiclass bias attributes spanning gender and religion. Key to our approach is the contextualization step in which bias-attribute words are converted into bias-attribute sentences by using a diverse set 1Although we recognize that gender is non-binary and there are many important ethical principles in the design, ascription of categories/variables to study participants, and reporting of results in studying gender as a variable in NLP (Larson, 2017), for the purpose of this study, we follow existing research and focus on female and male gendered terms. Binary Gender man, woman he, she father, mother son, daughter Multiclass Religion jewish, christian, muslim torah, bible, quran synagogue, church, mosque rabbi, priest, imam Table 1: Examples of word pairs to estimate the binary gender bias subspace and the 3-class religion bias subspace in our experiments. of sentence templates from text corpora. Our experimental results demonstrate the importance of using a large number of diverse sentence templates when estimating bias subspaces of sentence representations. Our experiments are performed on two widely popular sentence encoders BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018), showing that our approach reduces the bias while preserving performance on downstream sequence tasks. We end with a discussion about possible shortcomings and present some directions for future work towards accurately characterizing and removing social biases from sentence representations for fairer NLP. 2 Debiasing Sentence Representations Our proposed method for debiasing sentence representations, SENT-DEBIAS, consists of four steps: 1) defining the words which exhibit bias attributes, 2) contextualizing these words into bias attribute sentences and subsequently their sentence representations, 3) estimating the sentence representation bias subspace, and finally 4) debiasing general sentences by removing the projection onto this bias subspace. We summarize these steps in Algorithm 1 and describe the algorithmic details in the following subsections. 1) Defining Bias Attributes: The first step involves identifying the bias attributes and defining a set of bias attribute words that are indicative of these attributes. For example, when characterizing bias across the male and female genders, we use the word pairs (man, woman), (boy, girl) that are indicative of gender. When estimating the 3class religion subspace across the Jewish, Christian, and Muslim religions, we use the tuples (Judaism, Christianity, Islam), (Synagogue, Church, Mosque). Each tuple should consist of words that have an equivalent meaning except for the bias attribute. In general, for d-class bias attributes, the set of words forms a dataset D = {(w(i) 1 ,...,w(i) d )}m i=1 of m entries where each entry (w1,...,wd) is a d-tuple of words that are each representative of a particular 5504 Algorithm 1 SENT-DEBIAS: a debiasing algorithm for sentence representations. SENT-DEBIAS: 1: Initialize (usually pretrained) sentence encoder Mθ. 2: Define bias attributes (e.g. binary gender gm and gf). 3: Obtain words D = {(w(i) 1 ,...,w(i) d )}m i=1 indicative of bias attributes (e.g. Table 1). 4: S = ⋃m i=1 CONTEXTUALIZE(w(i) 1 ,...,w(i) d ) = {(s(i) 1 ,...,s(i) d )}n i=1 // words into sentences 5: for j ∈[d] do 6: Rj = {Mθ(s(i) j )}n i=1 // get sentence representations 7: end for 8: V = PCAk (⋃d j=1 ⋃w∈Rj (w −µi)) // compute bias subspace 9: for each new sentence representation h do 10: hV = ∑k j=1⟨h,vj⟩vj // project onto bias subspace 11: ˆh = h −hV // subtract projection 12: end for bias attribute (we drop the superscript (i) when it is clear from the context). Table 1 shows some bias attribute words that we use to estimate the bias subspaces for binary gender and multiclass religious attributes (full pairs and triplets in appendix). Existing methods that investigate biases tend to operate at the word-level which simplifies the problem since the set of tokens is bounded by the vocabulary size (Bolukbasi et al., 2016). This simple approach has the advantage of identifying the presence of biases using predefined sets of word associations, and estimate the bias subspace using the predefined bias word pairs. On the other hand, the potential number of sentences are unbounded which makes it harder to precisely characterize the sentences in which bias is present or absent. Therefore, it is not trivial to directly convert these words to sentences to obtain a representation from pretrained sentence encoders. In the subsection below, we describe our solution to this problem. 2) Contextualizing Words into Sentences: A core step in our SENT-DEBIAS approach involves contextualizing the predefined sets of bias attribute words to sentences so that sentence encoders can be applied to obtain sentence representations. One option is to use a simple template-based design to simplify the contextual associations a sentence encoder makes with a given term, similar to how May et al. (2019) proposed to measure (but not remove) bias in sentence representations. For example, each word can be slotted into templates such as “This is <word>.”, “I am a <word>.”. We take an alternative perspective and hypothesize that for a given bias attribute (e.g. gender), a single bias subspace exists across all possible sentence representations. For example, the bias subspace should be the same in the sentences “The boy is coding.”, “The girl is coding.”, “The boys at the playground.”, and “The girls at the playground.”. In order to estimate this bias subspace accurately, it becomes important to use sentence templates that are as diverse as possible to account for all occurrences of that word in surrounding contexts. In our experiments, we empirically demonstrate that estimating the bias subspace using a large and diverse set of templates from text corpora leads to improved bias reduction as compared to using simple templates. To capture the variety in syntax across sentences, we use large text corpora to find naturally occurring sentences. These naturally occurring sentences therefore become our sentence “templates”. To use these templates to generate new sentences, we replace words representing a single class with another. For example, a sentence containing a male term “he” is used to generate a new sentence but replacing it with the corresponding female term “she”. This contextualization process is repeated for all word tuples in the bias attribute word dataset D, eventually contextualizing the given set of bias attribute words into bias attribute sentences. Since there are multiple templates which a bias attribute word can map to, the contextualization process results in a bias attribute sentence dataset S which is substantially larger in size: S = m ⋃ i=1 CONTEXTUALIZE(w(i) 1 ,...,w(i) d ) (1) = {(s(i) 1 ,...,s(i) d )}n i=1, ∣S∣> ∣D∣ (2) 5505 Dataset Type Topics Formality Length Samples WikiText-2 written everything formal 24.0 “the mailing contained information about their history and advised people to read several books, which primarily focused on {jewish/christian/muslim} history” SST written movie reviews informal 19.2 “{his/her} fans walked out muttering words like horrible and terrible, but had so much fun dissing the film that they didn’t mind the ticket cost.” Reddit written politics, electronics, relationships informal 13.6 “roommate cut my hair without my consent, ended up cutting {himself/herself } and is threatening to call the police on me” MELD spoken comedy TV-series informal 8.1 “that’s the kind of strength that I want in the {man/woman} I love!” POM spoken opinion videos informal 16.0 “and {his/her} family is, like, incredibly confused” Table 2: Comparison of the various datasets used to find natural sentence templates. Length represents the average length measured by the number of words in a sentence. Words in italics indicate the words used to estimating the binary gender or multiclass religion subspaces, e.g. (man, woman), (jewish, christian, muslim). This demonstrates the variety in our naturally occurring sentence templates in terms of topics, formality, and spoken/written text. where CONTEXTUALIZE(w1,...,wd) is a function which returns a set of sentences obtained by matching words with naturally-occurring sentence templates from text corpora. Our text corpora originate from the following five sources: 1) WikiText-2 (Merity et al., 2017a), a dataset of formally written Wikipedia articles (we only use the first 10% of WikiText-2 which we found to be sufficient to capture formally written text), 2) Stanford Sentiment Treebank (Socher et al., 2013), a collection of 10000 polarized written movie reviews, 3) Reddit data collected from discussion forums related to politics, electronics, and relationships, 4) MELD (Poria et al., 2019), a large-scale multimodal multi-party emotional dialog dataset collected from the TV-series Friends, and 5) POM (Park et al., 2014), a dataset of spoken review videos collected across 1,000 individuals spanning multiple topics. These datasets have been the subject of recent research in language understanding (Merity et al., 2017b; Liu et al., 2019; Wang et al., 2019) and multimodal human language (Liang et al., 2018, 2019). Table 2 summarizes these datasets. We also give some examples of the diverse templates that occur naturally across various individuals, settings, and in both written and spoken text. 3) Estimating the Bias Subspace: Now that we have contextualized all m word d-tuples in D into n sentence d-tuples S, we pass these sentences through a pre-trained sentence encoder (e.g. BERT, ELMo) to obtain sentence representations. Suppose we have a pre-trained encoder Mθ with parameters θ. Define Rj,j ∈[d] as sets that collect all sentence representations of the j-th entry in the d-tuple, Rj = {Mθ(s(i) j )}n i=1. Each of these sets Rj defines a vector space in which a specific bias attribute is present across its contexts. For example, when dealing with binary gender bias, R1 (likewise R2) defines the space of sentence representations with a male (likewise female) context. The only difference between the representations in R1 versus R2 should be the specific bias attribute present. Define the mean of set j as µj = 1 ∣Rj∣∑w∈Rj w. The bias subspace V = {v1,...,vk} is given by the first k components of principal component analysis (PCA) (Abdi and Williams, 2010): V = PCAk ⎛ ⎝ d ⋃ j=1 ⋃ w∈Rj (w −µj)⎞ ⎠. (3) k is a hyperparameter in our experiments which determines the dimension of the bias subspace. Intuitively, V represents the top-k orthogonal directions which most represent the bias subspace. 4) Debiasing: Given the estimated bias subspace V, we apply a partial version of the HARDDEBIAS algorithm (Bolukbasi et al., 2016) to remove bias from new sentence representations. Taking the example of binary gender bias, the HARDDEBIAS algorithm consists of two steps: 4a) Neutralize: Bias components are removed from sentences that are not gendered and should not contain gender bias (e.g., I am a doctor., That nurse is taking care of the patient.) by removing the projection onto the bias subspace. More formally, given a representation h of a sentence and the previously estimated gender subspace V = {v1,...,vk}, the debiased representation ˆh is given by first obtaining hV, the projection of h onto the bias subspace V before subtracting hV from h. This results in a vector that is orthogonal to the bias subspace 5506 V and therefore contains no bias: hV = k ∑ j=1 ⟨h,vj⟩vj, (4) ˆh = h −hV. (5) 4b) Equalize: Gendered representations are centered and their bias components are equalized (e.g. man and woman should have bias components in opposite directions, but of the same magnitude). This ensures that any neutral words are equidistant to biased words with respect to the bias subspace. In our implementation, we skip this Equalize step because it is hard to identify all or even the majority of sentence pairs to be equalized due to the complexity of natural sentences. For example, we can never find all the sentences that man and woman appear in to equalize them appropriately. Note that even if the magnitudes of sentence representations are not normalized, the debiased representations are still pointing in directions orthogonal to the bias subspace. Therefore, skipping the equalize step still results in debiased sentence representations as measured by our definition of bias. 3 Experiments We test the effectiveness of SENT-DEBIAS at removing biases and retaining performance on downstream tasks. All experiments are conducted on English terms and downstream tasks. We acknowledge that biases can manifest differently across different languages, in particular gendered languages (Zhou et al., 2019), and emphasize the need for future extensions in these directions. Experimental details are in the appendix and code is released at https://github.com/pliang279/ sent_debias. 3.1 Evaluating Biases Biases are traditionally measured using the Word Embedding Association Test (WEAT) (Caliskan et al., 2017). WEAT measures bias in word embeddings by comparing two sets of target words to two sets of attribute words. For example, to measure social bias surrounding genders with respect to careers, one could use the target words programmer, engineer, scientist, and nurse, teacher, librarian, and the attribute words man, male, and woman, female. Unbiased word representations should display no difference between the two target words in terms of their relative similarity to the two sets of attribute words. The relative similarity as measured by WEAT is commonly known as the effect size. An effect size with absolute value closer to 0 represents lower bias. To measure the bias present in sentence representations, we use the method as described in May et al. (2019) which extended WEAT to the Sentence Encoder Association Test (SEAT). For a given set of words for a particular test, words are converted into sentences using a template-based method. The WEAT metric can then be applied for fixed-length, pre-trained sentence representations. To measure bias over multiple classes, we use the Mean Average Cosine similarity (MAC) metric which extends SEAT to a multiclass setting (Manzini et al., 2019). For the binary gender setting, we use words from the Caliskan Tests (Caliskan et al., 2017) which measure biases in common stereotypes surrounding gendered names with respect to careers, math, and science (Greenwald et al., 2009). To evaluate biases in the multiclass religion setting, we modify the Caliskan Tests used in May et al. (2019) with lexicons used by Manzini et al. (2019). 3.2 Debiasing Setup We first describe the details of applying SENTDEBIAS on two widely-used sentence encoders: BERT2 (Devlin et al., 2019) and ELMo (Peters et al., 2018). Note that the pre-trained BERT encoder must be fine-tuned on task-specific data. This implies that the final BERT encoder used during debiasing changes from task to task. To account for these differences, we report two sets of metrics: 1) BERT: simply debiasing the pre-trained BERT encoder, and 2) BERT post task: first fine-tuning BERT and post-processing (i.e. normalization) on a specific task before the final BERT representations are debiased. We apply SENT-DEBIAS on BERT fine-tuned on two single sentence datasets, Stanford Sentiment Treebank (SST-2) sentiment classification (Socher et al., 2013) and Corpus of Linguistic Acceptability (CoLA) grammatical acceptability judgment (Warstadt et al., 2018). It is also possible to apply BERT (Devlin et al., 2019) on downstream tasks that involve two sentences. The output sentence pair representation can also be debiased (after fine-tuning and normalization). We test the effect of SENT-DEBIAS on Question Natural Language Inference (QNLI) (Wang et al., 2018) which converts the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) into a binary classification task. These results are 2We used uncased BERT-Base throughout all experiments. 5507 Test BERT BERT post SST-2 BERT post CoLA BERT post QNLI ELMo C6: M/F Names, Career/Family C6b: M/F Terms, Career/Family C7: M/F Terms, Math/Arts C7b: M/F Names, Math/Arts C8: M/F Terms, Science/Arts C8b: M/F Names, Science/Arts Multiclass Caliskan +0.477 →−0.096 +0.108 →−0.437 +0.253 →+0.194 +0.254 →+0.194 +0.399 →−0.075 +0.636 →+0.540 +0.035 →+0.379 +0.036 →−0.109 +0.010 →−0.057 −0.219 →−0.221 +1.153 →−0.755 +0.103 →+0.081 −0.222 →−0.047 +1.200 →+1.000 −0.009 →+0.149 +0.199 →+0.186 +0.268 →+0.311 +0.150 →+0.308 +0.425 →−0.163 +0.032 →−0.192 +0.243 →+0.757 −0.261 →−0.054 −0.155 →−0.004 −0.584 →−0.083 −0.581 →−0.629 −0.087 →+0.716 −0.521 →−0.443 − −0.380 →−0.298 −0.345 →−0.327 −0.479 →−0.487 +0.016 →−0.013 −0.296 →−0.327 +0.554 →+0.548 − Table 3: Debiasing results on BERT and ELMo sentence representations. First six rows measure binary SEAT effect sizes for sentence-level tests, adapted from Caliskan tests. SEAT scores closer to 0 represent lower bias. CN: test from Caliskan et al. (2017) row N. The last row measures bias in a multiclass religion setting using MAC (Manzini et al., 2019) before and after debiasing. MAC score ranges from 0 to 2 and closer to 1 represents lower bias. Results are reported as x1 →x2 where x1 represents score before debiasing and x2 after, with lower bias score in bold. Our method reduces bias of BERT and ELMo for the majority of binary and multiclass tests. reported as BERT post SST-2, BERT post CoLA, and BERT post QNLI respectively. For ELMo, the encoder stays the same for downstream tasks (no fine-tuning on different tasks) so we just debias the ELMo sentence encoder. We report this result as ELMo. 3.3 Debiasing Results We present these debiasing results in Table 3, and see that for both binary gender bias and multiclass religion bias, our proposed method reduces the amount of bias as measured by the given tests and metrics. The reduction in bias is most pronounced when debiasing the pre-trained BERT encoder. We also observe that simply fine-tuning the BERT encoder for specific tasks also reduces the biases present as measured by the Caliskan tests, to some extent. However, fine-tuning does not lead to consistent decreases in bias and cannot be used as a standalone debiasing method. Furthermore, finetuning does not give us control over which type of bias to control for and may even amplify bias if the task data is skewed towards particular biases. For example, while the bias effect size as measured by Caliskan test C7 decreases from +0.542 to −0.033 and +0.288 after fine-tuning on SST-2 and CoLA respectively, the effect size as measured by the multiclass Caliskan test increases from +0.035 to +1.200 and +0.243 after fine-tuning on SST-2 and CoLA respectively. 3.4 Comparison with Baselines We compare to three baseline methods for debiasing: 1) FastText derives debiased sentence embeddings using an average of debiased FastText word embeddings (Bojanowski et al., 2016) using wordlevel debiasing methods (Bolukbasi et al., 2016), 2) Debiasing Method Ave. Abs. Effect Size BERT original (Devlin et al., 2019) +0.354 FastText (Bojanowski et al., 2016) +0.565 BERT word (Bolukbasi et al., 2016) +0.861 BERT simple (May et al., 2019) +0.298 SENT-DEBIAS BERT (ours) +0.256 Table 4: Comparison of various debiasing methods on sentence embeddings. FastText (Bojanowski et al., 2016) (and BERT word) derives debiased sentence embeddings with an average of debiased FastText (and BERT) word embeddings using word-level debiasing methods (Bolukbasi et al., 2016). BERT simple adapts May et al. (2019) by using simple templates to debias BERT representations. SENT-DEBIAS BERT represents our method using diverse templates. We report the average absolute effect size across all Caliskan tests. Average scores closer to 0 represent lower bias. BERT word obtains a debiased sentence representation from average debiased BERT word representations, again debiased using word-level debiasing methods (Bolukbasi et al., 2016), and 3) BERT simple adapts May et al. (2019) by using simple templates to debias BERT sentence representations. From Table 4, SENT-DEBIAS achieves a lower average absolute effect size and outperforms the baselines based on debiasing at the word-level and averaging across all words. This indicates that it is not sufficient to debias words only and that biases in a sentence could arise from their debiased word constituents. In comparison with BERT simple, we observe that using diverse sentence templates obtained from naturally occurring written and spoken text makes a difference on how well we can remove biases from sentence representations. This supports our hypothesis that using increasingly diverse templates estimates a bias subspace that generalizes to different words in their context. 5508 Figure 1: Influence of the number of templates on the effectiveness of bias removal on BERT fine-tuned on SST-2 (left) and BERT fine-tuned on QNLI (right). All templates are from WikiText-2. The solid line represents the mean over different combinations of domains and the shaded area represents the standard deviation. As increasing subsets of data are used, we observe a decreasing trend and lower variance in average absolute effect size. 3.5 Effect of Templates We further test the importance of sentence templates through two experiments. 1) Same Domain, More Quantity: Firstly, we ask: how does the number of sentence templates impact debiasing performance? To answer this, we begin with the largest domain WikiText-2 (13750 templates) and divide it into 5 partitions each of size 2750. We collect sentence templates using all possible combinations of the 5 partitions and apply these sentence templates in the contextualization step of SENT-DEBIAS. We then estimate the corresponding bias subspace, debias, and measure the average absolute values of all 6 SEAT effect sizes. Since different combinations of the 5 partitions result in a set of sentence templates of different sizes (20%, 40%, 60%, 80%, 100%), this allows us to see the relationship between size and debiasing performance. Combinations with the same percentage of data are grouped together and for each group we compute the mean and standard deviation of the average absolute effect sizes. We perform the above steps to debias BERT fine-tuned on SST-2 and QNLI and plot these results in Figure 1. Please refer to the appendix for experiments with BERT fine-tuned on CoLA, which show similar results. For BERT fine-tuned on SST-2, we observe a decreasing trend in the effect size as increasing subsets of the data is used. For BERT fine-tuned on QNLI, there is a decreasing trend that quickly tapers off. However, using a larger number of templates reduces the variance in average absolute effect size and improves the stability of the SENTDEBIAS algorithm. These observations allow us to conclude the importance of using a large number of templates from naturally occurring text corpora. 2) Same Quantity, More Domains: How does the number of domains that sentence templates are extracted from impact debiasing performance? We fix the total number of sentence templates to be 1080 and vary the number of domains these templates are drawn from. Given a target number k, we first choose k domains from our Reddit, SST, POM, WikiText-2 datasets and randomly sample 1080/k templates from each of the k selected domains. We construct 1080 templates using all possible subsets of k domains and apply them in the contextualization step of SENT-DEBIAS. We estimate the corresponding bias subspace, debias and measure the average absolute SEAT effect sizes. To see the relationship between the number of domains k and debiasing performance, we group combinations with the same number of domains (k) and for each group compute the mean and standard deviation of the average absolute effect sizes. This experiment is also performed for BERT fine-tuned on SST-2 and QNLI datasets. Results are plotted in Figure 2. We draw similar observations: there is a decreasing trend in effect size as templates are drawn from more domains. For BERT fine-tuned on QNLI, using a larger number of domains reduces the variance in effect size and improves stability of the algorithm. Therefore, it is important to use a large variety of templates across different domains. 3.6 Visualization As a qualitative analysis of the debiasing process, we visualize how the distances between sentence representations shift after the debiasing process is performed. We average the sentence representations of a concept (e.g. man, woman, science, art) across its contexts (sentence templates) and plot the t-SNE (van der Maaten and Hinton, 2008) 5509 Figure 2: Influence of the number of template domains on the effectiveness of bias removal on BERT fine-tuned on SST-2 (left) and BERT fine-tuned on QNLI (right). The domains span the Reddit, SST, POM, WikiText-2 datasets. The solid line is the mean over different combinations of domains and the shaded area is the standard deviation. As more domains are used, we observe a decreasing trend and lower variance in average absolute effect size. Pretrained BERT embeddings Debiased BERT embeddings Figure 3: t-SNE plots of average sentence representations of a word across its sentence templates before (left) and after (right) debiasing. After debiasing, non gender-specific concepts (in black) are more equidistant to genders. embeddings of these points in 2D space. From Figure 3, we observe that BERT average representations of science and technology start off closer to man while literature and art are closer to woman. After debiasing, non gender-specific concepts (e.g science, art) become more equidistant to both man and woman average concepts. 3.7 Performance on Downstream Tasks To ensure that debiasing does not hurt the performance on downstream tasks, we report the performance of our debiased BERT and ELMo on SST-2 and CoLA by training a linear classifier on top of debiased BERT sentence representations. From Table 5, we observe that downstream task performance show a small decrease ranging from 1 −3% after the debiasing process. However, the performance of ELMo on SST-2 increases slightly from 89.6 to 90.0. We hypothesize that these differences in performance are due to the fact that CoLA tests for linguistic acceptability so it is more concerned with low-level syntactic structure such as verb usage, grammar, and tenses. As a result, changes in sentence representations across bias directions may impact its performance more. For example, sentence representations after the gender debiasing steps may display a mismatch between gendered pronouns and the sentence context. For SST, it has been shown that sentiment analysis datasets have labels that correlate with gender information and therefore contain gender bias (Kiritchenko and Mohammad, 2018). As a result, we do expect possible decreases in accuracy after debiasing. Finally, we test the effect of SENT-DEBIAS on QNLI by training a classifier on top of debiased BERT sentence pair representations. We observe little impact on task performance: our debiased BERT fine-tuned on QNLI achieves 90.6% performance as compared to the 91.3% we obtained without debiasing. 4 Discussion and Future Work Firstly, we would like to emphasize that both the WEAT, SEAT, and MAC metrics are not perfect since they only have positive predictive ability: they can be used to detect the presence of biases but not their absence (Gonen and Goldberg, 2019). This calls for new metrics that evaluate biases and can scale to the various types of sentences appearing across different individuals, topics, and in both 5510 Test BERT debiased BERT ELMo debiased ELMo SST-2 92.7 89.1 89.6 90.0 CoLA 57.6 55.4 39.1 37.1 QNLI 91.3 90.6 Table 5: We test the effect of SENT-DEBIAS on both single sentence (BERT and ELMo on SST-2, CoLA) and paired sentence (BERT on QNLI) downstream tasks. The performance (higher is better) of debiased BERT and ELMo sentence representations on downstream tasks is not hurt by the debiasing step. spoken and written text. We believe that our positive results regarding contextualizing words into sentences implies that future work can build on our algorithms and tailor them for new metrics. Secondly, a particular bias should only be removed from words and sentences that are neutral to that attribute. For example, gender bias should not be removed from the word “grandmother” or the sentence “she gave birth to me”. Previous work on debiasing word representations tackled this issue by listing all attribute specific words based on dictionary definitions and only debiasing the remaining words. However, given the complexity of natural sentences, it is extremely hard to identify the set of neutral sentences and its complement. Thus, in downstream tasks, we removed bias from all sentences which could possibly harm downstream task performance if the dataset contains a significant number of non-neutral sentences. Finally, a fundamental challenge lies in the fact that these representations are trained without explicit bias control mechanisms on large amounts of naturally occurring text. Given that it becomes infeasible (in standard settings) to completely retrain these large sentence encoders for debiasing (Zhao et al., 2018; Zhang et al., 2018), future work should focus on developing better post-hoc debiasing techniques. In our experiments, we need to re-estimate the bias subspace and perform debiasing whenever the BERT encoder was fine-tuned. It remains to be seen whether there are debiasing methods which are invariant to fine-tuning, or can be efficiently re-estimated as the encoders are fine-tuned. 5 Conclusion This paper investigated the post-hoc removal of social biases from pretrained sentence representations. We proposed the SENT-DEBIAS method that accurately captures the bias subspace of sentence representations by using a diverse set of templates from naturally occurring text corpora. Our experiments show that we can remove biases that occur in BERT and ELMo while preserving performance on downstream tasks. We also demonstrate the importance of using a large number of diverse sentence templates when estimating bias subspaces. Leveraging these developments will allow researchers to further characterize and remove social biases from sentence representations for fairer NLP. Acknowledgements PPL and LPM were supported in part by the National Science Foundation (Awards #1750439, #1722822) and National Institutes of Health. RS was supported in part by US Army, ONR, Apple, and NSF IIS1763562. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, National Institutes of Health, DARPA, and AFRL, and no official endorsement should be inferred. We also acknowledge NVIDIA’s GPU support and the anonymous reviewers for their constructive comments. References Herv´e Abdi and Lynne J. Williams. 2010. Principal component analysis. WIREs Comput. Stat., 2(4):433–459. Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72–78, Minneapolis, Minnesota, USA. Association for Computational Linguistics. David Bamman, A. Seza Do˘gru¨oz, Jacob Eisenstein, Dirk Hovy, David Jurgens, Brendan O’Connor, Alice Oh, Oren Tsur, and Svitlana Volkova. 2016. Proceedings of the first workshop on NLP and computational social science. Solon Barocas and Andrew D Selbst. 2016. Big data’s disparate impact. Calif. L. Rev., 104:671. Christine Basta, Marta R. Costa-juss`a, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 33–39, Florence, Italy. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. 5511 Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Proc. of NIPS, pages 4349–4357. Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 7–15, Minneapolis, Minnesota. Association for Computational Linguistics. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science. Robert Dale. 2019. Law and word order: Nlp in legal tech. Natural Language Engineering, 25(1):211– 217. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, and Alex Beutel. 2019. Counterfactual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’19, page 219–226, New York, NY, USA. Association for Computing Machinery. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614, Minneapolis, Minnesota. Association for Computational Linguistics. Anthony Greenwald, T Poehlman, Eric Luis Uhlmann, and Mahzarin Banaji. 2009. Understanding and using the implicit association test: Iii. meta-analysis of predictive validity. Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. CoRR, abs/1904.05342. Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43–53, New Orleans, Louisiana. Association for Computational Linguistics. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166–172, Florence, Italy. Association for Computational Linguistics. Brian Larson. 2017. Gender as a variable in naturallanguage processing: Ethical considerations. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 1–11, Valencia, Spain. Association for Computational Linguistics. Anne Lauscher and Goran Glavaˇs. 2019. Are we consistently biased? multidimensional analysis of biases in distributional word vectors. In *SEM 2019. Paul Pu Liang, Yao Chong Lim, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2019. Strong and simple baselines for multimodal utterance embeddings. In NAACL-HLT, pages 2599–2609, Minneapolis, Minnesota. Association for Computational Linguistics. Paul Pu Liang, Ziyin Liu, AmirAli Bagher Zadeh, and Louis-Philippe Morency. 2018. Multimodal language analysis with recurrent multistage fusion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605. Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615–621, Minneapolis, Minnesota. Association for Computational Linguistics. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017a. Pointer sentinel mixture 5512 models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017b. Pointer sentinel mixture models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings. Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2799–2804, Brussels, Belgium. Association for Computational Linguistics. Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, and Louis-Philippe Morency. 2014. Computational analysis of persuasiveness in social multimedia: A novel dataset and multimodal prediction approach. In ICMI. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 527– 536, Florence, Italy. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. Chen Sun, Austin Oliver Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019a. Videobert: A joint model for video and language representation learning. CoRR, abs/1904.01766. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019b. Mitigating gender bias in natural language processing: Literature review. In ACL. Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark DM Leiserson, and Adam Tauman Kalai. 2019. What are the biases in my word embedding? In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’19, page 305–311, New York, NY, USA. Association for Computing Machinery. Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktaschel, Douwe Kiela, Arthur Szlam, and Jason Weston. 2019. Learning to speak and act in a fantasy text adventure game. CoRR, abs/1903.03094. Sumithra Velupillai, Hanna Suominen, Maria Liakata, Angus Roberts, Anoop D. Shah, Katherine Morley, David Osborn, Joseph Hayes, Robert Stewart, Johnny Downs, Wendy Chapman, and Rina Dutta. 2018. Using clinical natural language processing for health outcomes research: Overview and actionable suggestions for future advances. Journal of Biomedical Informatics, 88:11 – 19. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In BlackboxNLP. Chenguang Wang, Mu Li, and Alexander J. Smola. 2019. Language models with transformers. CoRR, abs/1904.09408. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics. Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 5513 Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In AIES. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634, Minneapolis, Minnesota. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark. Association for Computational Linguistics. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847–4853, Brussels, Belgium. Association for Computational Linguistics. Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with grammatical gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5276–5284, Hong Kong, China. Association for Computational Linguistics. 5514 A Debiasing Details We provide some details on estimating the bias subspaces and debiasing steps. Bias Attribute Words: Table 6 shows the bias attribute words we used to estimate the bias subspaces for binary gender bias and multiclass religious biases. Datasets: We provide some details on dataset downloading below: 1. WikiText-2 was downloaded from https://github.com/pytorch/examples/ tree/master/word_language_model. We took the first 10% of WikiText-2 sentences as naturally occurring templates representative of highly formal text. 2. Hacker News and Reddit Subreddit data collected from news and discussion forums related to topics ranging from politics to electronics was downloaded from https:// github.com/minimaxir/textgenrnn/. B Experimental Details B.1 BERT All three variants of BERT (BERT, BERT post SST, BERT post CoLA) are uncased base model with hyper-parameters described in Table 7. For all three models, the second output “pooled output” of BERT is treated as the sentence embedding. The variant BERT is the pretrained model with weights downloaded from https: //s3.amazonaws.com/models.huggingface. co/bert/bert-base-uncased.tar.gz. The variant BERT post SST is BERT after being finetuned on the Stanford Sentiment Treebank(SST-2) task, a binary single-sentence classification task (Socher et al., 2013). During fine-tuning, we first normalize the sentence embedding and then feed it into a linear layer for classification. The variant BERT post CoLA is BERT fine-tuned on the Corpus of Linguistic Acceptability (CoLA) task, a binary single-sentence classification task. Normalization and classification are done exactly the same as BERT post SST. All BERT models are fine-tuned for 3 epochs which is the default hyper-parameter in the huggingface transformers repository. Debiasing for BERT models that are fine-tuned is done just before the classification layer. Binary Gender man, woman boy, girl he, she father, mother son, daughter guy, gal male, female his, her himself, herself John, Mary Multiclass Religion jewish, christian, muslim jews, christians, muslims torah, bible, quran synagogue, church, mosque rabbi, priest, imam judaism, christianity, islam Table 6: Word pairs to estimate the binary gender bias subspace (left) and the 3-class religion bias subspace (right). Hyper-parameter Value attention probs dropout prob hidden act hidden dropout prob hidden size initializer range intermediate size max position embeddings num attention heads num hidden layers type vocab size vocab size 0.1 gelu 0.1 768 0.02 3072 512 12 12 2 30522 Table 7: Configuration of BERT models, including BERT, BERT→SST, and BERT→CoLA. B.2 ELMo We use the ElmoEmbedder from allennlp.commands.elmo. We perform summation over the aggregated layer outputs. The resulting sentence representation is a time sequence vector with data dimension 1024. When computing gender direction, we perform mean pooling over the time dimension to obtain a 1024-dimensional vector for each definitional sentence. In debiasing, we remove the gender direction from each time step of each sentence representation. We then feed the debiased representation into an LSTM with hidden size 512. Finally, the last hidden state of the LSTM goes through a fully connected layer to make predictions. C Additional Results We also studied the effect of templates on BERT fine-tuned on CoLA as well. Steps taken are exactly the same as described in Effect of Templates: Same Domain, More Quantity and Effect of Templates: Same Quantity, More Domains. Results are plotted in Figure 4. It shows 5515 Figure 4: Evaluation of Bias Removal on BERT fine-tuned on CoLA with varying percentage of data from a single domain (left) and varying number of domains with fixed total size (right). that debiasing performance improves and stabilizes with the number of sentence templates as well as the number of domains.
2020
488
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5516–5522 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5516 A Re-evaluation of Knowledge Graph Completion Methods Zhiqing Sun1∗ Shikhar Vashishth1,2∗ Soumya Sanyal2∗ Partha Talukdar2 Yiming Yang1 1 Carnegie Mellon University, 2 Indian Institute of Science {zhiqings,svashish,yiming}@cs.cmu.edu {soumyasanyal,ppt}@iisc.ac.in Abstract Knowledge Graph Completion (KGC) aims at automatically predicting missing links for large-scale knowledge graphs. A vast number of state-of-the-art KGC techniques have got published at top conferences in several research fields, including data mining, machine learning, and natural language processing. However, we notice that several recent papers report very high performance, which largely outperforms previous state-of-the-art methods. In this paper, we find that this can be attributed to the inappropriate evaluation protocol used by them and propose a simple evaluation protocol to address this problem. The proposed protocol is robust to handle bias in the model, which can substantially affect the final results. We conduct extensive experiments and report performance of several existing methods using our protocol. The reproducible code has been made publicly available. 1 Introduction Real-world knowledge bases are usually expressed as multi-relational graphs, which are collections of factual triplets, where each triplet (h, r, t) represents a relation r between a head entity h and a tail entity t. However, real-word knowledge bases are usually incomplete (Dong et al., 2014), which motivates the research of automatically predicting missing links. A popular approach for Knowledge Graph Completion (KGC) is to embed entities and relations into continuous vector or matrix space, and use a well-designed score function f(h, r, t) to measure the plausibility of the triplet (h, r, t). Most of the previous methods use translation distance based (Bordes et al., 2013; Wang et al., 2014; Xiao et al., 2016; Sun et al., 2019) and semantic matching based (Nickel and Tresp, 2013; Yang et al., 2014; Nickel et al., 2016; Trouillon et al., 2016; ∗Equal contribution. Liu et al., 2017) scoring functions which are easy to analyze. However, recently, a vast number of neural network-based methods have been proposed. They have complex score functions which utilize blackbox neural networks including Convolutional Neural Networks (CNNs) (Dettmers et al., 2018; Nguyen et al., 2018), Recurrent Neural Networks (RNNs) (Lin et al., 2015; Wang et al., 2018), Graph Neural Networks (GNNs) (Schlichtkrull et al., 2017; Shang et al., 2019), and Capsule Networks (Nguyen et al., 2019). While some of them report state-of-the-art performance on several benchmark datasets that are competitive to previous embedding-based approaches, a considerable portion of recent neural network-based papers report very high performance gains which are not consistent across different datasets. Moreover, most of these unusual behaviors are not at all analyzed. Such a pattern has become prominent and is misleading the whole community. In this paper, we investigate this problem and find that this is attributed to the inappropriate evaluation protocol used by these approaches. We demonstrate that their evaluation protocol gives a perfect score to a model that always outputs a constant irrespective of the input. This has lead to artificial inflation of performance of several models. For this, we find a simple evaluation protocol that creates a fair comparison environment for all types of score functions. We conduct extensive experiments to re-examine some recent methods and fairly compare them with existing approaches. The source code of the paper has been publicly available at http://github.com/svjan5/kg-reeval. 2 Background Knowledge Graph Completion Given a Knowledge Graph G = (E, R, T ), where E and R de5517 FB15k-237 WN18RR ConvE .325 .430 RotatE .338 (+4.0%) .476 (+10.6%) TuckER .358 (+10.2%) .470 (+9.3%) ConvKB .396 (+21.8%) .248 (-42.3%) CapsE .523 (+60.9%) .415 (-3.4%) KBAT .518 (+59.4%) .440 (+2.3%) TransGate .404 (+24.3%) .409 (-4.9%) Table 1: Changes in MRR for different methods on FB15k-237 and WN18RR datasets with respect to ConvE show inconsistent improvements. note the set of entities and relations and T = {(h, r, t) | h, t ∈E, r ∈R} is the set of triplets (facts), the task of Knowledge Graph Completion (KGC) involves inferring missing facts based on the known facts. Most the existing methods define an embedding for each entity and relation in G, i.e., eh, er ∀h ∈E, r ∈R and a score function f(h, r, t) : E × R × E →R which assigns a high score for valid triplets than the invalid ones. KGC Evaluation During KGC evaluation, for predicting t in a given triplet (h, r, t), a KGC model scores all the triplets in the set T ′ = {(h, r, t′) | t′ ∈E}. Based on the score, the model first sorts all the triplets and subsequently finds the rank of the valid triplet (h, r, t) in the list. In a more relaxed setting called filtered setting, all the known correct triplets (from train, valid, and test triplets) are removed from T ′ except the one being evaluated (Bordes et al., 2013). The triplets in T ′ −{t} are called negative samples. Related Work Prior to our work, Kadlec et al. (2017) cast doubt on the claim that performance improvement of several models is due to architectural changes as opposed to hyperparameter tuning or different training objective. In our work, we raise similar concerns but through a different angle by highlighting issues with the evaluation procedure used by several recent methods. Chandrahas et al. (2018) analyze the geometry of KG embeddings and its correlation with task performance while Nayyeri et al. (2019) examine the effect of different loss functions on performance. However, their analysis is restricted to non-neural approaches. 0 2000 4000 6000 8000 10000 Knowledge Graph Entities 0.2 0.4 0.6 0.8 1.0 Triplet Score Figure 1: Sorted score distribution of ConvKB for an example valid triplet and its negative samples. The score is normalized into [0, 1] (lower the better). Dotted line indicate the score for the valid triplet. We find that in this example, around 58.5% negative sampled triplets obtain the exact same score as the valid triplet. 3 Observations In this section, we first describe our observations and concerns and then investigate the reason behind. 3.1 Inconsistent Improvements over Benchmark Datasets Several recently proposed methods report high performance gains on a particular dataset. However, their performance on another dataset is not consistently improved. In Table 1, we report change in MRR score on FB15k-237 (Toutanova and Chen, 2015) and WN18RR (Dettmers et al., 2018) datasets with respect to ConvE (Dettmers et al., 2018) for different methods including RotatE (Sun et al., 2019), TuckER (Balaževi´c et al., 2019), ConvKB (Nguyen et al., 2018), CapsE (Nguyen et al., 2019), KBAT (Nathani et al., 2019), and TransGate (Yuan et al., 2019). Overall, we find that for a few recent NN based methods, there are inconsistent gains on these two datasets. For instance, in ConvKB, there is a 21.8% improvement over ConvE on FB15k-237, but a degradation of 42.3% on WN18RR, which is surprising given the method is claimed to be better than ConvE. On the other hand, methods like RotatE and TuckER give consistent improvement across both benchmark datasets. 3.2 Observations on Score Functions Score distribution When evaluating KGC methods, for a given triplet (h, r, t), the ranking of t given h and r is computed by scoring all the triplets of form {(h, r, t′) | t′ ∈E}, where E is the set of 5518 Frequency 0 1250 2500 3750 5000 Number of Triplets with Same Score 1-4 5-16 17-64 65-256 257-1024 1025-4096 4097-16384 ConvKB CapsE ConvE Figure 2: Plot shows the frequency of the number of negative triplets with the same assigned score as the valid triplet during evaluation on FB15k-237 dataset. The results show that for methods like ConvKB and CapsE, a large number of negative triplets get the same score as the valid triplets whereas for methods like ConvE such occurrences are rare. all entities. On investing a few recent NN based approaches, we find that they have unusual score distribution, where some negatively sampled triplets have the same score as the valid triplet. An instance of FB15k-237 dataset is presented in Figure 1. Here, out of 14,541 negatively sampled triplets, 8,520 have the exact same score as the valid triplet. Statistics on the whole dataset In Figure 2, we report the total number of triplets with the exact same score over the entire dataset for ConvKB (Nguyen et al., 2018) and CapsE (Nguyen et al., 2019) and compare them with ConvE (Dettmers et al., 2018) which does not suffer from this issue. We find that both ConvKB and CapsE have multiple occurrences of such unusual score distribution. On average, ConvKB and CapsE have 125 and 197 entities with exactly same score as the valid triplet over the entire evaluation dataset of FB15k-237, whereas ConvE has around 0.002, which is almost negligible. In Section 4, we demonstrate how this leads to massive performance gain for methods like ConvKB and CapsE. Root of the problem Further, we investigate the cause behind such unusual score distribution. In Figure 3, we plot the ratio of neurons becoming zero after ReLU activation for the valid triplets vs. their normalized frequency on FB15k-237 dataset. The results show that in ConvKB and CapsE, a large fraction (87.3% and 92.2% respectively) of the neurons become zeros after applying ReLU 0.0 0.2 0.4 0.6 0.8 1.0 Ratio of Neurons becoming zero 0 5 10 15 20 25 30 Normalized Frequency ConvKB CapsE ConvE Figure 3: Distribution of ratio of neurons becoming zero after ReLU activation in different methods for the valid triplets in FB15k-237 dataset. We find that for ConvKB and CapsE an unusually large fraction of neurons become zero after ReLU activation whereas the does not hold with ConvE. activation. However, with ConvE, this count is substantially less (around 41.1%). Because of the zeroing of nearly all neurons (at least 14.2% for ConvKB and 22.0% for CapsE), the representation of several triplets become very similar during forward pass and thus leading to obtaining the exact same score. 4 Evaluation Protocols for KGC In this section, we present different evaluation protocols that can be adopted in knowledge graph completion. We further show that inappropriate evaluation protocol is the key reason behind the unusual behavior of some recent NN-based methods. How to deal with the same scores? An essential aspect of the evaluation method is to decide how to break ties for triplets with the same score. More concretely, while scoring the candidate set T ′, if there are multiple triplets with the same score from the model, one should decide which triplet to pick. Assuming that the triplets are sorted in a stable manner, we design a general evaluation scheme for KGC, which consists of the following three different protocols: • TOP: In this setting, the correct triplet is inserted in the beginning of T ′. • BOTTOM: Here, the correct triplet is inserted at the end of T ′. • RANDOM: In this, the correct triplet is placed randomly in T ′. 5519 Reported RANDOM TOP BOTTOM MRR ↑MR ↓H@10 ↑ MRR ↑ MR ↓ H@10 ↑ MRR ↑MR ↓H@10 ↑MRR ↑MR ↓H@10 ↑ ConvE .325 244 .501 .324 ± .0 285 ± 0 .501 ± .0 .324 285 .501 .324 285 .501 RotatE .338 177 .533 .336 ± .0 178 ± 0 .530 ± .0 .336 178 .530 .336 178 .530 TuckER .358 .544 .353 ± .0 162 ± 0 .536 ± .0 .353 162 .536 .353 162 .536 ConvKB .396 257 .517 .243 ± .0 309 ± 2 .421 ± .0 .407 246 .527 .130 373 .383 (+.164) (-63) (+.106) (-.113) (+64) (-.038) CapsE .523 303 .593 .150 ± .0 403 ± 2 .356 ± .0 .511 305 .586 .134 502 .297 (+.361) (-99) (+.229) (-.016) (+99) (-.059) KBAT .518† 210† .626† .157 ± .0 270 ± 0 .331 ± .0 .157 270 .331 .157 270 .331 Table 2: Effect of different evaluation protocols on recent KG embedding methods on FB15k-237 dataset. For TOP and BOTTOM, we report changes in performance with respect to RANDOM protocol. Please refer to Section 5.4 for details. †: KBAT has test data leakage in their original implementation, which is fixed in our experiments. Discussion Based on the definition of the three evaluation protocols, it is clear that TOP evaluation protocol does not evaluate the model rigorously. It gives the models that have a bias to provide the same score for different triplets, an inappropriate advantage. On the other hand, BOTTOM evaluation protocol can be unfair to the model during inference time because it penalizes the model for giving the same score to multiple triplets, i.e., if many triplets have the same score as the correct triple, the correct triplet gets the least rank possible. As a result, RANDOM is the best evaluation technique which is both rigorous and fair to the model. It is in line with the situation we meet in the real world: given several same scored candidates, the only option is to select one of them randomly. Hence, we propose to use RANDOM evaluation scheme for all model performance comparisons. 5 Experiments In this section, we conduct extensive experiments using our proposed evaluation protocols and make a fair comparison for several existing methods. 5.1 Datasets We evaluate the proposed protocols on FB15k-237 (Toutanova and Chen, 2015) dataset1, which is a subset of FB15k (Bordes et al., 2013) with inverse relations deleted to prevent direct inference of test triples from training. 5.2 Methods Analyzed In our experiments, we categorize existing KGC methods into the following two categories: 1We also report our results on WN18RR (Dettmers et al., 2018) dataset in the appendix. • Non-Affected: This includes methods which give consistent performance under different evaluation protocols. For experiments in this paper, we consider three such methods – ConvE, RotatE, and TuckER. • Affected: This category consists of recently proposed neural-network based methods whose performance is affected by different evaluation protocols. ConvKB, CapsE, TransGate2, and KBAT are methods in this category. 5.3 Evaluation Metrics For all the methods, we use the code and the hyperparameters provided by the authors in their respective papers. Model performance is evaluated by Mean Reciprocal Rank (MRR), Mean Rank (MR) and Hits@10 (H@10) on the filtered setting (Bordes et al., 2013). 5.4 Evaluation Results To analyze the effect of different evaluation protocols described in Section 4, we study the performance variation of the models listed in Section 5.2. We study the effect of using TOP and BOTTOM protocols and compare them to RANDOM protocol. In their original paper, ConvE, RotatE, and TuckER use a strategy similar to the proposed RANDOM protocol, while ConvKB, CapsE, and KBAT use TOP protocol. We also study the random error in RANDOM protocol with multiple runs, where we report the average and standard deviation on 5 runs with different random seeds. The results are presented in Tables 2. 2Since we cannot find any open-source implementation of TransGate, we leave the re-evaluation of TransGate as our future work. 5520 We observe that for Non-Affected methods like ConvE, RotatE, and TuckER, the performance remains consistent across different evaluation protocols. However, with Affected methods, there is a considerable variation in performance. Specifically, we can observe that these models perform best when evaluated using TOP and worst when evaluated using BOTTOM3. Finally, we find that the proposed RANDOM protocol is very robust to different random seeds. Although the theoretic upper and lower bounds of a RANDOM score are TOP and BOTTOM scores respectively, when we evaluate knowledge graph completion for real-world largescale knowledge graphs, the randomness doesn’t affect the evaluation results much. 6 Conclusion In this paper, we performed an extensive reexamination study of recent neural network based KGC techniques. We find that many such models have issues with their score functions. Combined with inappropriate evaluation protocol, such methods reported inflated performance. Based on our observations, we propose RANDOM evaluation protocol that can clearly distinguish between these affected methods from others. We also strongly encourage the research community to follow the RANDOM evaluation protocol for all KGC evaluation purposes. Acknowledgements We thank the reviewers for their helpful comments. This work is supported in part by the National Science Foundation (NSF) under grant IIS-1546329 and Google PhD Fellowship. References Ivana Balaževi´c, Carl Allen, and Timothy M Hospedales. 2019. Tucker: Tensor factorization for knowledge graph completion. In Empirical Methods in Natural Language Processing. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing 3KBAT incorporates ConvKB in the last layer of its model architecture, which should be affected by different evaluation protocols. But we find another bug on the leakage of test triples during negative sampling in the reported model, which results in more significant performance degradation. Systems 26, pages 2787–2795. Curran Associates, Inc. Chandrahas, Aditya Sharma, and Partha Talukdar. 2018. Towards understanding the geometry of knowledge graph embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 122–131, Melbourne, Australia. Association for Computational Linguistics. Tim Dettmers, Minervini Pasquale, Stenetorp Pontus, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the 32th AAAI Conference on Artificial Intelligence, pages 1811–1818. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 601– 610, New York, NY, USA. ACM. Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 69–74, Vancouver, Canada. Association for Computational Linguistics. Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015. Modeling relation paths for representation learning of knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 705–714, Lisbon, Portugal. Association for Computational Linguistics. Hanxiao Liu, Yuexin Wu, and Yiming Yang. 2017. Analogical inference for multi-relational embeddings. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2168–2178, International Convention Centre, Sydney, Australia. PMLR. Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Mojtaba Nayyeri, Chengjin Xu, Yadollah Yaghoobzadeh, Hamed Shariat Yazdi, and Jens Lehmann. 2019. Toward Understanding The Effect Of Loss function On Then Performance Of Knowledge Graph Embedding. arXiv e-prints, page arXiv:1909.00519. Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A novel embedding model for knowledge base completion based 5521 on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 327–333. Association for Computational Linguistics. Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2019. A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2180–2189. Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 1955–1961. AAAI Press. Maximilian Nickel and Volker Tresp. 2013. Tensor factorization for multi-relational learning. In Machine Learning and Knowledge Discovery in Databases, pages 617–621, Berlin, Heidelberg. Springer Berlin Heidelberg. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolutional networks. arXiv preprint arXiv:1703.06103. Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2019. End-to-end structure-aware convolutional networks for knowledge base completion. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pages 2071–2080. JMLR.org. Haoyu Wang, Vivek Kulkarni, and William Yang Wang. 2018. DOLORES: deep contextualized knowledge graph embeddings. CoRR, abs/1811.00147. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the TwentyEighth AAAI Conference on Artificial Intelligence, AAAI’14, pages 1112–1119. AAAI Press. Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. Transg : A generative model for knowledge graph embedding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2316–2325. Association for Computational Linguistics. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. CoRR, abs/1412.6575. Jun Wen Yuan, Neng Gao, and Ji Xiang. 2019. Transgate: Knowledge graph embedding with shared gate structure. In AAAI. Appendix A Results on WN18RR dataset Besides FB15k-237, we also evaluate the proposed protocols on WN18RR (Dettmers et al., 2018) dataset, which is a subset of WN18 (Bordes et al., 2013) containing lexical relations between words. Similar to FB15k-237, inverse relations are removed in WN18RR. The results on WN18RR are shown in Table 3. From these results, we can draw similar conclusions as in Section 5. We also show the total number of triplets with the exact same score over the entire WN18RR dataset for ConvKB, CapsE and ConvE in Figure 4. 5522 Reported RANDOM TOP BOTTOM MRR ↑MR ↓H@10 ↑MRR ↑ MR ↓ H@10 ↑MRR ↑MR ↓H@10 ↑MRR ↑MR ↓H@10 ↑ ConvE .43 4187 .52 .444 ± .0 4950 ± 0 .503 ± .0 .444 4950 .503 .444 4950 .503 RotatE .476 3340 .571 .473 ± .0 3343 ± 0 .571 ± .0 .473 3343 .571 .473 3343 .571 TuckER .470 .526 .461 ± .0 6324 ± 0 .516 ± .0 .461 6324 .516 .461 6324 .516 ConvKB .248 2554 .525 .249 ± .0 3433 ± 42 .524 ± .0 .251 1696 .529 .164 5168 .516 (+.002) (-1737) (+.005) (-.085) (+1735) (-.008) CapsE‡ .415 719 .560 .415 ± .0 718 ± 0 .559 ± .0 .415 718 .559 .323 719 .555 (-.092) (+1) (-.004) KBAT .440† 1940† .581† .412 ± .0 1921 ± 0 .554 ± .0 .412 1921 .554 .412 1921 .554 Table 3: Performance comparison under different evaluation protocols on WN18RR dataset. For TOP and BOTTOM, we report changes in performance with respect to RANDOM protocol. ‡: CapsE uses the pre-trained 100dimensional Glove (Pennington et al., 2014) word embeddings for initialization on WN18RR dataset, which makes the comparison on WN18RR still unfair. †: KBAT has test data leakage in their original implementation, which is fixed in our experiments. Frequency 0 150 300 450 600 Number of Triplets with Same Score 1-4 5-16 17-64 65-256 257-1024 1025-4096 4097-16384 16385-65536 ConvKB CapsE ConvE Figure 4: Plot shows the frequency of the number of negative triplets with the same assigned score as the valid triplet during evaluation on WN18RR dataset. The results show that Unlike FB15k-237, in this dataset, only ConvKB has a large number of negative triplets get the same score as the valid triplets.
2020
489
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 515–526 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 515 Integrating Semantic and Structural Information with Graph Convolutional Network for Controversy Detection Lei Zhong1,2, Juan Cao1,2∗, Qiang Sheng1,2, Junbo Guo1, Ziang Wang1,2 1Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS) & Center for Advanced Computing Research, Institute of Computing Technology, CAS, Beijing, China 2University of Chinese Academy of Sciences, Beijing, China {zhonglei18s, caojuan, shengqiang18z, guojunbo}@ict.ac.cn [email protected] Abstract Identifying controversial posts on social media is a fundamental task for mining public sentiment, assessing the influence of events, and alleviating the polarized views. However, existing methods fail to 1) effectively incorporate the semantic information from contentrelated posts; 2) preserve the structural information for reply relationship modeling; 3) properly handle posts from topics dissimilar to those in the training set. To overcome the first two limitations, we propose TopicPost-Comment Graph Convolutional Network (TPC-GCN), which integrates the information from the graph structure and content of topics, posts, and comments for post-level controversy detection. As to the third limitation, we extend our model to Disentangled TPC-GCN (DTPC-GCN), to disentangle topic-related and topic-unrelated features and then fuse dynamically. Extensive experiments on two realworld datasets demonstrate that our models outperform existing methods. Analysis of the results and cases proves that our models can integrate both semantic and structural information with significant generalizability. 1 Introduction Social media such as Reddit1 and Chinese Weibo2 has been the major channel through which people can easily propagate their views. In the open and free circumstance, the views expressed by the posts often spark fierce discussion and raise controversy among the engaging users. These controversial posts provide a lens of public sentiment, which bring about several tasks such as news topic selection, influence assessment (Hessel and Lee, 2019), and alleviation of polarized views (Garimella et al., 2017). As a basis of all mentioned tasks, automatically identifying the controversial posts has ∗Corresponding author. 1https://www.reddit.com/ 2https://weibo.com/ They two obviously use different techniques. Xiaomi’s Mimoji is automatically generated while Apple’s Memoji is hand-made. Thus, Xiaomi obviously do not copy. Target Post P Topic: A microblogger implies that Xiaomi’s Mimoji copies Apple’s Memoji. (Support) C1:A rational fan appeared finally. Support you. (Support) C2: What you said is persuasive. (Refute) C3: The point is that their lights, skins, functions, and even names are similar. No reason to say that Xiaomi don’t copy. ↳(Refute) C3-1: No, the point is that the manuscript is original. Comments Attached to P (Refute) RP1: I’m against Xiaomi this time. The component library is too similar. Whether the faces are hand-made or not is not important. Can’t this fact be the evidence? (Refute) RP2: I think Mimoji is similar to Memoji. Even if the process of faces is different, their ideas are too close. Related Posts Figure 1: A controversial post P about whether Xiaomi’s Mimoji copies Apple’s Memoji. These Supports and Refutations are to either their respective parent comments or P. attracted wide attention (Addawood et al., 2017; Coletto et al., 2017; Rethmeier et al., 2018; Hessel and Lee, 2019). This work focuses on post-level controversy detection on social media, i.e., to classify if a post is controversial or non-controversial. According to (Coletto et al., 2017), a controversial post has debatable content and expresses an idea or an opinion which generates an argument in the responses, representing opposing opinions in favor or in disagreement with the post. In practice, the responses of a target post (the post to be judged) generally come from two sources, i.e., the comments attached to the post and other content-related posts. Figure 1 shows an example where the target post P expresses that Xiaomi’s Mimoji do not copy Apple’s Memoji. We can see that: 1) The comments show more supports and fewer refutes to P, which raises a small controversy. However, the related posts show extra refutations and enhance the controversy of P. 2) C3−1 expresses refutation literally, but it actually supports P because in the comment tree, it 516 refutes C3, a refuting comment to P. 3) There exist two kinds of semantic clues for detection, topicrelated and topic-unrelated clues. For example, support and against is unrelated to this topic, while copy and similar are topic-related. Topic-related clues can help identify posts in a similar topic, but how effective they are for those in dissimilar topics depends on the specific situation. Therefore, to comprehensively evaluate the controversy of a post, the information from both the comments and related posts should be integrated properly on semantic and structure level. Existing methods detecting controversy on social media have exploited the semantic feature of the target post and its comments as well as structural feature. However, three drawbacks limit their performance: 1) These methods ignore the role of the related posts in the same topic in providing extra supports or refutations on the target post. Only exploiting the information from comments is insufficient. 2) These methods use statistical structure-based features which cannot model the reply-structure relationships (like P-C1 and C3C3−1 in Figure 1). The stances of some comments may be misunderstood by the model (like C3−1). 3) These methods tend to capture topic-related features that are not shared among different topics with directly using information of content (Wang et al., 2018). The topic-related features can be helpful when the testing post is from a topic similar to those in the training set but would hurt the detection otherwise. Recently, graph convolutional networks have achieved great success in many areas (Marcheggiani et al., 2018; Ying et al., 2018; Yao et al., 2019; Li and Goldwasser, 2019) due to its ability to encode both local graph structure and features of node (Kipf and Welling, 2017). To overcome the first two drawbacks of existing works, we propose a Topic-Post-Comment Graph Convolutional Network (TPC-GCN) (see Figure 2a) that integrates the information from the graph structure and content of topics, posts, and comments for post-level controversy detection. First, we create a TPC graph to describe the relationship among topics, posts, and comments. To preserve the replystructure information, we connect each comment node with the post/comment node it replies to. To include the information from related posts, we connect each post node with its topic node. Then, a GCN model is applied to learn node representation with content and reply-structure information fused. Finally, the updated vectors of a post and its comments are fused to predict the controversy. TPC-GCN is mainly for detection in intra-topic mode, i.e., topics of testing posts appear in the training set, for it cannot overcome the third drawback. We thus extend a two-branch version of TPC-GCN named Disentangled TPC-GCN (DTPC-GCN) (see Figure 2b) for inter-topic mode (no testing posts are from the topics in the training set). We use a TPC-GCN in each branch, but add an auxiliary task, topic classification. The goals of the two branches for the auxiliary task are opposite to disentangle the topic-related and topic-unrelated features. The disentangled features can be dynamically fused according to the content of test samples with attention mechanism for final decision. Extensive experiments demonstrate that our models outperform existing methods and can exploit features dynamically and effectively. The main contributions of this paper are as follows: 1. We propose two novel GCN-based models, TPC-GCN and DTPC-GCN, for post-level controversy detection. The models can integrate the information from the structure and content of topics, posts, and comments, especially the information from the related posts and reply tree. Specially, DTPC-GCN can further disentangle the topic-related features and topic-unrelated features for inter-topic detection. 2. We build a Chinese dataset for controversy detection, consisting of 5,676 posts collected from Chinese Weibo, each of which are manually labeled as controversial or noncontroversial. To the best of our knowledge, this is the first released Chinese dataset for controversy detection. 3. Experiments on two real-world datasets demonstrate that the proposed models can effectively identify the controversial posts and outperform existing methods in terms of performance and generalization. 2 Related Work Controversy detection on the Internet have been studied on both web pages and social media. Existing works detecting controversy on web pages mostly aims at identifying controversial articles in 517 Figure 2: Architecture of (a) Topic-Post-Comment Graph Convolutional Network (TPC-GCN). (b) Disentangled TPC-GCN (DTPC-GCN). The upper post in the TPC graph is taken as an example to illustrate the methods. H(l) B is the representation matrix, containing all node vectors in the l-th layer of Branch B. X is the initial representation. Lc and Lt refer to controversy classification loss and topic classification loss respectively. FC means fully connected layer. Wikipedia. Early methods are mainly based on statistical features, such as revision times (Kittur et al., 2007), edit history (Vuong et al., 2008; Yasseri et al., 2012; Rad and Barbosa, 2012) and dispute tag (Dori-Hacohen and Allan, 2015). Others incorporate the collaboration-network-based features, sentiment-based features (Vuong et al., 2008; Wang and Cardie, 2014), and semantic features (Linmans et al., 2018). As to the common web pages, existing works exploit the controversy on Wikipedia (Awadallah et al., 2012; Dori-Hacohen and Allan, 2013, 2015; Jang et al., 2016) and user comments (Choi et al., 2010; Tsytsarau et al., 2010) for detection. Unlike the web pages, social media contains more diverse topics and more fierce discussion among users, which makes controversy detection on social media more challenging. Early studies assume that a topic has its intrinsic controversy, and focus on topic-level controversy detection. Popescu and Pennacchiotti (2010) detect controversial snapshots (consisting of many tweets referring to a topic) based on Twitter-based and externalknowledge features. Garimella et al. (2018) build graphs based on a Twitter topic, such as retweeting graph and following graph, and then apply graph partitioning to measure the extent of controversy. However, topic-level detection is rough, because there exists non-controversial posts in a controversial topic and vice versa. Recent works focus on post-level controversy detection by leveraging language features, such as emotional and topicrelated phrases (Rethmeier et al., 2018), emphatic features, Twitter-specific features (Addawood et al., 2017). Other graph-based methods exploit the features from the following graph and comment tree (Coletto et al., 2017; Hessel and Lee, 2019). The limitations of current post-level works are that they do not effectively integrate the information from content and reply-structure, and ignore the role of posts in the same topic. Moreover, the difference between intra-topic and inter-topic mode is not realized. Only Hessel and Lee (2019) deal with topic transfer, but they train on each topic and test on others to explore the transferability, which is not suitable in practice. 3 Methodology In this section, we introduce the Topic-PostComment Graph Convolutional Network (TPCGCN) and its extension Disentangled TPC-GCN (DTPC-GCN), as shown in Figure 2. We first introduce the TPC graph construction and then detail the two models. 518 3.1 TPC Graph Construction To model the paths of message passing among topics, posts, and comments, we first construct a topicpost-comment graph G = (V, E) for target posts, where V and E denote the set of nodes and edges respectively. First, to preserve the post-comment and inter-comment relationship, we incorporate the comment tree, each comment node of which is connected with the post/comment node it replies to. Then, to facilitate the posts capturing information from related posts in the same topic that proved helpful in Section 1, we connect each post with its topic. The topic node can be regarded as a hub node to integrate and interchange the information. Another way is to connect post nodes in a topic pairwise, but the complexity will be high. Note that the concept topic here is not necessarily provided by the platform, such as the subreddit on Reddit and the hashtag (#) on Weibo. When topics are not provided, algorithms for text-based clustering can be used to construct a topic with related posts (Nematzadeh et al., 2019). In G, each node may represent a topic, a post, or a comment and each edge may represent topic-post, post-comment, or comment-comment connection. We initially represent each node v with an embedding vector x of their text by using the pre-trained language model. 3.2 TPC-GCN In this subsection, we detail the TPC-GCN, by first introducing the generic GCN and then our TPCGCN model. The GCN has been proved an efficient neural network that operates on a graph to encode both local graph structure and features of node (Kipf and Welling, 2017). The characteristic of GCN is consistent to our goal that integrates the semantic and structural information. In a GCN, each node is updated according to the aggregated information of its neighbor nodes and itself, so the learned representation can include information from both content and structure. For a node vi ∈V , the update rule in the message passing process is as follows: h(l+1) i = σ  X j∈Ni g  h(l) i , h(l) j  + b(l)   (1) where h(l) i is the hidden state of node vi in the lth layer of a GCN and Ni is the neighbor set of node vi with itself included. Incoming messages from Ni are transformed by the function g and then pass through the activation function σ (such as ReLU) to output new representation for each node. b(l) is the bias term. Following Kipf and Welling (2017), we use a linear transform function g(h(l) i , h(l) j ) = W (l)hj, where W (l) is a learnable weight matrix. Based on node-wise Equation 1, layer-wise propagation rule can be written as the following form: H(l+1) = σ  ˆAH(l)W (l) + B(l) (2) where H(l) contains all node vectors in the l-th layer and ˆA is the normalized adjacency matrix with inserted self-loops. W (l) is the weight matrix and B(l) is the broadcast bias term. In TPC-GCN (see Figure 2a), we input the matrix consisting of N d-dimensional embedding vectors H(0) = X ∈RN×d to a two-layer GCN to obtain the representation after message passing H(2). Next, the vector of each post node i and its attached comment nodes are averaged to be the fusion vector fi of the post. Finally, we apply a softmax function to the fusion vectors for the controversy probability of each post. The cross entropy is the loss function: Lc =−1 N X i ((1−yc i)log(1−pc i)+yc i log(pc i)) (3) where yc i is a label with 1 representing controversial and 0 representing the non-controversial, pc i is the predicted probability that the i-th post is controversial, and N is the size of training set. The limit of TPC-GCN is that the representation tends to be topic-related as Section 1 said. The limited generalizability of TPC-GCN makes it more suitable for intra-topic detection, instead of inter-topic detection. 3.3 Disentangled TPC-GCN Intuitively, topic-unrelated features are more effective when testing on the posts from unknown topics (inter-topic detection). However, topic-related features can help when unknown topics are similar to the topics in the training set. Therefore, both of topic-related and topic-unrelated features are useful, but their weights vary from sample to sample. This indicates that the two kinds of features should be disentangled and then dynamically fused. Based on the above analysis, we propose the extension of TPC-GCN, Disentangled TPC-GCN (see Figure 519 2b), for inter-topic detection. DTPC-GCN consists of two parts: the two-branch multi-task architecture for disentanglement, and attention mechanism for dynamic fusion. Two-branch Multi-task Architecture To obtain the topic-related and topic-unrelated features at the same time, we use two branches of TPC-GCN with multi-task architecture, denoted as R for topicrelated branch and U for topic-unrelated one. In both R and U, an auxiliary task, topic classification, is introduced to guide the learning of representation oriented by the topic. For each branch, we first train the first layer of GCN with the topic classification task. The input of the topic classifier is fusion vectors from H(1) which are obtained with the same process of fi in TPC-GCN. The cross entropy is used as the loss function: Lt = −1 N X k X i yt ik log(pt ik) (4) where yt ik is a label with 1 representing the groundtruth topic and 0 representing the incorrect topic class, pt ik is the predicted probability of the i-th post belonging to the k-th topic, and N is the size of training set. The difference between R and U is that we minimize Lt in Branch R to obtain topicdistinctive features, but maximize Lt in Branch U to obtain topic-confusing features. Then we include the second layer of GCN and train on two tasks, i.e., topic and controversy classification, for each branch individually. Branch U and R are expected to evaluate controversy effectively with different features in terms of the relationship with the topics. Attention Mechanism After the individual training, Branch U and R are expected to capture the topic-related and topic-unrelated features respectively. We further fuse the features from the two branches dynamically. Specifically, we freeze the parameters of U and R, and further train the dynamic fusion component. For the weighted combination of fusion vectors fU and fR from the two branches, we use the attention mechanism as follows: F(fb) = vT tanh(WFfb + bF), b ∈{U, R} (5) αb = exp(F(fb)) P b∈{U,R} exp(F(fb)) (6) u = X b∈{U,R} αbfb (7) Number Weibo Reddit Topics(Hashtags/Subreddits) 49 6 Controversial Posts 1,992 7,515 Non-controversial Posts 3,684 7,518 All Posts 5,676 15,033 Comments of Controversial Posts 35,632 578,879 Comments of Non-Controversial Posts 34,565 1,461,697 All Comments 70,197 2,040,576 Table 1: Statistics of two datasets. where WF is the weight matrix and bF is the bias term. vT is a transposed weight vector and F(·) outputs the score of the input vector. The scores of features from Branch U and R are normalized via a softmax function as the branch weight. The weighted sum of the two fusion vectors u is finally used for controversy classification. The loss function is the same as Equation 3. 4 Experiment In this section, we conduct experiments to compare our proposed models and other baseline models. Specifically, we mainly answer the following evaluation questions: EQ1: Are TPC-GCN and DTPC-GCN able to improve the performance of controversy detection? EQ2: How effective are different information in TPC-GCN, including the content of topics, posts, and comments as well as the topic-post-comment structure? EQ3: Can DTPC-GCN learn disentangled features and dynamically fuse them for controversy detection? 4.1 Dataset We perform our experiments on two real-world datasets in different languages. Table 1 shows the statistics of the two datasets. The details are as follows: Reddit Dataset The Reddit dataset released by Hessel and Lee (2019) and Jason Baumgartner of pushshift.io is the only accessible English dataset for controversy detection of social media posts. This dataset contains six subreddits (which can be regarded as over-arching topics): AskMen, AskWomen, Fitness, LifeProTips, personalfinance, and relationships. Each post belongs to a subreddit and the number of attached comments is ensured to be over 30. The tree structure of the comments is also maintained. We use the comment data in the first hour after a post is published. 520 Weibo Dataset We built a Chinese dataset for controversy detection on Weibo 3 in this work. We first manually selected 49 widely discussed, multidomain topics from July 2017 to August 2019 (see Appendix A). Then, we crawled the posts on those topics and preserved those with at least two comments. Here we rebuilt the comment tree according to the comment time and usernames due to the lack of officially-provided structure. Finally, annotators were asked to read and then annotate the post based on both of the post content and the user stances in the comments/replies. Each post was labeled by two annotators(Cohen’s Kappa coefficient = 0.71). When the disagreement occurred between the annotators, the authors discussed and determined the labels. In total, this dataset contains 1,992 controversial posts and 3,684 non-controversial posts, which is in line with the distribution imbalance in the real-world scenario. As far as we know, this is the first released dataset for controversy detection on Chinese social media. We use at most 15 comments of each post due to the computation limit. In the intra-topic experiment: For the Weibo dataset, we randomly divided with a ratio of 4:1:1 in each topic and merged them respectively across all topics. For the Reddit dataset, we apply the data partition provided by the authors. The ratio is 3:1:1. In the inter-topic experiments: For the Weibo and Reddit dataset, we still divided with a ratio of 4:1:1, but on the topic level. 4.2 Implementation Details In the (D)TPC-GCN model, each node is initialized with its textual content using the pre-trained BERT4 (BERT-Base Chinese for Weibo and BERTBase Uncased for Reddit) and the padding size for each is 45. We only fine-tune the last layer, namely layer 11 of BERT for simplicity and then apply a dense layer with a ReLU activation function to reduce the dimensionality of representation from 768 to 300. In TPC-GCN, the sizes of hidden states of the two GCN layers are 100 and 2, respectively, with ReLU for the first GCN layer. To avoid overfitting, a dropout layer is added between the two layers with a rate of 0.35. We apply a softmax function to the fusion vector for obtaining the controversy probability. In DTPC-GCN, the size of 3http://mcg.ict.ac.cn/ controversy-detection-dataset.html 4https://github.com/google-research/ bert hidden states of the first and second GCN layers in each branch are 32 and 16. The dropout rate between two GCN layers in each branch is set to 0.4. The batch size in our (D)TPC-GCN model is 1 (1 TPC graph), and 128 (posts and attached replies) in our PC-GCN model and baselines. The optimizer is BertAdam5 in all BERT-based models and Adam (Kingma and Ba, 2014) in the other semantic models. The learning rate is 1e-4 and the total epoch is 100. We report the best model according to the performance on the validation set. In those semantic models that are not based on BERT, we use two publicly-available big-scale word embedding files to obtain the model input, sgns.weibo.bigramchar6 for Weibo and glove.42B.300d7 for Reddit. 4.3 Baselines To validate the effectiveness of our methods, we implemented several representative methods including content-based, structure-based and fusion methods as baselines. Content-based Methods We implement mainstream text classification models including TextCNN (Kim, 2014), BiLSTM-Att (bi-directional LSTM with attention) BiLSTM (Graves and Schmidhuber, 2005; Bahdanau et al., 2015), BiGRU-Att (bi-directional GRU with attention) (Cho et al., 2014),BERT (Devlin et al., 2019) (only fine-tune the last layer for simplicity). For a fair comparison, we concatenate the post and its attached comments together as the input, instead of feeding the post only. Structure-based Methods Considering that structure-based features of the post and its comment tree are rare and nonsystematic in previous works, we integrate the plausible features in (Coletto et al., 2017) and (Hessel and Lee, 2019). As the latter paper does, we feed them into a series of classifiers and choose a best model for classification. We name the method SFC. For a post-comment graph, the feature set contains the average depth (average length of root-to-leaf paths), the maximum relative degree (the largest node degree divided by the degree of the root), CRATE features (the logged reply time between the post and comments, or over pairs of comments), 5https://pypi.org/project/ pytorch-pretrained-bert/ 6https://github.com/Embedding/ Chinese-Word-Vectors 7https://nlp.stanford.edu/projects/ glove/ 521 Method Weibo Dataset Reddit Dataset Avg. P Avg. R Avg. F1 Acc. Avg. P Avg. R Avg. F1 Acc. Content-based TextCNN 72.80 68.49 69.08 72.83 56.58 56.33 55.92 56.33 BiLSTM-Att 69.97 70.31 70.10 71.28 62.74 60.66 58.98 60.66 BiGRU-Att 71.35 72.21 71.50 72.21 59.95 59.86 59.77 59.86 BERT 72.17 72.72 72.37 73.35 60.80 60.80 60.80 60.80 Structure-based SFC 68.15 66.27 66.72 70.10 59.47 59.47 59.47 59.47 Fusion (Hessel and Lee, 2019) 72.52 70.82 71.34 73.82 63.03 63.03 63.03 63.03 TPC-GCN 74.65 75.33 74.88 75.72 67.00 66.97 66.95 66.97 Table 2: Performance(%) comparison of the intra-topic experiments. Method Weibo Dataset Reddit Dataset Avg. P Avg. R Avg. F1 Acc. Avg. P Avg. R Avg. F1 Acc. Content-based TextCNN 71.55 72.63 69.63 69.76 54.20 54.18 54.12 54.18 BiLSTM-Att 67.09 68.09 67.10 68.00 60.96 59.76 58.63 59.76 BiGRU-Att 68.04 67.08 67.35 70.18 58.49 58.17 57.76 58.17 BERT 68.77 68.16 68.42 72.22 60.41 59.96 59.53 59.96 Structure-based SFC 63.06 63.69 63.04 64.03 58.87 58.86 58.86 58.86 Fusion (Hessel and Lee, 2019) 69.25 67.15 67.63 70.84 60.77 60.76 60.74 60.76 TPC-GCN 73.84 72.00 71.53 72.11 63.39 63.24 63.14 63.24 DTPC-GCN 75.57 75.31 75.27 75.35 68.76 67.63 67.14 67.63 Table 3: Performance(%) comparison of the inter-topic experiments. and C-TREE features (statistics in a comment tree, such as maximum depth/total comment ratio). Fusion Method The compared fusion method from (Hessel and Lee, 2019) aims to identify the controversial posts with semantic and structure information. They extract text features of topics, posts, and comments by BERT and structural feature including the CRATE and C-TREE features mentioned above. In addition, publish time features are also exploited. 4.4 Performance Comparison To answer EQ1, we compare the performance of proposed (D)TPC-GCN with mentioned baselines on the two datasets. The evaluation metrics include the macro average precision (Avg. P), macro average recall (Avg. R), macro average F1 score (Avg. F1), and accuracy (Acc.). Table 2 and 3 show the performance of all compared methods for intra-topic detection and inter-topic detection respectively. In the intra-topic experiments, we can see that 1) TPC-GCN outperforms all compared methods on the two datasets. This indicates that our model can effectively detect controversy with a significant generalizability on different datasets. 2) The structure-based model, SFC, reports the low scores on the two datasets, indicating that the statistical structural information is insufficient to timely identify the controversy. 3) The fusion models outperform or are comparable to the other baselines, which proves that information fusion of content and structure is necessary to improve the performance. In the inter-topic experiments, we can see that 1) DTPC-GCN outperforms all baselines by 6.4% of F1 score at least, which validates that DTPC-GCN can detect controversy on unseen or dissimilar topics. 2) DTPC-GCN outperforms TPC-GCN by 3.74% on Weibo and 4.00% on Reddit. This indicates that feature disentanglement and dynamic fusion can significantly improve the performance of inter-topic controversy detection. 4.5 Ablation Study To answer EQ2 and part of EQ3, we also evaluate several internal models, i.e., the simplified variations of (D)TPC-GCN by removing some components or masking some representations. By the ablation study, we aim to investigate the impact of content and structural information in TPC-GCN and topic-related and topic-unrelated information in DTPC-GCN. Ablation Study of TPC-GCN We delete certain type of nodes (and the edges connect to them) to investigate their overall impact and mask the content by randomizing the initial representation to investigate the impact of content. Specifically, we investigate on the following simplified models of TPC-GCN: PC-GCN / TP-GCN: discard the topic / comment nodes. (RT)PC-GCN / T(RP)C-GCN / TP(RC)GCN: randomly initialize the representation of topic / post / comment nodes. 522 Method Weibo Dataset Reddit Dataset Avg. P Avg. R Avg. F1 Acc. Avg. P Avg. R Avg. F1 Acc. TPC-GCN 74.65 75.33 74.88 75.72 67.00 66.97 66.95 66.97 PC-GCN 73.49 74.16 73.72 74.59 66.48 65.60 65.14 65.60 TP-GCN 58.72 59.16 58.20 58.68 52.97 52.83 52.28 52.83 (RT)PC-GCN 71.78 71.07 71.35 73.14 65.86 65.80 65.77 65.80 T(RP)C-GCN 72.30 72.65 72.45 73.55 65.25 64.73 64.43 64.73 TP(RC)-GCN 59.66 59.80 59.71 61.36 62.98 62.80 62.67 62.80 Table 4: Ablation study of TPC-GCN in the intra-topic experiments (%). Method Weibo Dataset Reddit Dataset Avg. P Avg. R Avg. F1 Acc. Avg. P Avg. R Avg. F1 Acc. DTPC-GCN 75.57 75.31 75.27 75.35 68.76 67.63 67.14 67.63 U branch only 74.06 74.06 74.05 74.05 63.95 63.94 63.94 63.94 R branch only 74.16 73.33 73.15 73.41 63.41 63.15 62.97 63.15 Table 5: Ablation study of DTPC-GCN in the inter-topic experiments (%). From Table 4, we have the following observations: 1) TPC-GCN outperforms all simplified models, indicating that the necessity of structure and content from all types of nodes. 2) PC-GCN uses no extra information (the information of other posts in the same topic), the performance is still better than the baselines (Table 2 and 4), showing the effectiveness of our methods. 3) The models deleting comment information, i.e., TP-GCN and TP(RC)-GCN, experience a dramatic drop in performance, which shows the comment information is of the most importance. 4) The effect of structural information varies in the different situations. Without the contents, the comment structure can individually work (TP(RC)-GCN > TP-GCN), while for topics, the structure has to collaborate with the contents ((RT)PC-GCN < PC-GCN on the Weibo dataset). Ablation Study of DTPC-GCN We focus on the roles of the U (topic-unrelated) branch and R (topic-related) branch: U branch only: Only U branch is trained to capture topic-unrelated features. R branch only: Only R branch is trained to capture topic-related features. Table 5 shows that both of the two branches can identify controversial posts well, but their performances are worse than the fusion model. Specifically, the U branch performs slightly better than R, indicating the topic-unrelated features are more suitable for inter-topic detection. We infer that the two branches can learn good but different representation under the guide of the auxiliary task. Cancelling the physical driving license can bring much benefits: No punishment because of forgetting to carry the license; reduce the administrative costs; put an end to the use of fake licenses… Target Post 1 Topic: Cancel the Driving License (Support) Yes! Just use the citizen’s ID card for replacement. (Support) Good proposal! Support! (Refute) I don’t support it. (Refute) Don’t think the cost can be reduced. The costs of new electronic devices and larger data system are not small. Comments Attached to 1 Human traffickers are hateful. People’s Congress Baoyan Zhang thinks that woman- and childtrafficking cases should be sentenced to death and the present sentence of five to 10 years in prison is not heavy enough. Target Post 2 Topic: Suggest Death Penalty for Woman- & Child-traffickers (Support) Directly sentence to death. Execute immediately! (Support) Those harboringtraffickers also need death penalty! (Support) Support! All the child traffickers should be sentenced to death penalty! (Refute) Drug smugglers are sentenced to death, but so many people still do. If we use death penalty to traffickers, they may task crazier actions. Should think more carefully. Comments Attached to 2 Branch Weights U: 0.874 R : 0.126 R : 0.783 Branch Weights U : 0.217 Figure 3: Examples of controversial posts that rely more on one of the two branches. The attention weights of the two posts are on the horizontal bars (left: the U branch, right: the R branch). Post 1 rely more on U (0.874 > 0.126) while Post 2 more on R (0.217 < 0.783). 4.6 Case Study We conduct a case study to further answer EQ3 from the perspective of samples. We compare the attention weight of the U and R branch in DTPCGCN and exhibit some examples where the final decisions lean on one of the two branches. 523 Figure 3 shows two examples in the testing set of the Weibo dataset. The DTPC-GCN rely more on the topic-unrelated features from Branch U when classifying Post 1 (0.874 > 0.126), while more on the topic-related features from Branch R when classifying Post 2 (0.217 < 0.783). The topic of Post 1, Cancel the Driving License, is weakly relevant to topics in training set, and the comments mostly use topic-unspecific words such as simple support and good proposal. Thus, the topic-unrelated features are more beneficial for judging. In contrast, Post 2 discusses the death penalty for women and children traffickers, relevant to one of the topics in the training set, Improve Sentencing Standards for Sexually Assault on Children. Further, both of the two topics are full of comments on death penalty. Exploiting more of the topic-related features is reasonable for the final decision. 4.7 Error Analysis By conducting the error analysis on 186 misclassified samples in the Weibo dataset, we find three main types of samples that lead to the misclassification: 1) 22.6% of the wrong samples are with too much noise in the comments, including unrelated and neutral comments. 2) 16.1% are with a very deep tree structure. This kind of structure is helpful for controversy detection (Hessel and Lee, 2019), but the ability of GCN to obtain information from this kind of structure is limited. 3) 10.2% are with obscure and complex statements. These wrong cases indicate that better handling the noisy data, learning more deep structural features, and mining the semantic more deeply have the potential to improve the performance. 5 Conclusion In this paper, we propose a novel method TPCGCN to integrate the information from the graph structure and content of topics, posts, and comments for post-level controversy detection on social media. Unlike the existing works, we exploit the information from related posts in the same topic and the reply structure for more effective detection. To improve the performance of our model for inter-topic detection, we propose an extension of TPC-GCN named DTPC-GCN, to disentangle the topic-related and topic-unrelated features and then dynamically fuse them. Extensive experiments conducted on two datasets demonstrate that our proposed models outperform the compared methods and prove that our models can integrate both semantic and structural information with significant genaralizablity. Acknowledgments The authors thank Peng Qi, Mingyan Lu, Guang Yang, and Jiachen Wang for helpful discussion. This work is supported by the National Nature Science Foundation of China (U1703261). References Aseel Addawood, Rezvaneh Rezapour, Omid Abdar, and Jana Diesner. 2017. Telling apart tweets associated with controversial versus non-controversial topics. In Proceedings of the Second Workshop on NLP and Computational Social Science, pages 32– 41, Vancouver, Canada. Association for Computational Linguistics. Rawia Awadallah, Maya Ramanath, and Gerhard Weikum. 2012. Harmony and dissonance: organizing the people’s voices on political controversies. In Proceedings of the fifth ACM International Conference on Web Search and Data Mining, pages 523– 532. ACM. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the third International Conference on Learning Representations. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111. Association for Computational Linguistics. Yoonjung Choi, Yuchul Jung, and Sung-Hyon Myaeng. 2010. Identifying controversial issues and their subtopics in news articles. In Pacific-Asia Workshop on Intelligence and Security Informatics, pages 140– 153. Springer. Mauro Coletto, Kiran Garimella, Aristides Gionis, and Claudio Lucchese. 2017. A motif-based approach for identifying controversy. In Eleventh International AAAI Conference on Web and Social Media. AAAI. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. 524 Shiri Dori-Hacohen and James Allan. 2013. Detecting controversy on the web. In Proceedings of the 22nd ACM Intedlrnational Conference on Information and Knowledge Management, pages 1845–1848. ACM. Shiri Dori-Hacohen and James Allan. 2015. Automated controversy detection on the web. In European Conference on Information Retrieval, pages 423–434. Springer. Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2017. Reducing controversy by connecting opposing views. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 81–90. ACM. Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Quantifying controversy on social media. ACM Transactions on Social Computing, 1(1):3. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural networks, 18(5-6):602–610. Jack Hessel and Lillian Lee. 2019. Something’s brewing! early prediction of controversy-causing posts from discussion features. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1648–1659, Minneapolis, Minnesota. Association for Computational Linguistics. Myungha Jang, John Foley, Shiri Dori-Hacohen, and James Allan. 2016. Probabilistic approaches to controversy detection. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management, pages 2069–2072. ACM. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proceedings of the fifth International Conference on Learning Representations. Aniket Kittur, Bongwon Suh, Bryan A Pendleton, and Ed H Chi. 2007. He says, she says: conflict and coordination in wikipedia. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 453–462. ACM. Chang Li and Dan Goldwasser. 2019. Encoding social information with graph convolutional networks for Political perspective detection in news media. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2594– 2604. Association for Computational Linguistics. Jasper Linmans, Bob van de Velde, and Evangelos Kanoulas. 2018. Improved and robust controversy detection in general web pages using semantic approaches under large scale conditions. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 1647–1650. ACM. Diego Marcheggiani, Joost Bastings, and Ivan Titov. 2018. Exploiting semantics in neural machine translation with graph convolutional networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 486–492. Association for Computational Linguistics. Azadeh Nematzadeh, Grace Bang, Xiaomo Liu, and Zhiqiang Ma. 2019. Empirical study on detecting controversy in social media. arXiv preprint arXiv:1909.01093. Ana-Maria Popescu and Marco Pennacchiotti. 2010. Detecting controversial events from twitter. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, pages 1873–1876. ACM. Hoda Sepehri Rad and Denilson Barbosa. 2012. Identifying controversial articles in wikipedia: A comparative study. In Proceedings of the eighth Annual International Symposium on Wikis and Open Collaboration, page 7. ACM. Nils Rethmeier, Marc H¨ubner, and Leonhard Hennig. 2018. Learning comment controversy prediction in web discussions using incidentally supervised multitask CNNs. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 316–321. Association for Computational Linguistics. Mikalai Tsytsarau, Themis Palpanas, and Kerstin Denecke. 2010. Scalable discovery of contradictions on the web. In Proceedings of the 19th International Conference on World Wide Web, pages 1195–1196. ACM. Ba-Quy Vuong, Ee-Peng Lim, Aixin Sun, Minh-Tam Le, Hady Wirawan Lauw, and Kuiyu Chang. 2008. On ranking controversies in wikipedia: models and evaluation. In Proceedings of the first International Conference on Web Search and Data Mining, pages 171–182. ACM. Lu Wang and Claire Cardie. 2014. A piece of my mind: A sentiment analysis approach for online dispute detection. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics 525 (Volume 2: Short Papers), pages 693–699. Association for Computational Linguistics. Yaqing Wang, Fenglong Ma, Zhiwei Jin, Ye Yuan, Guangxu Xun, Kishlay Jha, Lu Su, and Jing Gao. 2018. Eann: Event adversarial neural networks for multi-modal fake news detection. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 849– 857. ACM. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7370–7377. AAAI. Taha Yasseri, Robert Sumi, Andr´as Rung, Andr´as Kornai, and J´anos Kert´esz. 2012. Dynamics of conflicts in wikipedia. PloS one, 7(6):1–12. Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. 2018. Graph convolutional neural networks for webscale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 974– 983. ACM. 526 A Topics in the Weibo dataset # Topics 1 Wechat businessman Ting Zhang and his wife paid taxes of 2.1 billion. (张庭夫妇微商纳税21亿) 2 Singer Zhiqian Xue climbed a telegraph pole. (薛之谦爬电线杆) 3 Young Artist Yuan Wang was spotted to smoke. (王源抽烟) 4 Actor Yunlei Zhang believe women must do home cleaning well. (张云雷女人连家务活都不干好) 5 Jiuxiang Sun sparred with the audience. (孙九香怼观众) 6 Host Xin Wu sold the gift that Actor Hanliang Zhong gave. (吴昕将钟汉良送的礼物卖了) 7 Director Huatao Teng said he wrongly invited Actor Han Lu. (滕华涛称用错了鹿晗) 8 Actor Changjiang Pan responded for his not knowing who Xukun Cai was. (潘长江回应不认识蔡徐坤) 9 Constume drama and idol drama will be off air from August. (8月起停播娱乐性古装剧偶像剧) 10 A woman who was questioned to occupy the seats showed six train tickets. (女子被质疑霸座掏出6张车票) 11 An Internet user was detained for creating doggerels that slandered the Yichun City’s image. (打油诗拘留) 12 Scanning QR codes can let you know the cleaning times of hotel sheets. (酒店床单洗过几次扫码即知) 13 31 names of places that do not conform the regulations in Xiamen are required to change. (厦门31个不规范地名被要求 整改) 14 Traditional Chinese medicine injection. (中药注射液) 15 Jilin University provides wake-up services for foreign students. (吉林大学为留学生提供叫醒服务) 16 A Gaokao-taking student who was rejected by Peking University for three times in the same year responded. (考生回应 被北大三次退档) 17 Xiaohongshu App was removed by top Android app stores. (小红书疑被各大安卓应用商店下架) 18 FView questioned the authenticity of the Moon photo captured by the Huawei phone. (爱否质疑华为拍的月亮造假) 19 A new advertisement of Burger King is suspected of racial discrimination. (汉堡王新广告被指种族歧视) 20 Zara responded for being suspected of uglifying a Chinese model. (zara回应丑化中国模特) 21 A microblogger implied that Xiaomi’s Mimoji copied Apple’s Memoji. (小米回应萌拍抄袭苹果事件) 22 Baidu CEO Robin Li was splashed water. (李彦宏被泼水) 23 Huawei announced HarmonyOS. (华为鸿蒙系统发布) 24 Xiaomi adjusted its organizational structure. (小米组织架构调整) 25 Resume the mandatory before-marriage examination. (建议恢复强制性婚检) 26 Add another legal day-off every other week. (建议每周双休改成隔周三休) 27 Lower the legal marriageable age to 20 for male and 18 for female. (建议法定最低婚龄修订男20女18) 28 Cancel the driving license. (建议取消机动车驾驶证) 29 Lower the minimum age of criminal responsibility for juveniles to 12. (建议未成年人刑责年龄降至12岁) 30 The salary of teachers should not be lower than civil servants. (教师待遇不应低于公务员) 31 Regulate the phenomenon that let parents check homework. (建议严禁批作业转移给家长) 32 Suggest printing horror pictures on cigarette boxes. (建议烟盒印恐怖图片) 33 Improve Sentencing Standards for Sexually Assault on Children (完善性侵儿童犯罪量刑标准) 34 Suggest a minor long leave every month. (建议实行每月一次小长假) 35 Women with a second child should have more supporting policies. (建议给予生二胎女性更多配套措施) 36 Suggest promoting education of death for all citizens. (建议全民开展死亡教育) 37 Both of the wife and husband should have maternity leave. (建议夫妻一起休产假) 38 Suggest extending women’s maternity leave by one month. (建议女性产假延长一个月) 39 Need heavier punishment to the violence to doctors. (建议对暴力伤医从严判决) 40 Suggest at least 10 years in prison for child-traffickers. (建议拐卖儿童最低刑期10年) 41 Suggest different prices for seat tickets and stand-by tickets. (建议改进高铁站票座票同价) 42 Severely punish the juveniles for violating the law on purpose. (建议严管未成年人知法犯法) 43 Suggest death penalty for woman- and child-traffickers. (建议拐卖妇女儿童罪最高调至死刑) 44 The Double First-Class University list should be allowed to change. (建议双一流大学名单流动) 45 Forbid the no-dining-room catering companies to deliver take-out food. (建议严禁无实体店外卖) 46 Include the lunar New Year’s Eve in the legal holidays. (建议年三十纳入法定假期) 47 Give special care to menstrual female employees. (建议给经期女职工特殊保护) 48 Forbid the juveniles’ being live video streamers on the Internet. (建议禁止未成年人担任网络主播) 49 Suggest parents going to schools for learning to be a qualified parents. (建议上家长学校学当家长) Table 6: 49 topics in the Weibo dataset. We modify some words and polish the sentences to improve the understandability when translating them into English.
2020
49
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5523–5539 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5523 Cross-Linguistic Syntactic Evaluation of Word Prediction Models Aaron Mueller1 Garrett Nicolai1† Panayiota Petrou-Zeniou2 Natalia Talmina2 Tal Linzen1,2 1Department of Computer Science 2Department of Cognitive Science Johns Hopkins University {amueller, gnicola2, ppetrou1, talmina, tal.linzen}@jhu.edu Abstract A range of studies have concluded that neural word prediction models can distinguish grammatical from ungrammatical sentences with high accuracy. However, these studies are based primarily on monolingual evidence from English. To investigate how these models’ ability to learn syntax varies by language, we introduce CLAMS (Cross-Linguistic Assessment of Models on Syntax), a syntactic evaluation suite for monolingual and multilingual models. CLAMS includes subject-verb agreement challenge sets for English, French, German, Hebrew and Russian, generated from grammars we develop. We use CLAMS to evaluate LSTM language models as well as monolingual and multilingual BERT. Across languages, monolingual LSTMs achieved high accuracy on dependencies without attractors, and generally poor accuracy on agreement across object relative clauses. On other constructions, agreement accuracy was generally higher in languages with richer morphology. Multilingual models generally underperformed monolingual models. Multilingual BERT showed high syntactic accuracy on English, but noticeable deficiencies in other languages. 1 Introduction Neural networks can be trained to predict words from their context with much greater accuracy than the architectures used for this purpose in the past. This has been shown to be the case for both recurrent neural networks (Mikolov et al., 2010; Sundermeyer et al., 2012; Jozefowicz et al., 2016) and non-recurrent attention-based models (Devlin et al., 2019; Radford et al., 2019). To gain a better understanding of these models’ successes and failures, in particular in the domain of syntax, proposals have been made for testing the † Work done while at Johns Hopkins University. Now in the University of British Columbia’s Linguistics Department. models on subsets of the test corpus where successful word prediction crucially depends on a correct analysis of the structure of the sentence (Linzen et al., 2016). A paradigmatic example is subjectverb agreement. In many languages, including English, the verb often needs to agree in number (here, singular or plural) with the subject (asterisks represent ungrammatical word predictions): (1) The key to the cabinets is/*are next to the coins. To correctly predict the form of the verb (underlined), the model needs to determine that the head of the subject of the sentence—an abstract, structurally defined notion—is the word key rather than cabinets or coins. The approach of sampling challenging sentences from a test corpus has its limitations. Examples of relevant constructions may be difficult to find in the corpus, and naturally occurring sentences often contain statistical cues (confounds) that make it possible for the model to predict the correct form of the verb without an adequate syntactic analysis (Gulordava et al., 2018). To address these limitations, a growing number of studies have used constructed materials, which improve experimental control and coverage of syntactic constructions (Marvin and Linzen, 2018; Wilcox et al., 2018; Futrell et al., 2019; Warstadt et al., 2019a). Existing experimentally controlled data sets—in particular, those targeting subject-verb agreement— have largely been restricted to English. As such, we have a limited understanding of the effect of the cross-linguistic variability in neural networks’ syntactic prediction abilities. In this paper, we introduce the Cross-Linguistic Assessment of Models on Syntax (CLAMS) data set, which extends the subject-verb agreement component of the Marvin and Linzen (2018) challenge set to French, German, Hebrew and Russian. By focusing on a single lin5524 guistic phenomenon in related languages,1 we can directly compare the models’ performance across languages. We see the present effort as providing a core data set that can be expanded in future work to improve coverage to other languages and syntactic constructions. To this end, we release the code for a simple grammar engineering framework that facilitates the creation and generation of syntactic evaluation sets.2 We use CLAMS to test two hypotheses. First, we hypothesize that a multilingual model would show transfer across languages with similar syntactic constructions, which would lead to improved syntactic performance compared to monolingual models. In experiments on LSTM language models (LMs), we do not find support for this hypothesis; contrarily, accuracy was lower for the multilingual model than the monolingual ones. Second, we hypothesize that language models would be better able to learn hierarchical syntactic generalizations in morphologically complex languages (which provide frequent overt cues to syntactic structure) than in morphologically simpler languages (Gulordava et al., 2018; Lorimor et al., 2008; McCoy et al., 2018). We test this using LSTM LMs we train, and find moderate support for this hypothesis. In addition to our analysis of LSTM LMs, we demonstrate the utility of CLAMS for testing pretrained word prediction models. We evaluate multilingual BERT (Devlin et al., 2019), a bidirectional Transformer model trained on a multilingual corpus, and find that this model performs well on English, has mixed syntactic abilities in French and German, and performs poorly on Hebrew and Russian. Its syntactic performance in English was somewhat worse than that of monolingual English BERT, again suggesting that interference between languages offsets any potential syntactic transfer. 2 Background and Previous Work 2.1 Word Prediction Models Language models (LMs) are statistical models that estimate the probability of sequences of words—or, equivalently, the probability of the next word of the sentence given the preceding ones. Currently, the most effective LMs are based on neural networks that are trained to predict the next word in a 1English, French, German and Russian are all IndoEuropean languages, and (Modern) Hebrew syntax exhibits European areal influence (for different perspectives, see Wexler 1990; Zuckermann 2006; Zeldes 2013). 2https://github.com/aaronmueller/clams large corpus. Neural LMs are commonly based on LSTMs (Hochreiter and Schmidhuber, 1997; Sundermeyer et al., 2012) or non-recurrent attentionbased architectures (Transformers, Vaswani et al. 2017). The results of existing studies comparing the performance of the two architectures on grammatical evaluations are mixed (Tran et al., 2018; van Schijndel et al., 2019), and the best reported syntactic performance on English grammatical evaluations comes from LMs trained with explicit syntactic supervision (Kuncoro et al., 2018, 2019). We focus our experiments in the present study on LSTM-based models, but view CLAMS as a general tool for comparing LM architectures. A generalized version of the word prediction paradigm, in which a bidirectional Transformerbased encoder is trained to predict one or more words in arbitrary locations in the sentence, has been shown to be an effective pre-training method in systems such as BERT (Devlin et al., 2019). While there are a number of variations on this architecture (Raffel et al., 2019; Radford et al., 2019), we focus our evaluation on the pre-trained English BERT and multilingual BERT. 2.2 Acceptability Judgments Human acceptability judgments have long been employed in linguistics to test the predictions of grammatical theories (Chomsky, 1957; Sch¨utze, 1996). There are a number of formulations of this task; we focus on the one in which a speaker is expected to judge a contrast between two minimally different sentences (a minimal pair). For instance, the following examples illustrate the contrast between grammatical and ungrammatical subjectverb agreement on the second verb in a coordination of short (2a) and long (2b) verb phrases; native speakers of English will generally agree that the first underlined verb is more acceptable than the second one in this context. (2) Verb-phrase coordination: a. The woman laughs and talks/*talk. b. My friends play tennis every week and then get/*gets ice cream. In computational linguistics, acceptability judgments have been used extensively to assess the grammatical abilities of LMs (Linzen et al., 2016; Lau et al., 2017). For the minimal pair paradigm, this is done by determining whether the LM assigns a higher probability to the grammatical member of 5525 the minimal pair than to the ungrammatical member. This paradigm has been applied to a range of constructions, including subject-verb agreement (Marvin and Linzen, 2018; An et al., 2019), negative polarity item licensing (Marvin and Linzen, 2018; Jumelet and Hupkes, 2018), filler-gap dependencies (Chowdhury and Zamparelli, 2018; Wilcox et al., 2018), argument structure (Kann et al., 2019), and several others (Warstadt et al., 2019a). To the extent that the acceptability contrast relies on a single word in a particular location, as in (2), this approach can be extended to bidirectional word prediction systems such as BERT, even though they do not assign a probability to the sentence (Goldberg, 2019). As we describe below, the current version of CLAMS only includes contrasts of this category. An alternative use of acceptability judgments in NLP involves training an encoder to classify sentences into acceptable and unacceptable, as in the Corpus of Linguistic Acceptability (CoLA, Warstadt et al. 2019b). This approach requires supervised training on acceptable and unacceptable sentences; by contrast, the prediction approach we adopt can be used to evaluate any word prediction model without additional training. 2.3 Grammatical Evaluation Beyond English Most of the work on grammatical evaluation of word prediction models has focused on English. However, there are a few exceptions, which we discuss in this section. To our knowledge, all of these studies have used sentences extracted from a corpus rather than a controlled challenge set, as we propose. Gulordava et al. (2018) extracted English, Italian, Hebrew, and Russian evaluation sentences from a treebank. Dhar and Bisazza (2018) trained a multilingual LM on a concatenated French and Italian corpus, and tested whether grammatical abilities transfer across languages. Ravfogel et al. (2018) reported an in-depth analysis of LSTM LM performance on agreement prediction in Basque, and Ravfogel et al. (2019) investigated the effect of different syntactic properties of a language on RNNs’ agreement prediction accuracy by creating synthetic variants of English. Finally, grammatical evaluation has been proposed for machine translation systems for languages such as German and French (Sennrich, 2017; Isabelle et al., 2017). 3 Grammar Framework To construct our challenge sets, we use a lightweight grammar engineering framework that we term attribute-varying grammars (AVGs). This framework provides more flexibility than the hard-coded templates of Marvin and Linzen (2018) while avoiding the unbounded embedding depth of sentences generated from a recursive contextfree grammar (CFG, Chomsky 1956). This is done using templates, which consist of preterminals (which have attributes) and terminals. A vary statement specifies which preterminal attributes are varied to generate ungrammatical sentences. Templates define the structure of the sentences in the evaluation set. This is similar to the expansions of the S nonterminal in CFGs. Preterminals are similar to nonterminals in CFGs: they have a lefthand side which specifies the name of the preterminal and the preterminal’s list of attributes, and a right-hand side which specifies all terminals to be generated by the preterminal. However, they are non-recursive and their right-hand sides may not contain other preterminals; rather, they must define a list of terminals to be generated. This is because we wish to generate all possible sentences given the template and preterminal definitions; if there existed any recursive preterminals, there would be an infinite number of possible sentences. All preterminals have an attribute list which is defined at the same time as the preterminal itself; this list is allowed to be empty. A terminal is a token or list of space-separated tokens. The vary statement specifies a list of preterminals and associated attributes for each. Typically, we only wish to vary one preterminal per grammar such that each grammatical case is internally consistent with respect to which syntactic feature is varied. The following is a simple example of an attribute-varying grammar: vary: V[] S[] →je V[1,s] V[1,s] →pense V[2,s] →penses V[1,p] →pensons V[2,p] →pensez Preterminals are blue and attributes are orange. Here, the first statement is the vary statement. This is followed by a template, with the special S keyword in red. All remaining statements are preterminal definitions. All attributes are spec5526 ified within brackets as comma-separated lists; these may be multiple characters and even multiple words long, so long as they do not contain commas. The output of this AVG is as follows (True indicates that the sentence is grammatical): True je pense False je penses False je pensons False je pensez This particular grammar generates all possible verb forms because the attribute list for V in the vary statement is empty, which means that we may generate any V regardless of attributes. One may change which incorrect examples are generated by changing the vary statement; for example, if we change V[] to V[1], we would only vary over verbs with the 1 (first-person) attribute, thus generating je pense and *je pensons. One may also add multiple attributes within a single vary preterminal (implementing a logical AND) or multiple semicolon-separated vary preterminals (a logical OR). Changing V[] to V[1,s] in the example above would generate all first-person singular V terminals (here, je pense). If instead we used V[1]; V[s], this would generate all V terminals with either first-person and/or singular attributes (here, je pense, *je penses, and *je pensons). 4 Syntactic Constructions We construct grammars in French, German, Hebrew and Russian for a subset of the English constructions from Marvin and Linzen (2018), shown in Figure 1. These are implemented as AVGs by native or fluent speakers of the relevant languages who have academic training in linguistics.3 A number of the constructions used by Marvin and Linzen are English-specific. None of our languages besides English allow relative pronoun dropping, so we are unable to compare performance across languages on reduced relative clauses (the author the farmers like smile/*smiles). Likewise, we exclude Marvin and Linzen’s sentential complement condition, which relies on the English-specific ability to omit complementizers (the bankers knew the officer smiles/*smile). The Marvin and Linzen (2018) data set includes two additional structure-sensitive phenomena other than subject-verb agreement: reflexive anaphora 3The German grammar was created by a non-native speaker but was then validated by native speakers. Simple Agreement: The author laughs/*laugh. Across a Prepositional Phrase: The farmer near the parents smiles/*smile. Across a Subject Relative Clause: The officers that love the skater *smiles/smile. Short Verb Phrase Coordination: The senator smiles and laughs/*laugh. Long Verb Phrase Coordination: The manager writes in a journal every day and likes/*like to watch television shows. Across Object Relative Clause: The farmer that the parents love swims/*swim. Within Object Relative Clause: The farmer that the parents *loves/love swims. Figure 1: Syntactic constructions used in CLAMS. Only English examples are shown; for examples in other languages, see Appendix A. Ungrammatical forms are marked with asterisks. and negative polarity item licensing. We do not include reflexive anaphora, as our languages vary significantly in how those are implemented. French and German, for example, do not distinguish singular from plural third-person reflexive pronouns. Similarly, negative polarity items (NPIs) have significantly different distributions across languages, and some of our evaluation languages do not even have items comparable to English NPIs (Giannakidou, 2011). We attempt to use translations of all terminals in Marvin and Linzen (2018). In cases where this is not possible (due to differences in LM vocabulary across languages), we replace the word with another in-vocabulary item. See Appendix D for more detail on vocabulary replacement procedures. For replicability, we observe only third-person singular vs. plural distinctions (as opposed to all possible present-tense inflections) when replicating the evaluation sets of Marvin and Linzen (2018) in any language. 5 Experimental Setup 5.1 Corpora Following Gulordava et al. (2018), we download recent Wikipedia dumps for each of the languages, 5527 strip the Wikipedia markup using WikiExtractor,4 and use TreeTagger5 to tokenize the text and segment it into sentences. We eliminate sentences with more than 5% unknown words. Our evaluation is within-sentence rather than across sentences. Thus, to minimize the availability of cross-sentential dependencies in the training corpus, we shuffle the preprocessed Wikipedia sentences before extracting them into train/dev/test corpora. The corpus for each language consists of approximately 80 million tokens for training, as well as 10 million tokens each for development and testing. We generate language-specific vocabularies containing the 50,000 most common tokens in the training and development set; as is standard, out-of-vocabulary tokens in the training, development, and test sets are replaced with <unk>. 5.2 Training and Evaluation We experiment with recurrent LMs and Transformer-based bidirectional encoders. LSTM LMs are trained for each language using the best hyperparameters in van Schijndel et al. (2019).6 We will refer to these models as monolingual LMs. We also train a multilingual LSTM LM over all of our languages. The training set for this model is a concatenation of all of the individual languages’ training corpora. The validation and test sets are concatenated in the same way, as are the vocabularies. We use the same hyperparameters as the monolingual models (Footnote 6). At each epoch, the corpora are randomly shuffled before batching; as such, each training batch consists with very high probability of sentences from multiple languages. To obtain LSTM accuracies, we compute the total probability of each of the sentences in our challenge set, and then check within each minimal set whether the grammatical sentence has higher probability than the ungrammatical one. Because the syntactic performance of LSTM LMs has been found to vary across weight initializations (McCoy et al., 2018; Kuncoro et al., 2019), we report mean accuracy over five random initializations for each 4https://github.com/attardi/ wikiextractor 5https://www.cis.uni-muenchen.de/ ˜schmid/tools/TreeTagger/ 6 Specifically, we use 2-layer word-level LSTMs with 800 hidden units in each layer, 800-dimensional word embeddings, initial learning rate 20.0 (annealed after any epoch in which validation perplexity did not improve relative to the previous epoch), batch size 20, and dropout probability 0.2. LM. See Appendix C for standard deviations across runs on each test construction in each language. We evaluate the syntactic abilities of multilingual BERT (mBERT, Devlin et al. 2019) using the approach of Goldberg (2019). Specifically, we mask out the focus verb, obtain predictions for the masked position, and then compare the scores assigned to the grammatical and ungrammatical forms in the minimal set. We use the scripts provided by Goldberg7 without modification, with the exception of using bert-base-multilingual-cased to obtain word probabilities. This approach is not equivalent to the method we use to evaluate LSTM LMs, as LSTM LMs score words based only on the left context, whereas BERT has access to left and right contexts. In some cases, mBERT’s vocabulary does not include the focus verbs that we vary in a particular minimal set. In such cases, if either or both verbs were missing, we skip that minimal set and calculate accuracies without the sentences contained therein. 6 Results 6.1 LSTMs The overall syntactic performance of the monolingual LSTMs was fairly consistent across languages (Table 1 and Figure 2). Accuracy on short dependencies without attractors—Simple Agreement and Short VP Coordination—was close to perfect in all languages. This suggests that all monolingual models learned the basic facts of agreement, and were able to apply them to the vocabulary items in our materials. At the other end of the spectrum, performance was only slightly higher than chance in the Across an Object Relative Clause condition for all languages except German, suggesting that LSTMs tend to struggle with center embedding—that is, when a subject-verb dependency is nested within another dependency of the same kind (Marvin and Linzen, 2018; Noji and Takamura, 2020). There was higher variability across languages in the remaining three constructions. The German models had almost perfect accuracy in Long VP Coordination and Across Prepositional Phrase, compared to accuracies ranging between 0.76 and 0.87 for other languages in those constructions. The Hebrew, Russian, and German models showed very high performance on the Across Subject Relative Clause condition: ≥0.88 compared to 0.6–0.71 7https://github.com/yoavg/bert-syntax 5528 English French German Hebrew Russian Mono Multi Mono Multi Mono Multi Mono Multi Mono Multi Test Perplexity 57.90 66.13 35.48 57.40 46.31 61.06 48.78 61.85 35.09 54.61 Simple agreement 1.00 1.00 1.00 1.00 1.00 0.96 0.95 0.96 0.91 0.75 VP coordination (short) 0.94 0.96 0.97 0.85 0.99 1.00 1.00 0.95 0.98 0.92 VP coordination (long) 0.76 0.69 0.85 0.72 0.96 0.73 0.84 0.70 0.86 0.72 Across subject rel. clause 0.60 0.63 0.71 0.70 0.94 0.74 0.91 0.84 0.88 0.86 Within object rel. clause 0.89 0.79 0.99 0.99 0.74 0.69 1.00 0.88 0.95 0.88 Across object rel. clause 0.55 0.52 0.52 0.52 0.81 0.74 0.56 0.54 0.60 0.57 Across prepositional phrase 0.63 0.61 0.74 0.63 0.89 0.82 0.88 0.82 0.76 0.61 Average accuracy 0.77 0.74 0.83 0.78 0.90 0.81 0.88 0.81 0.85 0.76 Table 1: LSTM LM test perplexities and accuracies on CLAMS across languages for the language-specific monolingual models and for our multilingual model. Results are averaged across five random initializations. Chance accuracy is 0.5. Boldfaced numbers indicate the model that achieved the highest performance on a given construction across languages. in other languages (recall that all our results are averaged over five runs, so this pattern is unlikely to be due to a single outlier). With each of these trends, German seems to be a persistent outlier. This could be due to its marking of cases in separate article tokens—a unique feature among the languages evaluated here—or some facet of its word ordering or unique capitalization rules. In particular, subject relative clauses and object relative clauses have the same word order in German, but are differentiated by the case markings of the articles and relative pronouns. More investigation will be necessary to determine the sources of this deviation. For most languages and constructions, the multilingual LM performed worse than the monolingual LMs, even though it was trained on five times as much data as each of the monolingual ones. Its average accuracy in each language was at least 3 percentage points lower than that of the corresponding monolingual LMs. Although all languages in our sample shared constructions such as prepositional phrases and relative clauses, there is no evidence that the multilingual LM acquired abstract representations that enable transfer across those languages; if anything, the languages interfered with each other. The absence of evidence for syntactic transfer across languages is consistent with the results of Dhar and Bisazza (2020), who likewise found no evidence of transfer in an LSTM LM trained on two closely related languages (French and Italian). One caveat is that the hyperparameters we chose for all of our LSTM LMs were based on a monolingual LM (van Schijndel et al., 2019); it is Simple VP coord (short) VP coord (long) Across subj. rel. Within obj. rel. Across object rel. Across prep. 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy (averaged across languages) Chance Monolingual Multilingual Figure 2: Mean accuracy (bars) and standard deviation (whiskers) for LSTM LMs over all languages for each stimulus type. Note: these are means over languages per-case, whereas the numbers in Table 1 are means over cases per-language. possible that the multilingual LM would have been more successful if we had optimized its hyperparameters separately (e.g., it might benefit from a larger hidden layer). These findings also suggest that test perplexity and subject-verb agreement accuracy in syntactically complex contexts are not strongly correlated cross-linguistically. This extends one of the results of Kuncoro et al. (2019), who found that test perplexity and syntactic accuracy were not necessarily strongly correlated within English. Finally, the multilingual LM’s perplexity was always higher than that of the monolingual LMs. At 5529 English French German Hebrew Russian Simple agreement 1.00 1.00 0.95 0.70 0.65 VP coordination (short) 1.00 1.00 0.97 0.91 0.80 VP coordination (long) 0.92 0.98 1.00 0.73 — Across subject relative clause 0.88 0.57 0.73 0.61 0.70 Within object relative clause 0.83 — — — — Across object relative clause 0.87 0.86 0.93 0.55 0.67 Across prepositional phrase 0.92 0.57 0.95 0.62 0.56 Table 2: Multilingual BERT accuracies on CLAMS. If a hyphen is present, this means that all focus verbs for that particular language and construction were out-of-vocabulary. Chance accuracy is 0.5. first glance, this contradicts the results of ¨Ostling and Tiedemann (2017), who observed lower perplexity in LMs trained on a small number of very similar languages (e.g., Danish, Swedish, and Norwegian) than in LMs trained on just one of those languages. However, their perplexity rose precipitously when trained on more languages and/or lessrelated languages—as we have here. 6.2 BERT and mBERT Table 2 shows mBERT’s accuracies on all stimuli. Performance on CLAMS was fairly high in the languages that are written in Latin script (English, French and German). On English in particular, accuracy was high across conditions, ranging between 0.83 and 0.88 for sentences with relative clauses, and between 0.92 and 1.00 for the remaining conditions. Accuracy in German was also high: above 0.90 on all constructions except Across Subject Relative Clause, where it was 0.73. French accuracy was more variable: high for most conditions, but low for Across Subject Relative Clause and Across Prepositional Phrase. In all Latin-script languages, accuracy on Across an Object Relative Clause was much higher than in our LSTMs. However, the results are not directly comparable, for two reasons. First, as we have mentioned, we followed Goldberg (2019) in excluding the examples whose focus verbs were not present in mBERT’s vocabulary; this happened frequently (see Appendix D for statistics). Perhaps more importantly, unlike the LSTM LMs, mBERT has access to the right context of the focus word; in Across Object Relative Clause sentences (the farmers that the lawyer likes smile/*smiles.), the period at the end of the sentence may indicate to a bidirectional model that the preceding word (smile/smiles) is part of the main clause rather than the relative clause, and should therefore agree with farmers rather than lawyer. In contrast to the languages written in Latin script, mBERT’s accuracy was noticeably lower on Hebrew and Russian—even on the Simple Agreement cases, which do not pose any syntactic challenge. Multilingual BERT’s surprisingly poor syntactic performance on these languages may arise from the fact that mBERT’s vocabulary (of size 110,000) is shared across all languages, and that a large proportion of the training data is likely in Latin script. While Devlin et al. (2019) reweighted the training sets for each language to obtain a more even distribution across various languages during training, it remains the case that most of the largest Wikipedias are written in languages which use Latin script, whereas Hebrew script is used only by Hebrew, and the Cyrillic script, while used by several languages, is not as well-represented in the largest Wikipedias. We next compare the performance of monolingual and multilingual BERT. Since this experiment is not limited to using constructions that appear in all of our languages, we use additional constructions from Marvin and Linzen (2018), including reflexive anaphora and reduced relative clauses (i.e., relative clauses without that). We exclude their negative polarity item examples, as the two members of a minimal pair in this construction differ in more than one word position. The results of this experiment are shown in Table 3. Multilingual BERT performed better than English BERT on Sentential Complements, Short VP Coordination, and Across a Prepositional Phrase, but worse on Within an Object Relative Clause, Across an Object Relative Clause (no relative pronoun), and in Reflexive Anaphora Across a Relative Clause. The omission of the relative pronoun that caused a sharp drop in performance in mBERT, and a milder drop in English BERT. Otherwise, both models had similar accuracies on other stimuli. 5530 Mono Multi SUBJECT-VERB AGREEMENT Simple 1.00 1.00 In a sentential complement 0.83 1.00 VP coordination (short) 0.89 1.00 VP coordination (long) 0.98 0.92 Across subject rel. clause 0.84 0.88 Within object rel. clause 0.95 0.83 Within object rel. clause (no that) 0.79 0.61 Across object rel. clause 0.89 0.87 Across object rel. clause (no that) 0.86 0.64 Across prepositional phrase 0.85 0.92 Average accuracy 0.89 0.87 REFLEXIVE ANAPHORA Simple 0.94 0.87 In a sentential complement 0.89 0.89 Across a relative clause 0.80 0.74 Average accuracy 0.88 0.83 Table 3: English BERT (base) and multilingual BERT accuracies on the English stimuli from Marvin and Linzen (2018). Monolingual results are taken from Goldberg (2019). These results reinforce the finding in LSTMs that multilingual models generally underperform monolingual models of the same architecture, though there are specific contexts in which they can perform slightly better. 6.3 Morphological Complexity vs. Accuracy Languages vary in the extent to which they indicate the syntactic role of a word using overt morphemes. In Russian, for example, the subject is generally marked with a suffix indicating nominative case, and the direct object with a different suffix indicating accusative case. Such case distinctions are rarely indicated in English, with the exception of pronouns (he vs. him). English also displays significant syncretism: morphological distinctions that are made in some contexts (e.g., eat for plural subjects vs. eats for singular subjects) are neutralized in others (ate for both singular and plural subjects). We predict that greater morphological complexity, which is likely to correlate with less syncretism, will provide more explicit cues to hierarchical syntactic structure,8 and thus result in increased overall accuracy on a given language. To measure the morphological complexity of a 8For more evidence that explicit cues to structural information can aid syntactic performance, see Appendix B. language, we use the CWALS metric of Bentz et al. (2016): Pn i=1 fi n . This is a typological measure of complexity based on the World Atlas of Language Structures (WALS, Dryer and Haspelmath 2013), where fi refers to a morphological feature value normalized to the range [0, 1].9 This essentially amounts to a mean over normalized values of quantified morphological features. Here, n is 27 or 28 depending on the number of morphological categorizations present for a given language in WALS. 0.30 0.35 0.40 0.45 0.50 0.55 Morphological Complexity (CWALS) 0.7 0.8 0.9 1.0 Average Accuracy en en de de fr fr ru ru he he LSTM mBERT Figure 3: Morphological complexities against average accuracies per-language for LSTMs and mBERT. Does the morphological complexity of a language correlate with the syntactic prediction accuracy of LMs trained on that language? In the LSTM LMs (Table 1), the answer is generally yes, but not consistently. We see higher average accuracies for French than English (French has more distinct person/number verb inflections), higher for Russian than French, and higher for Hebrew than Russian (Hebrew verbs are inflected for person, number, and gender). However, German is again an outlier: despite its notably lower complexity than Hebrew and Russian, it achieved a higher average accuracy. The same reasoning applied in Section 6.1 for German’s deviation from otherwise consistent trends applies to this analysis as well. Nonetheless, the Spearman correlation between morphological complexity and average accuracy including German is 0.4; excluding German, it is 1.0. Because we have the same amount of training data per-language in the same domain, this could point to the importance of having explicit cues to lin9For example, if WALS states that a language has negative morphemes, f28 is 1; otherwise, f28 is 0. 5531 guistic structure such that models can learn that structure. While more language varieties need to be evaluated to determine whether this trend is robust, we note that this finding is consistent with that of Ravfogel et al. (2019), who compared English to a synthetic variety of English augmented with case markers and found that the addition of case markers increased LSTM agreement prediction accuracy. We see the opposite trend for mBERT (Table 2): if we take the average accuracy over all stimulus types for which we have scores for all languages— i.e., all stimulus types except Long VP Coordination and Within an Object Relative Clause—then we see a correlation of ρ = −0.9. In other words, accuracy is likely to decrease with increasing morphological complexity. This unexpected inverse correlation may be an artifact of mBERT’s limited vocabulary, especially in non-Latin scripts. Morphologically complex languages have more unique word types. In some languages, this issue can be mitigated to some extent by splitting the word into subword units, as BERT does; however, the effectiveness of such a strategy would be limited at best in a language with non-concatenative morphology such as Hebrew. Finally, we stress that the exclusion of certain stimulus types and the differing amount of training data per-language act as confounding variables, rendering a comparison between mBERT and LSTMs difficult. 7 Conclusions In this work, we have introduced the CLAMS data set for cross-linguistic syntactic evaluation of word prediction models, and used it to to evaluate monolingual and multilingual versions of LSTMs and BERT. The design conditions of Marvin and Linzen (2018) and our cross-linguistic replications rule out the possibility of memorizing the training data or relying on statistical correlations/token collocations. Thus, our findings indicate that LSTM language models can distinguish grammatical from ungrammatical subject-verb agreement dependencies with considerable overall accuracy across languages, but their accuracy declines on some constructions (in particular, center-embedded clauses). We also find that multilingual neural LMs in their current form do not show signs of transfer across languages, but rather harmful interference. This issue could be mitigated in the future with architectural changes to neural LMs (such as better handling of morphology), more principled combinations of languages (as in Dhar and Bisazza 2020), or through explicit separation between languages during training (e.g., using explicit language IDs). Our experiments on BERT and mBERT suggest (1) that mBERT shows signs of learning syntactic generalizations in multiple languages, (2) that it learns these generalizations better in some languages than others, and (3) that its sensitivity to syntax is lower than that of monolingual BERT. It is possible that its performance drop in Hebrew and Russian could be mitigated with fine-tuning on more data in these languages. When evaluating the effect of the morphological complexity of a language on the LMs’ syntactic prediction accuracy, we found that recurrent neural LMs demonstrate better hierarchical syntactic knowledge in morphologically richer languages. Conversely, mBERT demonstrated moderately better syntactic knowledge in morphologically simpler languages. Since CLAMS currently includes only five languages, this correlation should be taken as very preliminary. In future work, we intend to expand the coverage of CLAMS by incorporating language-specific and non-binary phenomena (e.g., French subjunctive vs. indicative and different person/number combinations, respectively), and by expanding the typological diversity of our languages. Acknowledgments This material is based on work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1746891. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the other supporting agencies. Additionally, this work was supported by a Google Faculty Research Award to Tal Linzen, and by the United States–Israel Binational Science Foundation (award 2018284). References Aixiu An, Peng Qian, Ethan Wilcox, and Roger Levy. 2019. Representation of constituents in neural language models: Coordination phrase as a case study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2881– 2892, Hong Kong, China. Association for Computational Linguistics. 5532 Christian Bentz, Tatyana Ruzsics, Alexander Koplenig, and Tanja Samardˇzi´c. 2016. A comparison between morphological complexity measures: Typological data vs. language corpora. In Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC), pages 142–153, Osaka, Japan. The COLING 2016 Organizing Committee. Noam Chomsky. 1956. Three models for the description of language. IRE Transactions on Information Theory, 2(3):113–124. Noam Chomsky. 1957. Syntactic Structures. Mouton, The Hague. Shammur Absar Chowdhury and Roberto Zamparelli. 2018. RNN simulations of grammaticality judgments on long-distance dependencies. In Proceedings of the 27th International Conference on Computational Linguistics, pages 133–144. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Prajit Dhar and Arianna Bisazza. 2018. Does syntactic knowledge in multilingual language models transfer across languages? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 374–377, Brussels, Belgium. Association for Computational Linguistics. Prajit Dhar and Arianna Bisazza. 2020. Understanding cross-lingual syntactic transfer in multilingual recurrent neural networks. arXiv preprint 2003.14056. Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32–42, Minneapolis, Minnesota. Association for Computational Linguistics. Anastasia Giannakidou. 2011. Negative and positive polarity items: Variation, licensing, and compositionality. In Semantics: An international handbook of natural language meaning, pages 1660– 1712. Berlin: Mouton de Gruyter. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. arXiv preprint 1901.05287. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205, New Orleans, Louisiana. Association for Computational Linguistics. Yiding Hao. 2020. Attribution analysis of grammatical dependencies in lstms. arXiv preprint 2005.00062. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Pierre Isabelle, Colin Cherry, and George Foster. 2017. A challenge set approach to evaluating machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2486–2496, Copenhagen, Denmark. Association for Computational Linguistics. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint 1602.02410. Jaap Jumelet and Dieuwke Hupkes. 2018. Do language models understand anything? On the ability of LSTMs to understand negative polarity items. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222–231, Brussels, Belgium. Association for Computational Linguistics. Katharina Kann, Alex Warstadt, Adina Williams, and Samuel R. Bowman. 2019. Verb argument structure alternations in word and sentence embeddings. In Proceedings of the Society for Computation in Linguistics (SCiL) 2019, pages 287–297. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436, Melbourne, Australia. Association for Computational Linguistics. Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable syntaxaware language models using knowledge distillation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3472–3484, Florence, Italy. Association for Computational Linguistics. Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, (5):1202–1247. 5533 Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics), 4:521– 535. Heidi Lorimor, Kathryn Bock, Ekaterina Zalkind, Alina Sheyman, and Robert Beard. 2008. Agreement and attraction in Russian. Language and Cognitive Processes, 23(6):769–799. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Association for Computational Linguistics. R. Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 2093—2098, Austin, TX. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010), pages 1045–1048, Makuhari, Chiba, Japan. Hiroshi Noji and Hiroya Takamura. 2020. An analysis of the utility of explicit negative examples to improve the syntactic abilities of neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, Washington, USA. Association for Computational Linguistics. Robert ¨Ostling and J¨org Tiedemann. 2017. Continuous multilinguality with language vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 644–649, Valencia, Spain. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint 1910.10683. Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the inductive biases of RNNs with synthetic variations of natural languages. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3532–3542, Minneapolis, Minnesota. Association for Computational Linguistics. Shauli Ravfogel, Yoav Goldberg, and Francis Tyers. 2018. Can LSTM learn to capture agreement? the case of Basque. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 98–107, Brussels, Belgium. Association for Computational Linguistics. Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn’t buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5835–5841, Hong Kong, China. Association for Computational Linguistics. Carson T Sch¨utze. 1996. The empirical base of linguistics: Grammaticality judgments and linguistic methodology. University of Chicago Press. Rico Sennrich. 2017. How grammatical is characterlevel neural machine translation? Assessing MT quality with contrastive translation pairs. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 376–382, Valencia, Spain. Association for Computational Linguistics. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In Proceedings of the 13th Annual Conference of the International Speech Communication Association (INTERSPEECH), pages 194–197. Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The importance of being recurrent for modeling hierarchical structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4731–4736, Brussels, Belgium. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2019a. BLiMP: A benchmark of linguistic minimal pairs for English. arXiv preprint 1912.00582. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019b. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Paul Wexler. 1990. The schizoid nature of modern Hebrew: A Slavic language in search of a Semitic past. Wiesbaden: Otto Harrassowitz Verlag. 5534 Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler–gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211–221, Brussels, Belgium. Association for Computational Linguistics. Amir Zeldes. 2013. Is Modern Hebrew Standard Average European? The view from European. Linguistic Typology, 17(3):439–470. Ghil’ad Zuckermann. 2006. A new vision for Israel Hebrew: Theoretical and practical implications of analyzing Israel’s main language as a semi-engineered Semito-European hybrid language. Journal of Modern Jewish Studies, 5(1):57–71. A Linguistic Examples This section provides examples of the syntactic structures included in the CLAMS dataset across languages. For Hebrew, we transliterate its original right-to-left script into the left-to-right Latin script; this makes labeling and glossing more consistent across languages. Hebrew was not transliterated in the training/development/test corpora or in the evaluation sets. In all examples, (a) is English, (b) is French, (c) is German, (d) is Hebrew, and (e) is Russian. The first case is simple agreement. This simply involves agreeing a verb with its adjacent subject, which should pose little challenge for any good language model regardless of syntactic knowledge. (3) Simple Agreement: a. The surgeons laugh/*laughs. b. Le The pilote pilot parle laughs / / *parlent. *laugh. c. Der The Schriftsteller writer spricht speaks / / *sprechen. *speak. d. Ha The meltsar server yashen sleeps / / yeshenim. *sleep. e. Врачи Doctors говорят speak / / *говорит. *speaks. Short verb-phrase coordination introduces some slight distance between the subject and verb, though the presence of the previous verb should give a model a clue as to which inflection should be more probable. (4) VP coordination (short): a. The author swims and smiles/*smile. b. Les The directeurs directors parlent talk et and d´em´enagent move / / *d´em´enage. *moves. c. Der The Polizist police.officer schwimmt swims und and lacht laughs / / *lachen. *laugh. d. Ha The tabaxim cooks rokdim dance ve and soxim swim / / *soxe. *swims. e. Профессор Professor старый is.old и and читает reads / / *читают. *read. Long verb-phrase coordination is similar, but makes each verb phrase much longer to introduce more distance and attractors between the subject and target verb. (5) VP coordination (long): a. The teacher knows many different foreign languages and likes/*like to watch television shows. b. L’ The agriculteur farmer ´ecrit writes dans in un a journal journal tous all les the jours days et and pr´ef`ere prefers / / *pr´ef`erent *prefer jouer to.play au at.the tennis tennis avec with des some coll`egues. colleagues. c. Die The Bauern farmers sprechen speak viele many verschiedene various Sprachen languages und and sehen watch / / *sieht *watches gern gladly Fernsehprogramme. TV.shows. d. Ha The tabax cook ohev likes litspot to.watch be in toxniot shows televizya TV ve and gar lives / / *garim *live be in merkaz center ha the ir. city. e. Автор Author знает knows много many иностранных foreign языков languages и and любит likes / / *любят *like смотреть to.watch телепередачи. TV.shows. Now we have more complex structures that require some form of structural knowledge if a model is to obtain the correct predictions with more than random-chance accuracy. Agreement across a subject relative clause involves a subject with an attached relative clause containing a verb and object, followed by the main verb. Here, the attractor is the object in the relative clause. (An attractor is an intervening noun between a noun and its associated finite verb which might influence a human’s or model’s decision as to which inflection to choose. This might be of the same person and number, or, in more difficult cases, a different person and/or number. It does not necessarily need to occur between the noun and its associated verb, though this 5535 does tend to render this task more difficult.) (6) Across a subject relative clause: a. The officers that love the chef are/*is old. b. Les The chirurgiens surgeons qui that d´etestent hate le the garde guard retournent return / / *retourne. *returns c. Der The Kunde, customer der that die the Architekten architects hasst, hates ist is / / *sind *are klein. short. d. Ha The menahel manager she who ma’arits admires et ACC ha the shomer guard rats runs / / *ratsim. *run. e. Пилоты, Pilots которые that понимают understand агентов, agents говорят speak / / *говорит. *speaks. Agreement within an object relative clause requires the model to inflect the proper verb inside of an object relative clause; the object relative clause contains a noun and an associated transitive verb whose object requirement is filled by the relative pronoun. The model must choose the proper verb inflection given the noun within the relative clause as opposed to the noun outside of it. This may seem similar to simple agreement, but we now have an attractor which appears before the noun of the target verb. (7) Within an object relative clause: a. The senator that the executives love/*loves laughs. b. Les The professeurs professors que that le the chef boss admire admires / / *admirent *admire parlent. talk. c. Die The Polizisten, police.officers die that der the Bruder brother hasst, hates / / *hassen, *hate sind are alt old. d. Ha The menahel manager she that ha the nahag driver ma’aritz admires / / *ma’aritsim *admire soxe. swims. e. Сенаторы, Senators которых that рабочие workers ищут, seek / / *ищет, *seeks ждали. wait. Agreement across an object relative clause is similar, but now the model must choose the correct inflection for the noun outside of the relative clause. This requires the model to capture long-range dependencies, and requires it to have the proper structural understanding to ignore the relative clause when choosing the proper inflection for the focus verb. (8) Across an object relative clause: a. The senator that the executives love laughs/*laugh. b. Les The professeurs professors que that le the chef boss admire admires parlent talk / / *parle. *talks. c. Der The Senator, senator den that die the T¨anzer dancers m¨ogen, like spricht speaks / / *sprechen. *speak. d. Ha The katsin officer she that ha the zamar singer ohev likes soxe swims / / *soxim. *swim. e. Фермеры, Farmers которых that танцоры dancers хотят, want большие are.big / / *большой. *is.big. Finally, agreement across a prepositional phrase entails placing a prepositional phrase after the subject; the prepositional phrase contains an attractor, which makes choosing the correct inflection more difficult. (9) Across a prepositional phrase: a. The consultants behind the executive smile/*smiles. b. Les The clients clients devant in.front.of l’ the adjoint deputy sont are / / *est *is vieux. old. c. Der The Lehrer teacher neben next.to den the Ministern ministers lacht laughs / / *lachen. *laugh. d. Ha The meltsarim servers leyad near ha the zamarim singers nos’im drive / / *nose’a. *drives. e. Режиссёры Directors перед in.front.of агентами agents маленькие are.small / / *маленький. *is.small. Some of the constructions used by Marvin and Linzen (2018) could not be replicated across languages. This includes reflexive anaphora, where none of our non-English languages use quite the 5536 English French German Russian Mono Multi Mono Multi Mono Multi Mono Multi Simple agreement — -.02 — -.01 — +.02 +.02 — VP coordination (short) -.01 — +.01 +.14 -.02 -.01 -.03 -.01 VP coordination (long) -.03 +.01 +.04 -.02 -.06 +.07 +.04 +.02 Across subject rel. clause +.24 +.07 +.23 +.15 -.03 +.13 +.02 +.01 Within object rel. clause — -.04 — -.07 — -.02 -.03 Across object rel. clause +.09 +.02 +.05 +.03 +.01 +.09 +.01 Across prepositional phrase +.18 +.11 +.20 +.20 +.03 +.03 +.03 +.02 Average accuracy +.06 +.03 +.07 +.05 -.01 +.05 +.01 +.03 Table 4: Gains (positive, blue) and losses (negative, red) in LSTM LM accuracies on CLAMS after capitalizing the first character of each evaluation example. Differences are relative to the results in Table 1. Results are averaged across five random initializations. same syntactic structures as English (or even to each other) when employing reflexive verbs and pronouns. Some do not even have separate reflexive pronouns for third-person singular and plural distinctions (like French and German). Moreover, the English reflexive examples rely on the syncretism between past-tense verbs for any English person and number,10 whereas other languages often have different surface forms for different person and number combinations in the past tense. This would give the model a large clue as to which reflexive is correct. Thus, any results on reflexive anaphora would not be comparable cross-linguistically. See example (10) below for English, French, and German examples of the differences in reflexive syntax. (10) Reflexive anaphora across relative clause: a. The author that the guards like injured himself/*themselves. b. L’ The auteur author que that les the gardes guards aiment like s’ REFL.3 est has.3S bless´e injured.S.MASC / / *se REFL.3 sont have.3P bless´es. injured.P.MASC c. Der The Autor, author den that die the W¨achter guards m¨ogen, like verletzte injured.3S sich REFL.3 / / *verletzten injured.3P sich. REFL.3 B The Importance of Capitalization As discovered in Hao (2020), capitalizing the first character of each test example improves the per10For example, regardless of whether the subject is singular, plural, first- or third-person, etc., the past-tense of see is always saw. formance of language models in distinguishing grammatical from ungrammatical sentences in English. To test whether this finding holds crosslinguistically, we capitalize the first character of each of our test examples in all applicable languages. Hebrew has no capital-/lower-case distinction, so it is excluded from this analysis. Table 4 contains the results and relative gains or losses of our LSTM language models on the capitalized stimuli compared to the lowercase ones. For all languages except German, we see a notable increase in the syntactic ability of our models. For German, we see a small drop in overall performance, but its performance was already exceptionally high in the lowercase examples (perhaps due to its mandatory capitalization of all nouns). An interesting change is that morphological complexity no longer correlates with the overall syntactic performance across languages (ρ = 0.2). Perhaps the capitalization acts as an explicit cue to syntactic structure by delineating the beginning of a sentence, thus supplanting the role of morphological cues in aiding the model to distinguish grammatical sentences. Overall, it seems quite beneficial to capitalize one’s test sentences before feeding them to a language model if one wishes to improve syntactic accuracy. The explanation given by Hao (2020) is that The essentially only appears sentence-initially, thus giving the model clues as to which noun (typically the token following The) is the subject. Conversely, the has a more varied distribution, as it may appear before essentially any noun in subject or object position; thus, it gives the model fewer 5537 English French German Hebrew Russian Mono Multi Mono Multi Mono Multi Mono Multi Mono Multi Simple agreement .00 .00 .00 .00 .00 .02 .01 .01 .01 .07 VP coordination (short) .01 .00 .01 .05 .02 .00 .01 .01 .02 .02 VP coordination (long) .06 .08 .05 .09 .04 .07 .06 .06 .04 .06 Across subject rel. clause .06 .02 .05 .05 .04 .07 .03 .03 .03 .04 Within object rel. clause .01 .02 .01 .01 .03 .04 .01 .03 .04 .02 Across object rel. clause .05 .02 .01 .01 .09 .06 .01 .01 .03 .02 Across prepositional phrase .02 .02 .02 .02 .06 .03 .03 .04 .02 .01 Table 5: Standard deviation of LSTM LM performance across five random weight initializations for all languages and stimulus types. cues as to which noun agrees with a given verb. This would explain the larger score increase for English and French (which employ articles in a similar fashion in CLAMS), as well as the milder increase for Russian (which does not have articles). However, it does not explain the decrease in performance on German. A deeper investigation of this trend per-language could reveal interesting trends about the heuristics employed by language models when scoring syntactically complex sentences. C Performance Variance Previous work has found the variance of LSTM performance in syntactic agreement to be quite high (McCoy et al., 2018; Kuncoro et al., 2019). In Table 5, we provide the standard deviation of accuracy over five random initializations on all CLAMS languages and stimulus types. This value never exceeds 0.1, and tends to only exceed 0.05 in more difficult syntactic contexts. For syntactic contexts without attractors, the standard deviation is generally low. In more difficult cases like Across a Subject Relative Clause and Long VP Coordination, we see far higher variance. In Across an Object Relative Clause, however, the standard deviation is quite low despite this being the case on which language models struggled most; this is likely due to the consistently at-chance performance on this case, further showcasing the difficulty of learning syntactic agreements in such contexts. On cases where German tended to deviate from the general trends seen in other languages, we see our highest standard deviations. Notably, the performance of German LMs in Across an Object Relative Clause and Across a Prepositional Phrase varies far more than other languages for the same stimulus type. D Evaluation Set Sizes Here, we describe the size of the various evaluation set replications. These will differ for the LSTMs, BERT, and mBERT, as the two latter models sometimes do not contain the varied focus verb for a particular minimal set. Table 6 displays the number of minimal sets per language and stimulus type (with animate nouns only) in our evaluation sets; the total number of sentences (grammatical and ungrammatical) is the number of minimal sets times two. These are also the number of examples that the LSTM is evaluated on. We do not include inanimate-noun cases in our evaluations for now, since these are much more difficult to replicate cross-linguistically. Indeed, grammatical gender is a confounding variable which— according to preliminary experiments—does have an effect on model performance. Additionally, Hebrew has differing inflections depending on the combination of the subject and object noun genders, which means that we rarely have all needed inflections in the vocabulary. We have differing numbers of examples perlanguage for similar cases. The reasoning for this is two-fold: (1) direct translations do not exist for all English items in the evaluation set of Marvin and Linzen (2018), so we often must decide between multiple possibilities. In cases where there are two translations of a noun that could reasonably fit, we use both; if we have multiple possibilities for a given verb, we use only one—the most frequent of the possible translations. If no such translation exists for a given noun or verb, we pick a different word that is as close to the English token is possible in the same domain. Reason (2) is that many of the nouns and verbs in the direct translation of the evaluation sets do not appear in the language models’ vocabularies. Thus, 5538 English French German Hebrew Russian Simple agreement 140 280 140 140 280 VP coordination (short) 840 980 980 980 980 VP coordination (long) 400 500 500 500 500 Across subject rel. clause 11200 11200 11200 11200 10080 Within object rel. clause 11200 11200 11200 11200 11200 Across object rel. clause 11200 11200 11200 11200 11200 Across prepositional phrase 16800 14000 12600 5600 5880 Table 6: Number of minimal sets for all languages and stimulus types using animate nouns. English Mono Multi French German Hebrew Russian SUBJECT-VERB AGREEMENT Simple agreement 120 80 40 100 20 80 In a sentential complement 1440 960 VP coordination (short) 720 480 140 700 140 280 VP coordination (long) 400 240 100 300 100 0 Across subject rel. clause 9600 6400 1600 5406 1600 2880 Within object rel. clause 15960 5320 0 0 0 0 Within object rel. clause (no that) 15960 5320 Across object rel. clause 19680 16480 1600 5620 1600 3200 Across object rel. clause (no that) 19680 16480 Across prepositional phrase 19440 14640 2000 9000 800 1680 REFLEXIVE ANAPHORA Simple 280 280 In a sentential complement 3360 3360 Across a rel. clause 22400 22400 Table 7: Number of minimal sets used by BERT (English monolingual only) and mBERT for evaluation. The number of monolingual English examples is the same as in Goldberg (2019). Hyphens indicate non-replicable stimulus types, and 0 indicates that all focus verbs for a given stimulus type were out-of-vocabulary. some nouns or focus verbs would effectively be <unk>s if left in, rendering that particular example unusable. In such cases, if a given noun/verb is not the vocabulary, we pick a similar noun from the same domain if one exists; if a similar item does not exist in the vocabulary, we choose some common noun in that language’s vocabulary that has not already been used in the evaluation set. We use a similar process to add new verbs, but sometimes, third-person singular and plural inflections of similar verbs did not exist in the vocabulary. In such cases, we used a similar verb if possible (e.g., ‘dislike’ would be reasonably similar in distribution and meaning to ‘hate’), but if no such similar verb exists in the vocabulary, we do not replace it. A similar process is used for closed classes like prepositions: if no sufficient replacement exists in the vocabulary, it is not replaced. Table 7 contains the number of examples used by BERT and mBERT to calculate examples. Important to note is that for these evaluations, we use stimulus types containing both animate and inanimate nouns to better match Goldberg (2019)’s experimental setup; this is why we have more examples for English in this table than for the LSTM evaluations. Including or excluding inanimate nouns was found to make no significant difference in the final scores (for BERT or mBERT) regardless, since the performance of the model never diverges by more than 0.02 for animate vs. inanimate stimulus types. The variation in the number of examples across languages is due to many of the focus verbs not being in mBERT’s vocabulary. We see the lowest 5539 coverage in general for Hebrew and (surprisingly) French; this is likely due to Hebrew script being a rarer script in mBERT and due to many of French’s most common tokens being split into subwords, respectively. Russian also has relatively low coverage, having 0 in-vocabulary target verbs for long VP coordination. None of our languages except English had any target verbs for Within an Object Relative Clause.
2020
490
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5540–5552 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5540 Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? Peter Hase and Mohit Bansal UNC Chapel Hill [email protected], [email protected] Abstract Algorithmic approaches to interpreting machine learning models have proliferated in recent years. We carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability, simulatability, while avoiding important confounding experimental factors. A model is simulatable when a person can predict its behavior on new inputs. Through two kinds of simulation tests involving text and tabular data, we evaluate five explanations methods: (1) LIME, (2) Anchor, (3) Decision Boundary, (4) a Prototype model, and (5) a Composite approach that combines explanations from each method. Clear evidence of method effectiveness is found in very few cases: LIME improves simulatability in tabular classification, and our Prototype method is effective in counterfactual simulation tests. We also collect subjective ratings of explanations, but we do not find that ratings are predictive of how helpful explanations are. Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability across a variety of explanation methods and data domains. We show that (1) we need to be careful about the metrics we use to evaluate explanation methods, and (2) there is significant room for improvement in current methods.1 1 Introduction Interpretable machine learning is now a widely discussed topic (Rudin, 2019; Doshi-Velez and Kim, 2017; Lipton, 2016; Gilpin et al., 2018). While survey papers have not converged on definitions of “explainable” or “interpretable,” there are some common threads in the discourse. Commentators observe that interpretability is useful for 1We make all our supporting code, data, and models publicly available at: https://github.com/peterbhase/ InterpretableNLP-ACL2020 achieving other model desiderata, which may include building user trust, identifying the influence of certain variables, understanding how a model will behave on given inputs, and ensuring that models are fair and unbiased. In their review, Doshi-Velez and Kim (2017) outline an approach to measuring interpretability. They describe two human-subject tasks that test for a particularly useful property: simulatability. A model is simulatable when a person can predict its behavior on new inputs. This property is especially useful since it indicates that a person understands why a model produces the outputs it does. The first of the two tasks is termed forward simulation: given an input and an “explanation,” users must predict what a model would output for the given input. The second is counterfactual simulation: users are given an input, a model’s output for that input, and an “explanation” of that output, and then they must predict what the model will output when given a perturbation of the original input. The explanation itself is algorithmically generated by a method for interpreting or explaining a model. Simulation tests have been carried out before, but no study to date has isolated the effect of explanations on simulatability (Ribeiro et al., 2018; Chandrasekaran et al., 2018; Nguyen, 2018; Bang et al., 2019). We carry out simulation tests that are the first to incorporate all of the following design choices: (1) separating explained instances from test instances, so explanations do not give away the answers, (2) evaluating the effect of explanations against a baseline of unexplained examples, (3) balancing data by model correctness, so users cannot succeed by guessing the true label, and (4) forcing user predictions on all inputs, so performance is not biased toward overly specific explanations. We display our study design in Figure 1. We provide results from high-quality human 5541 (Post) Prediction Phase Learning Phase (w/ explanations) Learning Phase (Pre) Prediction Phase Simulation Forward Simulation Counterfactual (Pre) Prediction Phase : Human simulation : Model prediction : Explanation : Counterfactual input : Counterfactual model prediction (Post) Prediction Phase Explanation Effect Post Sim. Accuracy Pre Sim. Accuracy Figure 1: Forward and counterfactual simulation test procedures. We measure human users’ ability to predict model behavior. We isolate the effect of explanations by first measuring baseline accuracy, then measuring accuracy after users are given access to explanations of model behavior. In the forward test, the explained examples are distinct from the test instances. In the counterfactual test, each test instance is a counterfactual version of a model input, and the explanations pertain to the original inputs. user tests (with over 2100 responses) that include both forward and counterfactual simulation tasks. Through these tests, we measure explanation effectiveness for five methods across text and tabular classification tasks. Our evaluation includes two existing explanation techniques, LIME and Anchor (Ribeiro et al., 2016, 2018), and we translate two other explanation methods from image recognition models to work with our textual and tabular setups. The first of these is a latent space traversal method, which we term the Decision Boundary approach (Joshi et al., 2018; Samangouei et al., 2018), and the second is a case-based reasoning method, which we term the Prototype method (Chen et al., 2019). The final method is a novel Composite approach that combines complementary explanations from each method. Lastly, we also collect subjective, numerical user ratings of explanation quality. Our key findings are: 1. LIME improves forward and counterfactual simulatability in our tabular classification task. 2. Prototype improves counterfactual simulatability across textual and tabular data domains. 3. No method definitively improves forward and counterfactual simulatability together on the text task, though our Prototype and Composite methods perform the best on average. 4. It appears that users’ quality ratings of explanations are not predictive of how helpful the explanations are with counterfactual simulation. 5. While users rate Composite explanations as among the best in quality, these combined explanations do not overtly improve simulatability in either data domain. 2 Background and Related Work 2.1 What Does “Interpretable” Mean? Survey papers use key terms in varying ways. Rudin (2019) draws a distinction between interpretability and explainability, suggesting that a model is interpretable if it performs computations that are directly understandable. Post-hoc explanations, on the other hand, are potentially misleading approximations of the true computations. Gilpin et al. (2018) also distinguish between the two concepts, though they define them differently. In this paper, we do not distinguish between interpretability and explainability. Rather, we adopt the conceptual framework of Doshi-Velez and Kim (2017), who consider interpretability in terms of downstream desiderata one can assess models with respect to. Our terminology is as follows: we will say that explanation methods may improve the interpretability of a model, in the sense that an interpretable model is simulatable. 2.2 Explanation Methods Several taxonomies have been proposed for categorizing methods for interpretability. We organize methods below into the categories of: feature importance estimation, case-based reasoning, and latent space traversal. Feature Importance Estimation. Feature importance estimates provide information about how the model uses certain features. Most prominent among these methods are the gradient-based approaches first introduced for vision by Simonyan et al. (2014), which Li et al. (2016) show may 5542 be translated for use with text data. These approaches have since been demonstrated to sometimes behave in counterintuitive ways (Adebayo et al., 2018; Kim et al., 2018). A number of alternative methods have been proposed for quantifying feature importance across data domains (Kim et al., 2018; Lundberg and Lee, 2017; Sundararajan et al., 2017). In our study, we choose to evaluate two domain-agnostic approaches, LIME and Anchor (Ribeiro et al., 2016, 2018). These methods use simple models, i.e. sparse linear models and rule lists, to approximate complex model behavior locally around inputs. They show the estimated effects of directly interpretable features on the model’s output. For these methods, what is “local” to an input is defined in a domain-specific manner via a perturbation distribution centered on that input. Case-based Reasoning. Prototype models classify new instances based on their similarity to other known cases. Two works on prototype models for computer vision introduced neural models that learn prototypes corresponding to parts of images (Chen et al., 2019; Hase et al., 2019). These prototypes are used to produce classifier features that are intended to be directly interpretable. Latent Space Traversal. These methods traverse the latent space of a model in order to show how the model behaves as its input changes. In a classification setting, crossing the decision boundary may reveal necessary conditions for a model’s prediction for the original input. Several methods exist for vision models (Joshi et al., 2018; Samangouei et al., 2018). To our knowledge no such approach exists for discriminative models of text and tabular data, so we develop a simple method for these kinds of models (described in Section 3.4). 2.3 Evaluating Interpretability Here we discuss works involving automatic and human evaluations of interpretability, as well as how we improve on past simulation test design. While human evaluations are useful for evaluating many aspects of interpretability, we restrict our discussion to works measuring simulatability. Improving Forward Test Design. Forward simulation tasks have been implemented in many different forms, and there is a serious need for consensus on proper procedure here. Doshi-Velez and Kim (2017) originally propose that users predict model behavior, given an input and an explanation. With many explanation methods, this is a trivial task because the explanations directly reveal the output. For example, LIME gives a predicted probability that indicates the model behavior with high likelihood. We make a number of experimental design choices that give us more reliable estimates of method effectiveness than past studies. (1) We separate the explained instances from the test instances, to prevent explanations from giving away the answers. In three studies, the same data points were used as both explanation and prediction items (Nguyen, 2018; Chandrasekaran et al., 2018; Bang et al., 2019). (2) We evaluate the effect of explanations against a baseline where users see the same example data points without explanations. No prior evaluation includes this control. (3) Two choices further distinguish our test from that of Ribeiro et al. (2018). We balance data by model correctness, so users cannot succeed simply by guessing the true label, and we force user predictions on every input, so our metrics do not favor overly niche explanations. Counterfactual Simulatability. Counterfactual simulatability has, to our knowledge, never been measured for machine learning models. While Doshi-Velez and Kim (2017) propose asking users to edit inputs in order to change the model outputs, we instead ask users to predict model behavior on edited versions of data points, as this approach is more scalable than soliciting creative responses. Relation to Automatic Tests. Prior works have proposed automatic metrics for feature importance estimates (Nguyen, 2018; Hooker et al., 2019; DeYoung et al., 2020). Typically these operate by checking that model behavior follows reasonable patterns on counterfactual inputs constructed using the explanation, e.g., by masking “important” features and checking that a class score drops. Whereas automatic metrics define appropriate model behavior in advance for counterfactual instances generated by a fixed schema, we present a random counterfactual to a human and elicit their prediction of model behavior for that instance. This allows for human validation of model behavior in a broader range of input scenarios than an automatic procedure, where human expectations are given in response to diverse and concrete examples rather than dictated in advance. Subjective Ratings. Hutton et al. (2012) measure user judgments of whether word importance measures explain model behavior in a text classi5543 LIME 0 1 +.05 +.04 -.06 -.11 -.18 .24 -.02 -.26 charms modest dismissed occasional despite Sum of Words Baseline Est. Probability Negative Positive Despite modest aspirations its occasional charms are not to be dismissed. Input, Label, and Model Output Step 2 modest impressive Evidence Margin: +0.32 Decision Boundary Evidence Margin: -5.21 Step 0 Step 1 occasional rare Evidence Margin: -3.00 Despite impressive aspirations its rare charms are not to be dismissed. Anchor Prototype Most similar prototype: Important words: (none selected) Similarity score: 9.96 out of 10 Routine and rather silly. Figure 2: Explanation methods applied to an input from the test set of movie reviews. fication setting. Our rating task is thus similar to theirs; our changes are that we evaluate with a Likert scale rather than forced ranking, using explanation techniques for neural models rather than word importance estimates from a naive Bayes classifier. In another study, users judged image classification explanations on a Likert scale ranging from “no explanation” to “concise explanation” (Bang et al., 2019). Whereas this scale focuses on conciseness, we ask users to rate how explanations reveal reasons for model behavior. 3 Explanation Methods In this section, we describe the explanation methods. Example explanations for a test movie review are shown in Figure 2. We limit our discussion of LIME and Anchor, since details for these methods can be found in the original papers. Note that LIME, Anchor, and our Decision Boundary method can be used with arbitrary blackbox models. The Prototype method is itself a neural model that also produces an explanation. 3.1 LIME Ribeiro et al. (2016) present LIME as a local linear approximation of model behavior. With a userspecified feature space, a linear model is fit to the blackbox outputs on samples from a distribution around an input. We set the number of features to use to 5, and we take class probabilities as our model output. When showing LIME explanations to users, we give them the selected features with estimated weights, the model intercept, the sum of model weights, and the predicted model output. 3.2 Anchor Ribeiro et al. (2018) introduce a method for learning rule lists that predict model behavior with high confidence. With samples from a distribution around an input, they use a PAC learning approach to obtain a rule list. When the rules apply to an input, there is a high probability it will receive the same prediction as the original. The feature space of the rule list is specified by the user. As in the original work, we use individual tokens for our text data, and we use the same learning parameters for each Anchor explanation. 3.3 Prototype Model Prototype models have previously been used for interpretable computer vision (Chen et al., 2019; Hase et al., 2019). We develop a prototype model for use with text and tabular classification tasks. In our model, a neural network g maps inputs to a latent space, and the score of class c is: f(xi)c = max pk∈Pc a(g(xi), pk) where a is a similarity function for vectors in the latent space, and Pc is the set of protoype vectors for class c. We choose the Gaussian kernel for our similarity function: a(zi, pk) = e−||zi−pk||2. The model predicts inputs to belong to the same class as the prototype they’re closest to in the latent space. Unlike in Chen et al. (2019), we take the max activation to obtain concise explanations. In lieu of image heatmaps, we provide feature importance scores. What distinguishes these scores from those of standard feature importance estimates is that the scores are prototype-specific, rather than class-specific. We choose a feature omission approach for estimation. With text data, omission is straightforward: for a given token, we take the difference in function output between the original input and the input with that token’s embedding zeroed out. In the tabular domain, however, variables can never take on meaningless values. To circumvent this problem, we take the difference between the function value at the original 5544 input and the expected function value with a particular feature missing. The expectation is computed with a distribution over possible values for a missing feature, which is provided by a multinomial logistic regression conditioned on the remaining covariates. When presenting prototype explanations, we provide users with the predicted class score, most similar prototype, and top six feature importance scores, provided that score magnitudes meet a small threshold. In the explanation in Figure 2, no scores meet this threshold. We set the size of Pc to 40 for our text classification task and 20 for our tabular classification task. For further training and feature importance details, see the Appendix. 3.4 Decision Boundary Joshi et al. (2018) and Samangouei et al. (2018) introduce techniques for traversing the latent spaces of generative image models. Their methods provide paths that start at input data points and cross a classifier’s decision boundary. Such methods may help users see the necessary conditions for the model prediction. We provide a simple method for traversing the latent space of a discriminative classifier (see example in Figure 2). Our algorithm first samples around the original input to get instances that cross the decision boundary. A counterfactual input is chosen from these by taking the instance with the fewest edited features (tokens or variables), while breaking ties using the Euclidean distance between latent representations. Lastly, we provide a path between inputs by greedily picking the edit from the remaining edits that least changes the model’s evidence margin, which is the difference between positive and negative class scores. The explanations we present to users include the input, steps to the counterfactual input, and evidence margin at each step. When the path is longer than four steps, we show only the last four. 3.5 Composite Approach We hypothesize that the above explanations provide complementary information, since they take distinct approaches to explaining model behavior. Hence, we test a Composite method that combines LIME and Anchor with our decision boundary and prototype explanations. We make two adjustments to methods as we combine them. First, we show only the last step of each decision boundary explanation, i.e., the set of changes that flips the prediction. Second, we train our prototype model with its feature extraction layers initialized from the neural task model and thereafter fixed. We do so since we are interested in explaining the task model behavior, and this tactic yields prototypes that reflect characteristics of the task model. 4 Experimental Design In this section, we describe our datasets, task models, user pool, and experimental design. 4.1 Data and Task Models We perform experiments for classification tasks with text and tabular data. The first dataset consists of movie review excerpts (Pang et al., 2002). The dataset includes 10,662 reviews with binary sentiment labels, which we split into partitions of 70%, 10%, and 20% for the train, validation, and test sets, respectively. We use the same neural architecture as in Yang et al. (2016), limited to use with single sentences. The second dataset is the tabular Adult data from the UCI ML repository (Dua and Graff, 2017). This dataset contains records of 15,682 individuals, and the label is whether their annual income is more than $50,000. We use the same data processing scheme and neural network architecture as Ribeiro et al. (2018). Model accuracies are given in the Appendix. 4.2 User Pool We gathered over 2100 responses via in-person tests with 32 trained undergraduates who had taken at least one course in computer science or statistics.2 Each user was randomly assigned to one of the ten conditions corresponding to our dataset-method pairs. Once each condition had at least 3 full tests collected, we allocated remaining participants to the Composite method. In order to ensure high quality data, we employed a screening test to check for user understanding of their explanation method and test procedure. Two participants were screened out due to low scores. We also excluded data from a user whose task completion time was extremely low. We paid all users $15 USD per hour. Ten users were tested again with a new dataset and explanation method, giving us a total of 39 user tests. Some users had to exit the experiment before finishing all of the tasks; 2We require this advanced background because explanations rely on conditional probabilities, approximations of probabilities, and other quantitative concepts. 5545 Text Tabular Method n Pre Change CI p n Pre Change CI p User Avg. 1144 62.67 7.07 1022 70.74 6.96 LIME 190 0.99 9.58 .834 179 11.25 8.83 .014 Anchor 181 1.71 9.43 .704 215 5.01 8.58 .234 Prototype 223 3.68 9.67 .421 192 1.68 10.07 .711 DB 230 −1.93 13.25 .756 182 5.27 10.08 .271 Composite 320 3.80 11.09 .486 254 0.33 10.30 .952 Table 1: Change in user accuracies after being given explanations of model behavior, relative to the baseline performance (Pre). Data is grouped by domain. CI gives the 95% confidence interval, calculated by bootstrap using n user responses, and we bold results that are significant at a level of p < .05. LIME improves simulatability with tabular data. Other methods do not definitively improve simulatability in either domain. Forward Simulation Counterfactual Simulation Method n Pre Change CI p n Pre Change CI p User Avg. 1103 69.71 6.16 1063 63.13 7.87 LIME 190 5.70 9.05 .197 179 5.25 10.59 .309 Anchor 199 0.86 10.48 .869 197 5.66 7.91 .140 Prototype 223 −2.64 9.59 .566 192 9.53 8.55 .032 DB 205 −0.92 11.87 .876 207 2.48 11.62 .667 Composite 286 −2.07 8.51 .618 288 7.36 9.38 .122 Table 2: Change in user accuracies after being given explanations of model behavior, relative to the baseline performance (Pre). Data is grouped by simulation test type. CI gives the 95% confidence interval, calculated by bootstrap using n user responses. We bold results that are significant at the p < .05 level. Prototype explanations improve counterfactual simulatability, while other methods do not definitively improve simulatability for one test. for data analysis purposes, we consider only task items answered in both Pre and Post test phases. 4.3 Simulation Tests We collect 1103 forward test and 1063 counterfactual test responses in total. Forward Simulation. This test is represented in Figure 1. The test is split into four phases: a learning phase, a Pre prediction phase, a learning phase with explanations, and a Post prediction phase. To begin, users are given 16 examples from the validation set with labels and model predictions but no explanations. Then they must predict the model output for either 16 or 32 new inputs, with the number chosen based on user time constraints. Users are not allowed to reference the learning data while in prediction phases. Next, they return to the same learning examples, now with explanations included. Finally, they predict model behavior again on the same instances from the first prediction round. By design, any improvement in user performance in the Post prediction phase is attributable only to the addition of explanations. We show a screenshot of the user testing interface in the Appendix. Counterfactual Simulation. Represented in Figure 1, this test requires users to predict how a model will behave on a perturbation of a given data point. The test consists of Pre and Post prediction rounds, where the only difference between them is the addition of explanations. In both rounds, we provide users with the same 32 inputs from the test dataset (or 16 due to time constraints), their ground truth labels, the model’s prediction, and a perturbation of the input. See the Appendix for a description of the perturbation generation algorithm. Users then predict model behavior on the perturbations. In the Post round, users are given the same data, but they are also equipped with explanations of the model predictions for the original inputs. Therefore, any improvement in performance is attributable to the addition of explanations. Data Balancing. One critical aspect of our experimental design is our data balancing. We aim to prevent users from succeeding on our tests simply by guessing the true label for every instance. To do so, we ensure that true positives, false positives, true negatives, and false negatives are equally represented in the inputs. Likewise, for the counterfactual test, we sample perturbations such that for any instance, there is a 50% chance that the pertur5546 Text Ratings Tabular Ratings Method n µ CI σ n µ CI σ LIME 144 4.78 1.47 1.76 130 5.36 0.63 1.70 Anchor 133 3.86 0.59 1.79 175 4.99 0.71 1.38 Prototype 191 4.45 1.02 2.08 144 4.20 0.82 1.88 DB 224 3.85 0.60 1.81 144 4.61 1.14 1.86 Composite 240 4.47 0.58 1.70 192 5.10 1.04 1.42 Table 3: User simulatability ratings by data domain, on a scale of 1 to 7. The mean and standard deviation for ratings are given by µ and σ. The 95% confidence interval for the mean is given by CI, as calculated by bootstrap. bation receives the same prediction as the original input. We confirm user understanding of the data balancing in our screening test. Data Matching. Within each data domain, all users receive the same data points throughout the experiment. This design controls for any differences in the data across conditions and users, though this does reduce the information added by each test, making our confidence intervals relatively wide given the same sample size. We also match data across prediction rounds in order to control for the influence of particular data points on user accuracy between the Pre and Post phases. 4.4 Subjective Simulatability Ratings Users see explanations in two phases of the tests: the second learning phase in the forward test, and the Post phase of the counterfactual test. In these stages, we ask users to give subjective judgments of the explanations. They rate each method on a 7 point Likert scale, in response to the question, “Does this explanation show me why the system thought what it did?” We explain that users should give higher ratings when the explanation shows the reasons for a model prediction, regardless of whether or not the prediction is correct. 5 Results We report data from a total of 2166 responses from 39 user tests. Each test is for a method and data domain pair, and contains either 16 or 32 task items, with some missingness due to users exiting the study early. In the results to follow, we use the term Change to refer to our estimate of explanation effectiveness: the difference in user accuracy across prediction phases in simulation tests. We perform two-sided hypothesis tests for this quantity by a block bootstrap, resampling both users and unique task items within each condition (Efron and Tibshirani, 1994). In addition, since users complete the first prediction round in either simulation test without access to explanations, we estimate the mean Pre accuracy for each method with a random effects model. This allows us to share information across methods to yield more precise estimates of test performance. Below, we analyze our experimental results and answer three questions: 1) Do explanations help users? 2) How do users rate explanations? 3) Can users predict explanation effectiveness? 5.1 Do explanations help users? We show simulation test results in Tables 1 and 2. In Table 1, we group results by data domain, and in Table 2, we group results by test type. Our principal findings are as follows: 1. LIME with tabular data is the only setting where there is definitive improvement in forward and counterfactual simulatability. With no other method and data domain do we find a definitive improvement across tests. 2. Even with combined explanations in the Composite method, we do not observe definitive effects on model simulatability. 3. Interestingly, our prototype method does reliably well on counterfactual simulation tests in both data domains, though not forward tests. It may be that the explanations are helpful only when shown side by side with inputs. These results suggest that: (1) many explanation methods may not noticeably help users understand how models will behave, (2) methods that are successful in one domain might not work equally well in another, (3) combining information from explanations does not result in overt improvements in simulatability. Yet, given our wide confidence intervals, these results should be considered cautiously. It may also be that other methods do in fact improve simulatability, but we have not precisely estimated this. For example, our Prototype and Composite methods do the best on average with text data, though we cannot be confident that they improve simulatability. Note that estimates of explanation effectiveness 5547 could be influenced by users simply regressing to the mean accuracy between prediction rounds. We find that our primary results are not skewed by this phenomenon: the highest estimates of Change in each data domain and test type come from conditions where mean Pre test performance was either above the overall mean or, in one case, within 1.15 percentage points. This potential problem is further mitigated by our random effects model of Pre test performance, which pulls low Pre test means toward the overall mean. 5.2 How do users rate explanations? It seems that, as intended, users rated explanations based on quality rather than model correctness, as we observe no significant difference in ratings grouped by model correctness (table in Appendix). In Table 3, we show user ratings for each method and data domain. We observe that: 1) ratings are generally higher for tabular data, relative to text data, 2) the Composite and LIME methods receive the highest ratings in both domains, and 3) variance in explanation ratings is quite high, relative to their scale. 5.3 Can users predict explanation effectiveness? We answer this question by measuring how explanation ratings relate to user correctness in the Post phase of the counterfactual simulation test. In this phase, users rate explanations of model predictions for an original input and predict model behavior for a perturbation of that input. If ratings of explanation quality are a good indicator of their effectiveness, we would expect to see that higher ratings are associated with user correctness. We do not find evidence that explanation ratings are predictive of user correctness. We estimate the relationship via logistic regression with user correctness and ratings. We test models with both absolute ratings and ratings normalized within users, since ratings lack an absolute scale between users. With 640 text data points, we estimate with 95% confidence that moving from a rating of 4 to 5 is associated with between a −2.9 and 5.2 percentage point change in expected user correctness. Using normalized ratings, we find that moving from the mean explanation rating to the first standard deviation is associated with between a −3.9 and 12.2 percentage point change. With 515 tabular data points, we estimate that a change in rating from 4 to 5 is associated with between a −2.6 and 5.3 percentage point change in expected user correctness. Of course, we have not shown that there is no association. Yet it’s important to note that if there is no relationship between user ratings and simulatability, then simply querying humans about explanation quality will not provide a good indication of true explanation effectiveness. 6 Qualitative Analysis When do explanations succeed at improving user accuracy, and when do they fail at doing so? Below, we present example counterfactual test items, and we analyze how the explanations may have pointed to the reasons for model behavior. 6.1 Explanation Success Example For the example below, 5 of 6 Post test responses for Prototype and LIME were correct that the model output did not change for the counterfactual, up from 3 of 6 in the Pre test. Original (ˆy = pos): “Pretty much sucks, but has a funny moment or two.” Counterfactual (ˆyc = pos): “Mostly just bothers, but looks a funny moment or two.” LIME identifies “funny” and “moment” as positive words, with weights adding to 1.04 after including the baseline. The notable negative word is “sucks” (w = −.23), which changes to a similar word (“bothers”). All together, LIME suggests the prediction would stay the same since the positive words are unaffected and the only important negative word has a similar substitute. The Prototype model gives the most activated prototype: “Murders by Numbers isn’t a great movie, but it’s a perfectly acceptable widget.” It identifies “but” and “funny” as important words for the prototype’s activation. The counterfactual is still similar to the prototype in key ways, suggesting the prediction would not change. 6.2 Explanation Failure Example For the item below, only 7 of 13 responses were correct after seeing explanations, with no method improving correctness relative to the Pre test accuracy. Users needed to predict that the model prediction changed to negative for the counterfactual. Original (ˆy = pos): “A bittersweet film, simple in form but rich with human events.” Counterfactual (ˆyc = neg): “A teary film, simple in form but vibrant with devoid events.” 5548 Anchor gives one word as a condition for the original positive prediction: “bittersweet.” But what happens when “bittersweet” changes to “teary”? The Anchor explanation does not actually apply to this counterfactual scenario, as its probabilistic description of model behavior is conditioned on the word bittersweet being present. LIME gives five words, each with small weights (|w| < .04), while the baseline is .91. This suggests that LIME has failed to identify features of the input that are necessary to the model output. Among these five words are the three that changed between sentences, but we would not suspect from their weights that the changes made in the counterfactual would flip the model output. Decision Boundary gives a counterfactual input with a negative prediction: “A sappy film, simple in link but unique with human events.” However, it is difficult to tell whether this counterfactual sentence is similar in decision-relevant ways to the proposed counterfactual sentence. The Prototype model gives the activated prototype for the original prediction: “Watstein handily directs and edits around his screenplay’s sappier elements...and sustains Off the Hook’s buildup with remarkable assuredness for a first-timer.” No important words are selected. We are left without a clear sense of why this was the most similar prototype and what circumstances would lead to the model output changing. These examples reveal areas for improvement in explanations. Better methods will need to distinguish between sufficient and necessary factors in model behavior and clearly point to the ways in which examples share decision-relevant characteristics with new inputs. Further, they must do so in the appropriate feature space for the problem at hand, especially for models of complex data. 7 Discussion Forward Tests Stretch User Memory. We show users 16 examples during learning phases but do not allow them to reference the learning data during prediction phases. Reasonably, some users reported that it was difficult to retain insights from the learning phase during later prediction rounds. Generating Counterfactual Inputs. It may be difficult to algorithmically construct counterfactual inputs that match the true data distribution, especially when seeking to change the model prediction. Our text counterfactuals are regularly out of the data distribution, in the sense that no real movie review would exhibit the word choice they do. We still consider these inputs to be of interest, for the reason that a model will handle such inputs in some manner, and we aim to assess all possible model behaviors in our analysis. Fair Comparison of Explanation Methods. In our forward simulation treatment phases, we provide users with 16 explained instances and allow them to read at their own pace. We control for the number of data points between methods, but one could instead control for user exposure time or computation time of explanation generation. Further, for LIME and Anchor, there are approaches for efficiently covering the space of inputs with a limited budget of examples (Ribeiro et al., 2018). We opt not to use them since 1) they are not applicable to the Decision Boundary and Prototype methods, which lack a similar notion of coverage, and 2) it is not clear whether these approaches are useful for text data. It may be that when using such approaches, LIME and Anchor perform better on forward simulation tasks. 8 Conclusion Simulatability metrics give a quantitative measure of interpretability, capturing the intuition that explanations should improve a person’s understanding of why a model produces its outputs. In this paper, we evaluated five explanation methods through simulation tests with text and tabular data. These are the first experiments to fully isolate the effect of algorithmic explanations on simulatability. We find clear improvements in simulatability only with LIME for tabular data and our Prototype method in counterfactual tests. It also appears that subjective user ratings of explanation quality are not predictive of explanation effectiveness in simulation tests. These results suggest that we must be careful about the metrics we use to evaluate explanation methods, and that there is significant room for improvement in current methods. Acknowledgments We thank the reviewers for their helpful feedback and our study users. This work was supported by NSF-CAREER Award 1846185, DARPA MCS Grant N66001-19-2-4031, a Royster Society PhD Fellowship, and Google and AWS cloud compute awards. The views contained in this article are those of the authors and not of the funding agency. 5549 References Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. In Proceedings of the 32nd International Conference on Neural Information Processing Systems. Seojin Bang, Pengtao Xie, Heewook Lee, Wei Wu, and Eric Xing. 2019. Explaining a black-box using Deep Variational Information Bottleneck Approach. arXiv:1902.06918 [cs, stat]. ArXiv: 1902.06918. Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, and Devi Parikh. 2018. Do explanations make VQA models more predictable to a human? In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1036–1042, Brussels, Belgium. Association for Computational Linguistics. Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, and Cynthia Rudin. 2019. This Looks Like That: Deep Learning for Interpretable Image Recognition. In Proceedings of the 33rd International Conference on Neural Information Processing Systems. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. Eraser: A benchmark to evaluate rationalized nlp models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020). Finale Doshi-Velez and Been Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning. arXiv:1702.08608 [cs, stat]. ArXiv: 1702.08608. Dheeru Dua and Casey Graff. 2017. UCI machine learning repository. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining Explanations: An Overview of Interpretability of Machine Learning. The 5th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2018). Peter Hase, Chaofan Chen, Oscar Li, and Cynthia Rudin. 2019. Interpretable Image Recognition with Hierarchical Prototypes. In Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing (HCOMP-19), pages 32–40. Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. 2019. A benchmark for interpretability methods in deep neural networks. In Proceedings of the 33rd International Conference on Neural Information Processing Systems. Amanda Hutton, Alexander Liu, and Cheryl Martin. 2012. Crowdsourcing Evaluations of Classifier Interpretability. In AAAI Spring Symposium: Wisdom of the Crowd, pages 21–26. Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, and Joydeep Ghosh. 2018. xGEMs: Generating Examplars to Explain Black-Box Models. arXiv:1806.08867 [cs, stat]. ArXiv: 1806.08867. Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. 2018. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In Proceedings of the 35th International Conference on Machine Learning. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and Understanding Neural Models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681–691. Association for Computational Linguistics. Zachary C. Lipton. 2016. The Mythos of Model Interpretability. 2016 ICML Workshop on Human Interpretability in Machine Learning. Scott M Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems 30, pages 4765–4774. Dong Nguyen. 2018. Comparing Automatic and Human Evaluation of Local Explanations for Text Classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1069– 1078. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing - EMNLP ’02, volume 10, pages 79–86. Association for Computational Linguistics. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. ”Why Should I Trust You?”: Explaining the Predictions of Any Classifier. Knowledge Discovery and Data Mining (KDD). Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High-precision modelagnostic explanations. In AAAI Conference on Artificial Intelligence. Cynthia Rudin. 2019. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1:206–215. 5550 Pouya Samangouei, Ardavan Saeedi, Liam Nakagawa, and Nathan Silberman. 2018. ExplainGAN: Model Explanation via Decision Boundary Crossing Transformations. In ECCV 2018. Springer International Publishing. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Workshop at International Conference on Learning Representations. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical Attention Networks for Document Classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Association for Computational Linguistics. A Appendix A.1 Method Implementations Explanation methods. For our tabular data, we use the implementations of Anchor and LIME provided in the code for Ribeiro et al. (2018). We implement our prototype and decision boundary methods. With text data, we use the implementation of Anchor provided by Ribeiro et al. (2018), and for LIME we use the code provided with Ribeiro et al. (2016). As before, we implement our prototype and decision boundary methods. Text and Tabular Models. We train neural networks for both tasks as follows: for our tabular task model, we use a neural network with two hidden layers, each of width 50, as Ribeiro et al. (2018) do. For our text task model, we use a BiLSTM of the kind introduced by Yang et al. (2016), who reported state of the art results on a number of sentiment analysis tasks. Since their network is designed for classification of documents, we limit our network components to those relevant to classification of single sentences. We build our prototype models on top of the feature extractor layers of each of these models, meaning that we only replace the final classifier layer of the neural task model with a prototype layer. Accuracies for each model are shown in Table 4. The task models are trained with stochastic gradient descent and a cross-entropy loss function, using early stopping on a validation dataset and l2 regularization with Model Accuracies Data & Model Test Acc Text Task Model 80.93 Prototype 80.64 Tabular Task Model 83.49 Prototype 81.90 Table 4: Model accuracies on each data domain. Text data is split into partitions of 70%, 10%, and 20% for the train, validation, and test sets, respectively. We use the same data processing scheme as Ribeiro et al. (2018) for tabular data. User Ratings Model Correctness n µ CI σ Text Correct 464 4.44 .49 1.89 Incorrect 468 4.12 .67 1.81 Tabular Correct 391 5.09 .27 1.64 Incorrect 394 4.64 .27 1.69 Table 5: User simulatability ratings grouped by model correctness and data domain. Users do not seem to be rating explanations simply based on model correctness, as the differences in group means based on model correctness are not significant at a level of p < .05. a coefficient of 1e−4. See training details for the prototype models below. Prototype Model Training. Here we describe our prototype training algorithm, beginning with weight initialization. We initialize 1) feature extraction layers using the pretrained weights of our neural task model, 2) prototype vectors via k-means clustering on the latent representations of the entire training set, and 3) final classifier weights as 1 where the corresponding prototype’s class matches the weight vector’s class, and −.5 elsewhere. The objective function for our prototype models contains three terms: 1) a cross entropy loss, 2) l1 regularization on off-class weights in the classifier, and 3) a separation cost term, which is the minimum distance between a latent representation and any prototype not belonging to the input’s class. Importance Scores in Protoype Model. For a given feature, we compute an importance score by taking the difference in function output with that feature present in the input, relative to when that feature is omitted. With text data, there are a num5551 Prediction Period Learning Period Rounds = {1,3} Rounds = {2,4} Round = 3 Input Label Model Output Figure 3: Forward simulation test procedure. We measure human users’ ability to predict model behavior. We isolate the effect of explanations by first measuring baseline performance after users are shown examples of model behavior (Rounds 1, 2), and then measuring performance after they are shown explained examples of model behavior (Rounds 3, 4). ber of mechanisms by which one can omit a word from an input; we opt for setting that word’s embedding to the zero vector. For tabular data, to estimate a variable value’s importance we compute a measure of evidence gain from knowing the value, relative to not knowing it. Formally, our importance function is the difference between the function value at the original input and the expected function value for the input with variable j removed. The expectation is taken over a distribution generated by an imputation model conditioned on the remaining covariates. Importance(xi,j) = f(xi) −Ep(xi,j|xi,−j)f(xi,−j ∪xi,j) where p(xi,j|xi,−j) is given by a multinomial logistic regression fit to the training data, and xi,−j is the data point without feature j, and f(xi,−j ∪ xi,j) is the data point xi,−j with feature value xi,j imputed at index j. We choose to use logistic regressions with no feature engineering in order to 1) generate calibrated probability distributions, and 2) scale straightforwardly with dataset size. Decision Boundary Algorithm. In detail, the algorithm takes as input a data point x∗, the classifier f, a perturbation distribution D(·|x∗), and a measure of distance between inputs d(x1, x2). We first sample {˜x}10,000 i=1 from the perturbation distribution around x∗. The eligible perturbations to Rounds = {1,2} Prediction Period Round = 2 Input Label Counterfactual Output Counterfactual Input Figure 4: Counterfactual simulation test procedure. Users see model behavior for an input, then they predict model behavior on an edited version of the input. We isolate the effect of explanations by measuring user accuracy with and without explanations. choose from are those with the opposite prediction from the original: E = {˜xi|f(˜xi) ̸= f(x∗)}. Then using a distance function d, we select a counterfactual input as x(c) = min ˜xi∈E d(x∗, ˜xi) We provide a path from x∗to x(c) by greedily picking the single edit from the remaining edits that least changes the model’s evidence margin, which is the difference between positive and negative class scores. Our distance function is the count of different features between inputs, plus the squared Euclidean distance between latent representations. The Euclidean distance is on a scale such that it serves as a tie-breaker: d(x1, x2) = X j 1(x1j ̸= x2j) + ||f(x1) −f(x2)||2 2. A.2 Perturbation Distributions We design perturbation distributions for two points in our experiments: 1) selecting counterfactual inputs in simulation tests, and 2) generating decision boundary explanations. First, we describe our approaches for selecting counterfactual inputs, which are conditioned on the need for a certain prediction type: either the same prediction as the original input or the alternative class. In both data domains, we sample 10, 000 local perturbations around the input and then randomly pick a sample that the model predicts to be of the needed prediction type. While working with tabular data, we sample perturbations as follows: we 5552 Figure 5: A screenshot of our user testing interface. This example is of the counterfactual Post test with LIME for text data. randomly choose to make between 1 and 3 edits, then choose the features to edit uniformly at random, and finally pick new feature values uniformly at random. The only sampling constraint is that a variable cannot be set as its original value. For text data, we use a strategy that is similar to sampling from the perturbation distribution in Ribeiro et al. (2018), which is to randomly substitute words with their neighbors in GloVe word embedding space, sampling neighbors with probability proportional to their similarity. We make a few changes: we 1) decrease probability of token change with the length of sentence, 2) cap the number of edited words at 5 in the chosen perturbation if possible, and 3) limit edited tokens to be nouns, verbs, adjectives, adverbs, and adpositions. Example perturbations are shown in the example of the user testing interface in Figure 5, which is given for a counterfactual test with text data. A.3 Simulation Test Design In Figures 3 and 4, we include additional representations of our experimental design, showing each test separately and in slightly greater detail than in Figure 1. A.4 Testing Environment We show a screenshot of our user testing interface in Figure 5. This example is of the counterfactual Post test with LIME for text data. Tests are administered through spreadsheets, wherein users read test material and place responses. Users are guided from file to file by the experimenter.
2020
491
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5553–5563 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5553 Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions Xiaochuang Han Carnegie Mellon University [email protected] Byron C. Wallace Northeastern University [email protected] Yulia Tsvetkov Carnegie Mellon University [email protected] Abstract Modern deep learning models for NLP are notoriously opaque. This has motivated the development of methods for interpreting such models, e.g., via gradient-based saliency maps or the visualization of attention weights. Such approaches aim to provide explanations for a particular model prediction by highlighting important words in the corresponding input text. While this might be useful for tasks where decisions are explicitly influenced by individual tokens in the input, we suspect that such highlighting is not always suitable for tasks where model decisions should be driven by more complex reasoning. In this work, we investigate the use of influence functions for NLP, providing an alternative approach to interpreting neural text classifiers. Influence functions explain the decisions of a model by identifying influential training examples. Despite the promise of this approach, influence functions have not yet been extensively evaluated in the context of NLP, a gap addressed by this work. We conduct a comparison between influence functions and common word-saliency methods on representative tasks. As suspected, we find that influence functions are particularly useful for natural language inference, a task in which ‘saliency maps’ may not provide clear interpretation. Furthermore, we develop a new quantitative measure based on influence functions that can reveal artifacts in training data.1 1 Introduction Deep learning models have become increasingly complex, and unfortunately their inscrutability has grown in tandem with their predictive power (Doshi-Velez and Kim, 2017). This has motivated efforts to design example-specific approaches to interpreting black box NLP model predictions, i.e., 1Code is available at https://github.com/ xhan77/influence-function-analysis. indicating specific input tokens as being particularly influential for a given prediction. This in turn facilitates the construction of saliency maps over texts, in which words are highlighted with intensity proportional to continuous ‘importance’ scores. Prominent examples of the latter include gradient-based attribution (Simonyan et al., 2014; Sundararajan et al., 2017; Smilkov et al., 2017), LIME (Ribeiro et al., 2016), and attention-based (Xu et al., 2015) heatmaps. While widely used and potentially useful for some lexicon-driven tasks (e.g., sentiment analysis), we argue that by virtue of being constrained to highlighting individual input tokens, saliency maps will necessarily fail to explain predictions in more complex semantic tasks involving reasoning, such as natural language inference (NLI), where fine-grained interactions between multiple words or spans are key (Camburu et al., 2018). Moreover, saliency maps are inherently limited as a model debugging tool; they may tell us which inputs the model found to be important, but not why. To address these shortcomings, we investigate the use of what Lipton (2018) referred to as explanation by example. Instead of constructing importance scores over the input texts on which the model makes predictions, such methods rank training examples by their influence on the model’s prediction for the test input (Caruana et al., 1999; Koh and Liang, 2017; Card et al., 2019). Specifically, we are interested in the use of influence functions (Koh and Liang, 2017), which are in a sense inherently ‘faithful’ in that they reveal the training examples most responsible for particular predictions. These do not require any modifications to the model structure. This paper presents a series of experiments intended to evaluate the potential utility of influence functions for better understanding modern neural NLP models. In this context, our contributions in5554 … Figure 1: A sentiment analysis example interpreted by gradient-based saliency maps (left) and influence functions (right). Note that this example is classified incorrectly by the model. Positive saliency tokens and highly influential examples may suggest why the model makes the wrong decision; tokens and examples with negative saliency or influence scores may decrease the model’s confidence in making that decision. clude answering the following research questions. RQ1 We empirically assess whether the approximation to the influence functions (Koh and Liang, 2017) can be reliably used to interpret decisions of deep transformer-based models such as BERT (Devlin et al., 2019). RQ2 We investigate the degree to which results from the influence function are consistent with insights gleaned from gradient-based saliency scores for representative NLP tasks. RQ3 We explore the application of influence functions as a mechanism to reveal artifacts (or confounds) in training data that might be exploited by models. To the best of our knowledge, this is the first work in NLP to compare interpretation methods that construct saliency maps over inputs with methods that explain predictions via influential training examples. We also propose a new quantitative measurement for the effect of hypothesized artifacts (Gururangan et al., 2018; McCoy et al., 2019) on the model’s prediction using influence functions. 2 Explaining Black-box Model Predictions Machine learning models in NLP depend on two factors when making predictions: the input text and the model parameters. Prior attempts to interpret opaque NLP models have typically focused on the input text. Our work investigates the complementary approach of interpreting predictions by analyzing the influence of examples in training data. Saliency maps aim to provide interpretability by highlighting parts of the input text, whereas influence functions seek clues in the model parameters, eventually locating interpretations within the training examples that influenced these estimates. In this section we explain the two interpretation methods in detail.2 2.1 Gradient-based saliency maps As a standard, illustrative ‘explanation-by-inputfeatures’ method, we focus on gradient-based saliency maps, in which the gradient of the loss L is computed with respect to each token t in the input text, and the magnitude of the gradient serves as a feature importance score (Simonyan et al., 2014; Li et al., 2016a). Gradients have the advantage of being locally ‘faithful’ by construction: they tell us how much the loss would change, were we to perturb a token by a small amount. Gradient-based attributions are also agnostic with respect to the model, as long as it is differentiable with respect to inputs. Finally, calculating gradients is computationally efficient, especially compared to methods that require post-hoc input perturbation and function fitting, like LIME (Ribeiro et al., 2016). We are interested in why the model made a particular prediction. We therefore define a loss Lˆy with respect to the prediction ˆyi that the model actually made, rather than the ground truth yi. For each token t ∈xi, we define a saliency score −∇e(t)Lˆy · e(t), where e(t) is the embedding of t. This is also referred as the “gradient × input” method in Shrikumar et al. (2017). The “gradient” ∇e(t)Lˆy captures the sensitivity of the loss to the change in the input embedding, and the “input” 2 Here we focus on interpretability approaches which are faithful (Wiegreffe and Pinter, 2019; Jacovi and Goldberg, 2020; Jain et al., 2020) by construction; other approaches are discussed in §6. 5555 e(t) leverages the sign and magnitude of the input. The final saliency score of each token t would be L1-normalized across all tokens in xi. Unlike Simonyan et al. (2014) and Li et al. (2016a), when scoring features for importance, we do not take the absolute value of the saliency score, as this encodes whether a token is positively influencing the prediction (i.e., providing support the prediction) or negatively influencing the prediction (highlighting counter-evidence). We show an example in the left part of Figure 1. 2.2 Influence functions In contrast to explanations in the form of tokenlevel heatmaps, the influence function provides a method for tracing model predictions back to training examples. It first approximates how upweighting a particular training example (xi, yi) in the training set {(x1, y1), . . . , (xn, yn)} by ϵi would change the learned model parameters ˆθ: dˆθ dϵi = −( 1 n n X j=1 ∇2 θL(xj, yj, ˆθ))−1∇θL(xi, yi, ˆθ) We can then use the chain rule to measure how this change in the model parameters would in turn affect the loss of the test input (as in saliency maps, w.r.t. the model prediction): dLˆy dϵi = ∇θLˆy · dˆθ dϵi More details (including proofs) can be found in Koh and Liang (2017). We define the influence score for each training example (xi, yi) as −dLˆy dϵi , and then z-normalize it across all examples in the training set. Note that since Lˆy is defined with respect to a particular test input, influence scores of training examples are also defined for individual test instances. Intuitively, a positive influence score for a training example means: were we to remove this example from the train set, we would expect a drop in the model’s confidence when making the prediction on the test input. A negative influence score means that removing the training example would increase the model’s confidence in this prediction. We show an example in the right part of Figure 1. 3 Experimental Setup We are interested in analyzing and comparing the two interpretation approaches (gradient-based attributions and influence functions) on relatively shallow, lexicon-driven tasks and on more complex, reasoning-driven tasks. We focus on sentiment analysis and natural language inference (NLI) as illustrative examples of these properties, respectively. Both models are implemented on top of BERT encoders (Devlin et al., 2019). In particular we use BERT-Base, with the first 8 of the 12 layers frozen, only fine-tuning the last 4 transformer layers and the final projection layer.3 It is worth noting that influence functions are guaranteed to be accurate only when the model is strictly convex (i.e., its Hessian is positive definite and thus invertible) and is trained to convergence. However, deep neural models like BERT are not convex, and one often performs early stopping during training. We refer to Koh and Liang (2017) for details on how influence functions can nonetheless provide good approximations. To summarize briefly: for the non-convexity issue, we add an appropriate ‘damping’ term to the model’s Hessian so that it is positive definite and invertible. Concerning non-convergence: the approximated influence may still be interpretable as the true influence of each training example plus a constant offset that does not depend on the individual examples. Aside from this theory, we also perform a sanity check in §4 to show that influence functions can be applied to BERT in practice on the two tasks that we consider. Sentiment analysis We use a binarized version of the Stanford Sentiment Treebank (SST-2) (Socher et al., 2013). Our BERT-based model is trained on 10k examples; this achieves 89.6% accuracy on the SST-2 dev set of 872 examples. We randomly sample 50 examples from the SST-2 dev set as the set for which we extract explanations for model predictions. Natural language inference Our deeper ‘semantic’ task is NLI, a classification problem that concerns the relationship between a premise sentence and a hypothesis sentence. NLI is a ternary task with three types of premise–hypothesis relations: entailment, neutral, and contradiction. We train our BERT model on the Multi-Genre NLI (MNLI) dataset (Williams et al., 2018), which contains 393k 3We used smaller BERT models because influence functions are notoriously expensive to compute. We also resort to the same stochastic estimation method, LiSSA (Agarwal et al., 2017), as in Koh and Liang (2017), and we deliberately reduce the size of our training sets. Even with these efforts, computing the influence scores of 10k training examples w.r.t. one typical test input would take approximately 10 minutes on one NVIDIA GeForce RTX 2080 Ti GPU. 5556 premise and hypothesis pairs of three relations from 10 different genres. We collapse the neutral and contradiction labels to a single non-entailment label and only use 10k randomly sampled examples for training. On the MNLI dev set of 9815 examples, the model achieves an accuracy of 84.6%. To evaluate model interpretations in a controlled manner, we adopt a diagnostic dataset, HANS (McCoy et al., 2019). This contains a balanced number of examples where hypotheses may or may not entail premises with certain artifacts that they call ‘heuristics’ (e.g., lexical overlap, subsequence). The original HANS dataset contains 30k examples that span 30 different heuristic sub-categories. We test our model and interpretation methods on 30 examples covering all the sub-categories. 4 Evaluating Influence Functions for NLP RQ1: Is influence function approximation reliable when used for deep architectures in NLP? Influence functions are designed to be an approximation to leave-one-out training for each training example. But the theory only proves that this works on strictly convex models. While Koh and Liang (2017) show that influence functions can be a good approximation even when the convexity assumption is not satisfied (in their case, a CNN for image classification), it is still not obvious that the influence function would work for BERT. Therefore, we conduct a sanity check: for each instance in our test set, we by turns remove the most positively influential 10%, the most negatively influential 10%, the least influential (where influence scores are near zero) 10%, and a random 10% of training examples. We are interested in how these removals in retraining would affect the confidence of model predictions. Table 1 and Table 2 show the result of experiments on sentiment analysis and NLI, repeated with 5 random initialization seeds. Removal type Avg. ∆in prediction confidence Positively influential −6.00% (±1.12%) Negative influential +0.17% (±0.50%) Least influential −1.30% (±0.54%) Random −1.67% (±0.54%) Table 1: Sanity check for influence function result on BERT in sentiment analysis. The results are largely in accordance with our Removal type Avg. ∆in prediction confidence Positively influential −11.62% (±2.09%) Negative influential +2.01% (±1.44%) Least influential +1.01% (±0.97%) Random +0.13% (±1.07%) Table 2: Sanity check for influence function result on BERT in NLI. expectation in both tasks: removing the most positively influential training examples would cause the model to have a significantly lower prediction confidence for each test example; removing the most negatively influential examples makes the model slightly more confident during prediction; and removing the least influential examples leads to an effect that is closest to removing a same amount of random examples (although we note that deleting the least influential features still yields a larger ∆than choosing features at random to remove in NLI). We therefore conclude that the influence function behaves reasonably and reliably for BERT in both sentiment analysis and NLI tasks. RQ2. Are gradient-based saliency maps and ‘influential’ examples compatible? Comparing saliency maps and outputs from application of the influence function is not straightforward. Saliency maps communicate the importance of individual tokens in test instances, while influence functions measure the importance of training examples. Still, it is reasonable to ask if they seem to tell similar stories regarding specific predictions. We propose two experiments that aim to estimate the consistency between these two interpretation methods. The first experiment addresses whether a token with high saliency also appears more frequently in the training examples that have relatively high influence. For each example in the test set, we find the tokens with the most positive, most negative, and median saliency scores. We then find all the influential training examples w.r.t. the test inputs that contain one of these tokens. These training examples could have any labels in the label set. We further only consider examples whose label is the same as the test prediction, because the token saliency scores, whether positive or negative, are directly w.r.t. the test prediction, and the effect of a token in an oppositely labeled training example is therefore indirect. We compute the average influence score of these 5557 training examples and report the results on top 10%, 20%, 50%, and all training examples for both sentiment analysis and NLI tasks in Figure 2 and Figure 3 respectively. The reason we have results at different granularity is that from empirical results in Koh and Liang (2017), we see that the influence function approximations tend to be less accurate when going from the most influential to the less influential examples down in the spectrum. Figure 2: Average influence score of top sentiment analysis training examples that contain a token in test example with most positive, most negative, or median saliency. Error bars depict standard errors. Figure 3: Average influence score of top NLI training examples that contain a token in test example with most positive, most negative, or median saliency. Standard error is shown in error bars. In the task of sentiment analysis, we observe that training examples containing the most positively salient token in the test example generally have a higher influence to the test prediction. However, we do not see this trend (in fact, it is the opposite) in the task of natural language inference. The second experiment answers the question of whether the influence result would change significantly when a salient token is removed from the Saliency of the removed token @0.1% @0.2% @0.5% @1% Most negative 75.6% 77.4% 80.0% 82.4% Median 84.2% 86.7% 88.9% 89.1% Most positive 65.2% 68.8% 71.4% 72.0% Table 3: Average overlap rate of top influential sentiment analysis training examples before and after removal of a token with the most positive, most negative, or median saliency. Saliency of the removed token @0.1% @0.2% @0.5% @1% Most negative 33.0% 33.5% 37.5% 40.9% Median 79.3% 78.0% 80.5% 84.0% Most positive 46.0% 48.3% 49.9% 54.9% Table 4: Average overlap rate of top influential NLI training examples before and after removal of a token with the most positive, negative, or median saliency. input. Again, for each of the test examples, we identify the tokens with the most positive, most negative, and median saliency score. We by turns remove them from the input and compute the influence distribution over all training examples. We compare these new influence results with the one on the original input, and report an overlap rate of the top 0.1%, 0.2%, 0.5%, and 1% influential training examples before and after the token removal. Table 3 and Table 4 show results for sentiment analysis and NLI, respectively. When removing a token with the most positive saliency score, we expect the model to be less confident about its current prediction; it could possibly make a different prediction. Therefore, we expect to see a most different influence distribution from the original influence result compared to removing the token with median or the most negative saliency score. This is exactly what we observe in Table 3 for sentiment analysis. However, for NLI, we again see a rather opposite trend: removing the most negatively salient token (might make the prediction more confident but should not change the prediction itself) leads to the most different influence distribution. We conclude from the above two experiments that gradient-based saliency maps and influential examples are compatible and consistent with each other in sentiment analysis. However, for NLI the two approaches do not agree with each other and 5558 could potentially tell very different stories. To this end, we take a closer look at the task of NLI. 5 Interpreting NLI Predictions with Influence Functions Are saliency-based explanations useful for NLI? Gradient-based saliency maps are faithful by construction, but this does not mean that they will highlight input tokens that humans find plausible or useful. We hypothesize that highlighting individual input tokens as important is likely most useful for ‘shallow’ classification tasks like sentiment analysis, and less so for more complex reasoning tasks such as NLI. To contrast the types of explanations these methods offer in this context, we show explanations for a prediction made for a typical example in HANS in the form of a saliency map and influential examples in Table 5. The tokens that get the most positive and most negative saliency scores are marked in cyan and red, respectively. The training examples with the most positive and most negative influence scores are presented as supporting and opposing instances, respectively. Test input P: The :::::: manager was ::::::::: encouraged by the secretary. H: The secretary encouraged the manager. {entail} Most supporting training examples P: Because you’re having fun. H: Because you’re having fun. [entail] P: I don’t know if I was in heaven or hell, said Lillian Carter, the president’s mother, after a visit. H: The president’s mother visited. [entail] P: Inverse price caps. H: Inward caps on price. [entail] P: Do it now, think ’bout it later. H: Don’t think about it now, just do it. [entail] Most::::::: opposing training examples P: H’m, yes, that might be, said John. H: Yes, that might be the case, said John. [non-entail] P: This coalition of public and private entities undertakes initiatives aimed at raising public awareness about personal finance and retirement planning. H: Personal finance and retirement planning are initiatives aimed at raising public awareness. [non-entail] Table 5: A correctly predicted example in HANS interpreted by saliency map and influence function. The relationship classification decision in NLI is often made through an interaction between multiple words or spans. Therefore, an importance measure on each individual token might not give us much useful insight into model prediction. Though influence functions also do not explicitly tell us which latent interactions between words or spans informed the model prediction, we can test whether the model is relying on some hypothesized artifacts in a post-hoc way by looking at patterns in the influential training examples. In Table 5, though the most influential examples (both supporting and opposing) are ostensibly far from the test input, they all exhibit lexical overlap between the premise and hypothesis. Some of the influential training examples (e.g., the 4th supporting example and 2nd opposing example) capture a reverse ordering of spans in the premise and hypothesis. We note that our test input also has a high lexical overlap and similar reverse ordering. This exposes a problem: the model might be relying on the wrong artifacts like word overlap during the decision process rather than learning the relationship between the active and passive voice in our case. This problem was surfaced by finding influential examples. 5.1 Quantitatively measuring artifacts McCoy et al. (2019) hypothesize that the main artifact NLI models might learn is lexical overlap. In fact, for all of the examples in HANS, every word in the hypothesis would appear in the corresponding premise (100% lexical overlap rate). Half of the examples would have an entailment relationship while the other half have an non-entailment relationship. McCoy et al. (2019) compare four models with strong performance in MNLI, and all of them predict far more entailments than non-entailments. Because of this imbalance in prediction, they conclude that the models are perhaps exploiting artifacts in data when making decisions. We see one potential problem out of the above method: it can only be applied to a certain group of examples and imply a general model behavior by examining the prediction imbalance. However, model behavior should depend on the actual example it sees each time. The extent to which the model exploits the artifact in each individual example remains unclear. To analyze the effect of artifacts on individual examples, we propose a method using influence functions. We hypothesize that if an artifact in5559 forms the model’s predictions for a test instance, the most influential training examples for this test example should contain occurrences of said artifact. For instance, if our model exploits ‘lexical overlap’ when predicting the relation between a premise and a hypothesis, we should expect the most influential training examples found by the influence function to have a highly overlapping premise and hypothesis. In Figure 4a, we plot each training example’s influence score and lexical overlap rate between its premise and hypothesis for a typical example in the HANS dataset. In linen with our expectation, the most influential (both positively and negatively) training examples tend to have a higher lexical overlap rate. Note that we also expect this trend for the most negatively influential examples, because they influence the model’s prediction as much as the positively influential examples do, only in a different direction. To quantify this bi-polarizing effect, we find it natural to fit a quadratic regression to the influenceartifact distribution. We would expect a high positive quadratic coefficient if the artifact feature appears more in the most influential examples. For an irrelevant feature, we would expect this coefficient to be zero. With this new quantitative measure, we are ready to explore the below problems unanswered by the original diagnostic dataset. For test examples predicted as non-entailment, did the model fail to recognize that they have a lexical overlap feature? Was the artifact not exploited in these cases? Figure 4a and Figure 4b show two examples in HANS, one predicted as entailment and the other predicted as non-entailment. We observe that the example predicted as nonentailment does not have a significantly different influence-artifact pattern from the entailment example. In fact, the average quadratic coefficients for all examples predicted as entailment and nonentailment are +3.28×10−3 and +3.30×10−3 respectively. Therefore, for predicted non-entailment examples, we still see that the most influential training examples tend to have a high rate of lexical overlap, indicating that the model still recognizes the artifact in these cases. The model relies on training examples with high lexical overlap when predicting in the artificial HANS dataset. Would it still exploit the same artifact for natural examples? Apart from finding the most influential training examples for each HANS example, we also apply influence functions on 50 natural MNLI examples, not controlled to exhibit any specific artifacts. A typical example is shown in Figure 4c. The average quadratic coefficient over all 50 natural examples is +0.65 × 10−3, which is considerably smaller than the above cases in HANS dataset. The model therefore does not rely on as much lexical overlap in natural examples as in the diagnostic dataset. We have been analyzing scenarios focusing on one data artifact. What if we have a second artifact during prediction possibly indicating a contradicting decision? How will the model recognize the two artifacts in such a scenario? We know that lexical overlap could be a data artifact exploited by NLI models for making an entailment prediction in HANS. On the other hand, as briefly pointed out by McCoy et al. (2019), other artifacts like negation might be indicative of non-entailment. We are interested in how two contradicting artifacts might compete when they both appear in an example. We take all examples in HANS labeled as entailment and manually negate the hypothesis so that the relationship becomes non-entailment. For example, a hypothesis “the lawyers saw the professor” would become “the lawyers did not see the professor”. Figure 5a and Figure 5b show the influenceartifact distributions on both lexical overlap and negation for an original HANS example. Figure 5c and Figure 5d show the distributions for the same HANS example with negated hypothesis. The average quadratic coefficients on all examples are shown in Table 6. We observe that in the original HANS example, negation is actually a negative artifact: the training examples with negation tend to be the least influential ones. In the negated HANS example, we see the effect of negations becomes positive, while the effect of lexical overlap is drastically weakened. This confirms that the model recognizes the new set of artifacts, and the two are competing with each other. Importantly, observing an artifact in the most influential training examples is a necessary but not sufficient condition to concluding that it was truly exploited by the model. However, it can serve as a first step towards identifying artifacts in blackbox neural models and may be complemented by probing a larger set of hypothesized artifacts. 5560 (a) HANS example predicted as entailment. (P: The athlete by the doctors encouraged the senator. H: The athlete encouraged the senator.) Quadratic coefficient: +3.74 × 10−3. (b) HANS example predicted as nonentailment. (P: Since the author introduced the actors, the senators called the tourists. H: The senators called the tourists.) Quadratic coef: +3.59×10−3. (c) A typical MNLI example. (P: And uh as a matter of fact he’s a draft dodger. H: They dodged the draft, I’ll have you know.) Quadratic coefficient: +0.74 × 10−3. Figure 4: Influence-artifact distribution for different test examples. (a) Lexical overlap in original HANS example. Quadratic coefficient: +3.13 × 10−3. (b) Negation in original HANS example. Quadratic coefficient: −0.92 × 10−3. (c) Lexical overlap in negated HANS example. Quadratic coefficient: +0.76 × 10−3. (d) Negation in negated HANS example. Quadratic coefficient: +0.55 × 10−3. Figure 5: Influence-artifact distribution for an original and negated HANS example. (P: The lawyers saw the professor behind the bankers. H: The lawyers saw / did not see the professor.) Lexical overlap coef Negation coef Original +3.05 × 10−3 −1.13 × 10−3 Negated +0.53 × 10−3 +0.27 × 10−3 Table 6: Average quadratic coefficients of the influence-artifact distribution for all original HANS examples and all negated HANS examples. 6 Related Work Interpreting NLP model predictions by constructing importance scores over the input tokens is a widely adopted approach (Belinkov and Glass, 2019). Since the appearance and rise of attentionbased models, many work naturally inspect attention scores and interpret with them. However, we are aware of the recent discussion over whether attention is a kind of faithful explanation (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019). Using vanilla attention as interpretation could be more problematic in now ubiquitous deep transformerbased models, such as we use here. Gradient-based saliency maps are locally ‘faithful’ by construction. Other than the vanilla gradients (Simonyan et al., 2014) and the “gradient × input” method (Shrikumar et al., 2017) we use in this work, there are some variants that aim to make gradient-based attributions robust to potential noise in the input (Sundararajan et al., 2017; Smilkov et al., 2017). We also note that Feng et al. (2018) find that gradient-based methods sometimes yield counter-intuitive results when iterative input reductions are performed. Other token-level interpretations include input perturbation (Li et al., 2016b) which measure a token’s importance by the effect of removing it, and LIME (Ribeiro et al., 2016) which can explain any model’s decision by fitting a sparse linear model to the local region of the input example. The main focus of this work is the applicability of influence functions (Koh and Liang, 2017) as an interpretation method in NLP tasks, and to highlight the possibility of using this to surface annotation artifacts. Other methods that can trace the model’s decision back into the training examples include deep weighted averaging classifiers (Card et al., 2019), which make decisions based on the labels of training examples that are most similar to the test input by some distance metrics. Croce et al. (2019) use kernel-based deep architectures 5561 that project test inputs to a space determined by a group of sampled training examples and make explanations through the most activated training instances. While these methods can similarly identify the ‘influential’ training examples, they require special designs or modifications to the model and could sacrifice the model’s performance and generalizability. Other general methods for model interpretability include adversarial-attack approaches that identify that part of input texts can lead to drastically different model decisions when minimally edited (Ebrahimi et al., 2018; Ribeiro et al., 2018), probing approaches that test internal representations of models for certain tasks and properties (Liu et al., 2019b; Hewitt and Liang, 2019), and generative approaches that make the model jointly extract or generate natural language explanations to support predictions (Lei et al., 2016; Camburu et al., 2018; Liu et al., 2019a; Rajani et al., 2019). Specific to the NLI task, Gururangan et al. (2018) recognize and define some possible artifacts within NLI annotations. McCoy et al. (2019) create a diagnostic dataset that we use in this work and suggest that the model could be exploiting some artifacts in training data based on its poor performance on the diagnostic set. Beyond NLI, the negative influence of artifacts in data was explored in other text classification tasks (Pryzant et al., 2018; Kumar et al., 2019; Landeiro et al., 2019), focusing on approaches to adversarial learning to demote the artifacts. 7 Conclusion We compared two complementary interpretation methods—gradient-based saliency maps and influence functions—in two text classification tasks: sentiment analysis and NLI. We first validated the reliability of influence functions when used with deep transformer-based models. We found that in a lexicon-driven sentiment analysis task, saliency maps and influence functions are largely consistent with each other. They are not consistent, however, on the task of NLI. We posit that influence functions may be a more suitable approach to interpreting models for such relatively complex natural language ‘understanding‘ tasks (while simpler attribution methods like gradients may be sufficient for tasks like sentiment analysis). We introduced a new potential use of influence functions: revealing and quantifying the effect of data artifacts on model predictions, which have been shown to be very common in NLI. Future work might explore how rankings induced over training instances by influence functions can be systematically analyzed in a stand-alone manner (rather than in comparison with interpretations from other methods), and how these might be used to improve model performance. Finally, we are interested in exploring how these types of explanations are actually interpreted by users, and whether providing them actually establishes trust in predictive systems. Acknowledgments. We thank the anonymous ACL reviewers and members of TsvetShop at CMU for helpful discussions of this work. This material is based upon work supported by NSF grants IIS1812327 and SES1926043, and by Amazon MLRA award. Wallace’s contributions were supported by the Army Research Office (W911NF1810328). We also thank Amazon for providing GPU credits. References Naman Agarwal, Brian Bullins, and Elad Hazan. 2017. Second-order stochastic optimization for machine learning in linear time. Journal of Machine Learning Research (JMLR), 18:116:1–116:40. Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72. Oana-Maria Camburu, Tim Rockt¨aschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. In Proc. NeurIPS. Dallas Card, Michael Zhang, and Noah A. Smith. 2019. Deep weighted averaging classifiers. In FAT*. Rich Caruana, Hooshang Kangarloo, John David N. Dionisio, Usha S. Sinha, and David B. Johnson. 1999. Case-based explanation of non-case-based learning methods. Proc. AMIA Symposium, pages 212–5. Danilo Croce, Daniele Rossini, and Roberto Basili. 2019. Auditing deep learning processes through kernel-based explanatory models. In Proc. EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL-HLT. Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. 5562 Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proc. ACL. Shi Feng, Eric Wallace, Alvin Grissom, Mohit Iyyer, Pedro Rodriguez, and Jordan L. Boyd-Graber. 2018. Pathologies of neural models make interpretation difficult. In Proc. EMNLP. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proc. NAACL-HLT. John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proc. EMNLP, pages 2733–2743. Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? ArXiv, abs/2004.03685. Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proc. NAACL-HLT. Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C. Wallace. 2020. Learning to faithfully rationalize by construction. In Proc. ACL. Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proc. ICML. Sachin Kumar, Shuly Wintner, Noah A. Smith, and Yulia Tsvetkov. 2019. Topics to avoid: Demoting latent confounds in text classification. In Proc. EMNLP, pages 4151–4161. Virgile Landeiro, Tuan Tran, and Aron Culotta. 2019. Discovering and controlling for latent confounds in text classification using adversarial domain adaptation. In Proc. SIAM International Conference on Data Mining, pages 298–305. Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2016. Rationalizing neural predictions. In Proc. EMNLP. Jiwei Li, Xinlei Chen, Eduard H. Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural models in NLP. In Proc. HLT-NAACL. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding neural networks through representation erasure. ArXiv, abs/1612.08220. Zachary Chase Lipton. 2018. The mythos of model interpretability. Commun. ACM, 61:36–43. Hui Liu, Qingyu Yin, and William Yang Wang. 2019a. Towards explainable NLP: A generative explanation framework for text classification. In Proc. ACL. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019b. Linguistic knowledge and transferability of contextual representations. In Proc. NAACL-HLT. R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proc. ACL. Reid Pryzant, Kelly Shen, Dan Jurafsky, and Stefan Wagner. 2018. Deconfounded lexicon induction for interpretable social science. In NAACL-HLT. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. arXiv preprint arXiv:1906.02361. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “why should i trust you?”: Explaining the predictions of any classifier. In Proc. HLTNAACL (System Demonstrations). Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proc. ACL. Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In Proc. ICML. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Workshop at International Conference on Learning Representations. Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Vi´egas, and Martin Wattenberg. 2017. SmoothGrad: removing noise by adding noise. ArXiv, abs/1706.03825. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proc. ICML. Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, and Sameer Singh. 2019. AllenNLP interpret: A framework for explaining predictions of NLP models. In Proc. EMNLP (System Demonstrations), pages 7–12. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proc. EMNLP, pages 11–20. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc. NAACL-HLT. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. 5563 Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proc. ICML. A Implementation Details The main model we used for experiments is a BERT-Base model (Devlin et al., 2019), adapted from Wolf et al. (2019). We “froze” the embedding layer and the first 8 transformer layers and only fine-tuned the last 4 transformer layers and the final projection layer. We used the default BERT optimizer with default hyperparameters: a learning rate of 5e−5, a total of 3 epochs, a max sequence length of 128, and a training batch size of 32. For gradient-based saliency maps, we used a “vanilla” version implemented by Wallace et al. (2019). For influence functions, we adapted code from Koh and Liang (2017) to PyTorch and used the same stochastic estimation trick, LiSSA (Agarwal et al., 2017). Since our model is not convex, we used a “damping” term (as mentioned in §3) of 3e−3. This value was picked so that the recursive approximation to the inverse Hessian-vector product can be finished (converged) in a reasonable time. More specifically, we chose the recursion depth to be 2500 (with a total of 10k training examples), the number of recursions to be 1, and a scaling factor to be 1e4. In each step estimating the Hessian-vector product, we took a batch of 8 training examples for stability. We empirically checked that the inverse Hessian-vector product converges after the recursive estimation for all test examples on which we performed the analysis.
2020
492
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5564–5577 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5564 Finding Universal Grammatical Relations in Multilingual BERT Ethan A. Chi, John Hewitt, and Christopher D. Manning Department of Computer Science Stanford University {ethanchi,johnhew,manning}@cs.stanford.edu Abstract Recent work has found evidence that Multilingual BERT (mBERT), a transformer-based multilingual masked language model, is capable of zero-shot cross-lingual transfer, suggesting that some aspects of its representations are shared cross-lingually. To better understand this overlap, we extend recent work on finding syntactic trees in neural networks’ internal representations to the multilingual setting. We show that subspaces of mBERT representations recover syntactic tree distances in languages other than English, and that these subspaces are approximately shared across languages. Motivated by these results, we present an unsupervised analysis method that provides evidence mBERT learns representations of syntactic dependency labels, in the form of clusters which largely agree with the Universal Dependencies taxonomy. This evidence suggests that even without explicit supervision, multilingual masked language models learn certain linguistic universals. 1 Introduction Past work (Liu et al., 2019; Tenney et al., 2019a,b) has found that masked language models such as BERT (Devlin et al., 2019) learn a surprising amount of linguistic structure, despite a lack of direct linguistic supervision. Recently, large multilingual masked language models such as Multilingual BERT (mBERT) and XLM (Conneau and Lample, 2019; Conneau et al., 2019) have shown strong cross-lingual performance on tasks like XNLI (Lample and Conneau, 2019; Williams et al., 2018) and dependency parsing (Wu and Dredze, 2019). Much previous analysis has been motivated by a desire to explain why BERT-like models perform so well on downstream applications in the monolingual setting, which begs the question: what properties of these models make them so crosslingually effective? Figure 1: t-SNE visualization of head-dependent dependency pairs belonging to selected dependencies in English and French, projected into a syntactic subspace of Multilingual BERT, as learned on English syntax trees. Colors correspond to gold UD dependency type labels. Although neither mBERT nor our probe was ever trained on UD dependency labels, English and French dependencies exhibit cross-lingual clustering that largely agrees with UD dependency labels. In this paper, we examine the extent to which Multilingual BERT learns a cross-lingual representation of syntactic structure. We extend probing methodology, in which a simple supervised model is used to predict linguistic properties from a model’s representations. In a key departure from past work, we not only evaluate a probe’s performance (on recreating dependency tree structure), but also use the probe as a window into understanding aspects of the representation that the probe was not trained on (i.e. dependency labels; Figure 1). In particular, we use the structural probing method of Hewitt and Manning (2019), which probes for syntactic trees by finding a linear transformation under which two words’ distance in their dependency parse is approximated by the squared 5565 distance between their model representation vectors under a linear transformation. After evaluating whether such transformations recover syntactic tree distances across languages in mBERT, we turn to analyzing the transformed vector representations themselves. We interpret the linear transformation of the structural probe as defining a syntactic subspace (Figure 2), which intuitively may focus on syntactic aspects of the mBERT representations. Since the subspace is optimized to recreate syntactic tree distances, it has no supervision about edge labels (such as adjectival modifier or noun subject). This allows us to unsupervisedly analyze how representations of head-dependent pairs in syntactic trees cluster and qualitatively discuss how these clusters relate to linguistic notions of grammatical relations. We make the following contributions: • We find that structural probes extract considerably more syntax from mBERT than baselines in 10 languages, extending the structural probe result to a multilingual setting. • We demonstrate that mBERT represents some syntactic features in syntactic subspaces that overlap between languages. We find that structural probes trained on one language can recover syntax in other languages (zeroshot), demonstrating that the syntactic subspace found for each language picks up on features that BERT uses across languages. • Representing a dependency by the difference of the head and dependent vectors in the syntactic space, we show that mBERT represents dependency clusters that largely overlap with the dependency taxonomy of Universal Dependencies (UD) (Nivre et al., 2020); see Figure 1. Our method allows for fine-grained analysis of the distinctions made by mBERT that disagree with UD, one way of moving past probing’s limitation of detecting only linguistic properties we have training data for rather than properties inherent to the model. Our analysis sheds light on the cross-lingual properties of Multilingual BERT, through both zeroshot cross-lingual structural probe experiments and novel unsupervised dependency label discovery experiments which treat the probe’s syntactic subspace as an object of study. We find evidence that mBERT induces universal grammatical relations without any explicit supervision, which largely Figure 2: The structural probe recovers syntax by finding a syntactic subspace in which all syntactic trees’ distances are approximately encoded as squared L2 distance (Hewitt and Manning, 2019). agree with the dependency labels of Universal Dependencies.1 2 Methodology We present a brief overview of Hewitt and Manning (2019)’s structural probe, closely following their derivation. The method represents each dependency tree T as a distance metric where the distance between two words dT (wi, wj) is the number of edges in the path between them in T. It attempts to find a single linear transformation of the model’s word representation vector space under which squared distance recreates tree distance in any sentence. Formally, let hℓ 1:n be a sequence of n representations produced by a model from a sequence of n words wℓ 1:n composing sentence ℓ. Given a matrix B ∈Rk×m which specifies the probe parameters, we define a squared distance metric dB as the squared L2 distance after transformation by B: dB(hℓ i, hℓ j) = ||Bhℓ i −Bhℓ j||2 2 We optimize to find a B that recreates the tree distance dT ℓbetween all pairs of words (wℓ i, wℓ j) in all sentences sℓin the training set of a parsed corpus. Specifically, we optimize by gradient descent: arg min B X ℓ 1 |sℓ|2 X i,j |dT ℓ(wℓ i, wℓ j) −dB(hℓ i, hℓ j)| For more details, see Hewitt and Manning (2019). Departing from prior work, we view the probetransformed word vectors Bh themselves—not just the distances between them—as objects of study. 1Code for reproducing our experiments is available here: https://github.com/ethanachi/ multilingual-probing-visualization 5566 The rows of B are a basis that defines a subspace of Rm, which we call the syntactic subspace, and may focus only on parts of the original BERT representations. A vector Bh corresponds to a point in that space; the value of each dimension equals the dot product of h with one of the basis vectors.2 2.1 Experimental Settings These settings apply to all experiments using the structural probe throughout this paper. Data Multilingual BERT is pretrained on corpora in 104 languages; however, we probe the performance of the model in 11 languages (Arabic, Chinese, Czech, English, Farsi, Finnish, French, German, Indonesian, Latvian, and Spanish).3,4 Specifically, we probe the model on trees encoded in the Universal Dependencies v2 formalism (Nivre et al., 2020). Model In all our experiments, we investigate the 110M-parameter pre-trained weights of the BERTBase, Multilingual Cased model.5 Baselines We use the following baselines:6 • MBERTRAND: A model with the same parametrization as mBERT but no training. Specifically, all of the contextual attention layers are reinitialized from a normal distribution with the same mean and variance as the original parameters. However, the subword embeddings and positional encoding layers remain unchanged. As randomly initialized ELMo layers are a surprisingly competitive baseline for syntactic parsing (Conneau et al., 2018), we also expect this to be the case for BERT. In our experiments, we find that this baseline performs approximately equally across layers, so we draw always from Layer 7. • LINEAR: All sentences are given an exclusively left-to-right chain dependency analysis. 2For ease of notation, we will discuss vectors Bh as being in the syntactic subspace, despite being in Rk. 3When we refer to all languages, we refer to all languages in this set, not all languages that mBERT trains on. 4This list is not typologically representative of all human languages. However, we are constrained by the languages for which both large UD datasets and mBERT’s pretraining are available. Nevertheless, we try to achieve a reasonable spread over language families, while also having some pairs of close languages for comparison. 5https://github.com/google-research/bert 6We omit a baseline that uses uncontextualized word embeddings because Hewitt and Manning (2019) found it to be a weak baseline compared to the two we use. EVALUATION To evaluate transfer accuracy, we use both of the evaluation metrics of Hewitt and Manning (2019). That is, we report the Spearman correlation between predicted and true word pair distances (DSpr.).7 We also construct an undirected minimum spanning tree from said distances, and evaluate this tree on undirected, unlabeled attachment score (UUAS), the percentage of undirected edges placed correctly when compared to the gold tree. 3 Does mBERT Build a Syntactic Subspace for Each Language? We first investigate whether mBERT builds syntactic subspaces, potentially private to each language, for a subset of the languages it was trained on; this is a prerequisite for the existence of a shared, cross-lingual syntactic subspace. Specifically, we train the structural probe to recover tree distances in each of our eleven languages. We experiment with training syntactic probes of various ranks, as well as on embeddings from all 12 layers of mBERT. 3.1 Results We find that the syntactic probe recovers syntactic trees across all the languages we investigate, achieving on average an improvement of 22 points UUAS and 0.175 DSpr. over both baselines (Table 1, section IN-LANGUAGE).8 Additionally, the probe achieves significantly higher UUAS (on average, 9.3 points better on absolute performance and 6.7 points better on improvement over baseline) on Western European languages.9. Such languages have been shown to have better performance on recent shared task results on multilingual parsing (e.g. Zeman et al., 2018). However, we do not find a large improvement when evaluated on DSpr. (0.041 DSpr. absolute, -0.013 relative). We find that across all languages we examine, the structural probe most effectively recovers tree structure from the 7th or 8th mBERT layer (Figure 4). Furthermore, increasing the probe maximum rank beyond approximately 64 or 128 gives 7Following Hewitt and Manning (2019), we evaluate only sentences of lengths 5 to 50, first average correlations for word pairs in sentences of a specific length, and then average across sentence lengths. 8Throughout this paper, we report improvement over the stronger of our two baselines per-language. 9Here, we define Western European as Czech, English, French, German, and Spanish. 5567 Structural Probe Results: Undirected Unlabeled Attachment Score (UUAS) Arabic Czech German English Spanish Farsi Finnish French Indonesian Latvian Chinese Average LINEAR 57.1 45.4 42.8 41.5 44.6 52.6 50.1 46.4 55.2 47.0 44.2 47.9 MBERTRAND 49.8 57.3 55.2 57.4 55.3 43.2 54.9 61.2 53.2 53.0 41.1 52.9 IN-LANG 72.8 83.7 83.4 80.1 79.4 70.7 76.3 81.3 74.4 77.1 66.3 76.8 ∆BASELINE 15.7 26.4 28.1 22.6 24.1 18.0 21.4 20.1 19.1 24.1 22.1 22.0 SINGLETRAN 68.6 74.7 70.8 65.4 75.8 61.3 69.8 74.3 69.0 73.2 51.1 68.5 ∆BASELINE 11.5 17.4 15.6 8.0 20.4 8.7 14.9 13.1 13.8 20.2 6.9 13.7 HOLDOUT 70.4 77.8 75.1 68.9 75.5 63.3 70.7 76.4 70.8 73.7 51.3 70.4 ∆BASELINE 13.3 20.5 19.8 11.5 20.1 10.7 15.8 15.2 15.6 20.7 7.1 15.5 ALLLANGS 72.0 82.5 79.6 75.9 77.6 68.2 73.0 80.3 73.1 75.1 57.8 74.1 ∆BASELINE 14.9 25.2 24.4 18.5 22.2 15.6 18.1 19.1 17.9 22.1 13.7 19.2 Structural Probe Results: Distance Spearman Correlation (DSpr.) LINEAR .573 .570 .533 .567 .589 .489 .564 .598 .578 .543 .493 .554 MBERTRAND .657 .658 .672 .659 .693 .611 .621 .710 .656 .608 .590 .649 IN-LANG .822 .845 .846 .817 .859 .813 .812 .864 .807 .798 .777 .824 ∆BASELINE .165 .187 .174 .158 .166 .202 .191 .154 .151 .190 .187 .175 SINGLETRAN .774 .801 .807 .773 .838 .732 .787 .836 .772 .771 .655 .777 ∆BASELINE .117 .143 .135 .114 .145 .121 .166 .126 .117 .163 .064 .128 HOLDOUT .779 .821 .824 .788 .838 .744 .792 .840 .776 .775 .664 .786 ∆BASELINE .122 .163 .152 .129 .146 .133 .171 .130 .121 .166 .074 .137 ALLLANGS .795 .839 .836 .806 .848 .777 .802 .853 .789 .783 .717 .804 ∆BASELINE .138 .181 .165 .147 .155 .165 .181 .143 .134 .174 .127 .156 Table 1: Performance (in UUAS and DSpr.) of the structural probe trained on the following cross-lingual sources of data: the evaluation language (IN-LANG); the single other best language (SINGLETRAN); all other languages (HOLDOUT); and all languages, including the evaluation language (ALLLANGS). Note that all improvements over baseline (∆BASELINE) are reported against the stronger of our two baselines per-language. 1 2 4 8 16 32 64 128 256 512 Probe Maximum Rank 0.2 0.4 0.6 0.8 UUAS en fr es fi zh Figure 3: Parse distance tree reconstruction accuracy (UUAS) for selected languages at layer 7 when the linear transformation is constrained to varying maximum dimensionality. no further gains, implying that the syntactic subspace is a small part of the overall mBERT representation, which has dimension 768 (Figure 3). These results closely correspond to the results found by Hewitt and Manning (2019) for an equivalently sized monolingual English model trained and evaluated on the Penn Treebank (Marcus et al., 1993), suggesting that mBERT behaves similarly to monolingual BERT in representing syntax. 1 2 3 4 5 6 7 8 9 10 11 12 Hidden Layer Index 0.5 0.6 0.7 0.8 UUAS. en de fi zh Figure 4: Parse distance tree reconstruction accuracy (UUAS) on layers 1–12 for selected languages, with probe maximum rank 128. 4 Cross-Lingual Probing 4.1 Transfer Experiments We now evaluate the extent to which Multilingual BERT’s syntactic subspaces are similar across languages. To do this, we evaluate the performance of a structural probe when evaluated on a language unseen at training time. If a probe trained to predict syntax from representations in language i also predicts syntax in language j, this is evidence that mBERT’s syntactic subspace for language i also encodes syntax in language j, and thus that syntax 5568 is encoded similarly between the two languages. Specifically, we evaluate the performance of the structural probe in the following contexts: • Direct transfer, where we train on language i and evaluate on language j. • Hold-one-out transfer, where we train on all languages other than j and evaluate on language j. 4.2 Joint Syntactic Subspace Building off these cross-lingual transfer experiments, we investigate whether there exists a single joint syntactic subspace that encodes syntax in all languages, and if so, the degree to which it does so. To do so, we train a probe on the concatenation of data from all languages, evaluating it on the concatenation of validation data from all languages. 4.3 Results We find that mBERT’s syntactic subspaces are transferable across all of the languages we examine. Specifically, transfer from the best source language (chosen post hoc per-language) achieves on average an improvement of 14 points UUAS and 0.128 DSpr. over the best baseline (Table 1, section SINGLETRAN).10 Additionally, our results demonstrate the existence of a cross-lingual syntactic subspace; on average, a holdout subspace trained on all languages but the evaluation language achieves an improvement of 16 points UUAS and 0.137 DSpr. over baseline, while a joint ALLLANGS subspace trained on a concatenation of data from all source languages achieves an improvement of 19 points UUAS and 0.156 DSpr. (Table 1, section HOLDOUT, ALLLANGS). Furthermore, for most languages, syntactic information embedded in the post hoc best crosslingual subspace accounts for 62.3% of the total possible improvement in UUAS (73.1% DSpr.) in recovering syntactic trees over the baseline (as represented by in-language supervision). Holdout transfer represents on average 70.5% of improvement in UUAS (79% DSpr.) over the best baseline, while evaluating on a joint syntactic subspace accounts for 88% of improvement in UUAS (89% DSpr.). These results demonstrate the degree to which the cross-lingual syntactic space represents syntax cross-lingually. 10For full results, consult Appendix Table 1. 4.4 Subspace Similarity Our experiments attempt to evaluate syntactic overlap through zero-shot evaluation of structural probes. In an effort to measure more directly the degree to which the syntactic subspaces of mBERT overlap, we calculate the average principal angle11 between the subspaces parametrized by each language we evaluate, to test the hypothesis that syntactic subspaces which are closer in angle have closer syntactic properties (Table 4). We evaluate this hypothesis by asking whether closer subspaces (as measured by lower average principal angle) correlate with better cross-lingual transfer performance. For each language i, we first compute an ordering of all other languages j by increasing probing transfer performance trained on j and evaluated on i. We then compute the Spearman correlation between this ordering and the ordering given by decreasing subspace angle. Averaged across all languages, the Spearman correlation is 0.78 with UUAS, and 0.82 with DSpr., showing that transfer probe performance is substantially correlated with subspace similarity. 4.5 Extrapolation Testing To get a finer-grained understanding of how syntax is shared cross-lingually, we aim to understand whether less common syntactic features are embedded in the same cross-lingual space as syntactic features common to all languages. To this end, we examine two syntactic relations—prenominal and postnominal adjectives—which appear in some of our languages but not others. We train syntactic probes to learn a subspace on languages that primarily only use one ordering (i.e. majority class is greater than 95% of all adjectives), then evaluate their UUAS score solely on adjectives of the other ordering. Specifically, we evaluate on French, which has a mix (69.8% prenominal) of both orderings, in the hope that evaluating both orderings in the same language may help correct for biases in pairwise language similarity. Since the evaluation ordering is out-of-domain for the probe, predicting evaluation-order dependencies successfully suggests that the learned subspace is capable of generalizing between both kinds of adjectives. We find that for both categories of languages, accuracy does not differ significantly on either prenominal or postnominal adjectives. Specifi11https://docs.scipy.org/doc/scipy/reference/ generated/scipy.linalg.subspace angles.html 5569 Language Prenom. Postnom. % data prenom. de 0.932 0.900 100.0% zh 0.801 0.826 100.0% lv 0.752 0.811 99.7% en 0.906 0.898 99.1% fi 0.834 0.840 98.5% cz 0.830 0.894 95.4% fa 0.873 0.882 9.6% id 0.891 0.893 4.9% ar 0.834 0.870 0.1% Average pre: 0.843 0.862 Average post: 0.866 0.881 Table 2: Performance of syntactic spaces trained on various languages on recovering prenominal and postnominal French noun–adjective edges. cally, for both primarily-prenominal and primarilypostnominal training languages, postnominal adjectives score on average approximately 2 points better than prenominal adjectives (Table 2). 5 mBERT Dependency Clusters Capture Universal Grammatical Relations 5.1 Methodology Given the previous evidence that mBERT shares syntactic representations cross-lingually, we aim to more qualitatively examine the nature of syntactic dependencies in syntactic subspaces. Let D be a dataset of parsed sentences, and the linear transformation B ∈Rk×m define a k-dimensional syntactic subspace. For every non-root word and hence syntactic dependency in D (since every word is a dependent of some other word or an added ROOT symbol), we calculate the k-dimensional head-dependent vector between the head and the dependent after projection by B. Specifically, for all head-dependent pairs (whead, wdep), we compute vdiff = B(hhead −hdep). We then visualize all differences over all sentences in two dimensions using t-SNE (van der Maaten and Hinton, 2008). 5.2 Experiments As with multilingual probing, one can visualize head-dependent vectors in several ways; we present the following experiments: • dependencies from one language, projected into a different language’s space (Figure 1) • dependencies from one language, projected into a holdout syntactic space trained on all other languages (Figure 5) Figure 5: t-SNE visualization of syntactic differences in Spanish projected into a holdout subspace (learned by a probe trained to recover syntax trees in languages other than Spanish). Despite never seeing a Spanish sentence during probe training, the subspace captures a surprisingly fine-grained view of Spanish dependencies. • dependencies from all languages, projected into a joint syntactic space trained on all languages (Figure 6) For all these experiments, we project into 32dimensional syntactic spaces.12 Additionally, we expose a web interface for visualization in our GitHub repository.13 5.3 Results When projected into a syntactic subspace determined by a structural probe, we find that difference vectors separate into clusters reflecting linguistic characteristics of the dependencies. The cluster identities largely overlap with (but do not exactly agree with) dependency labels as defined by Universal Dependencies (Figure 6). Additionally, the clusters found by mBERT are highly multilingual. When dependencies from several languages are projected into the same syntactic subspace, whether trained monolingually or cross-lingually, we find that dependencies of the same label share the same cluster (e.g. Figure 1, which presents both English 12We reduce the dimensionality of the subspaces here as compared to our previous experiments to match t-SNE suggestions and more aggressively filter non-syntactic information. 13https://github.com/ethanachi/ multilingual-probing-visualization/blob/master/ visualization.md 5570 Example sentences (trimmed for clarity). Heads in bold; dependents in bold italic. (b) Postnominal adjectives fr Le gaz d´eveloppe ses applications domestiques. id Film lain yang menerima penghargaan istimewa. fa ÐA g IÒJ¯ Õæ ¢ JK PX ¹Kð@ ¨ áJªKAÒJÖޕ (c) Genitives en The assortment of customers adds entertainment. es Con la recuperaci´on de la democracia y las libertades lv Sveˇsiniece piec¯el¯as, atvad¯ıj¯as no vec¯a v¯ıra (j) Definite articles en The value of the highest bid fr Merak est une ville d’Indon´esie sur la cˆote occidentale. de Selbst mitten in der Woche war das Lokal gut besucht. Figure 6: t-SNE visualization of 100,000 syntactic difference vectors projected into the cross-lingual syntactic subspace of Multilingual BERT. We exclude punct and visualize the top 11 dependencies remaining, which are collectively responsible for 79.36% of the dependencies in our dataset. Clusters of interest highlighted in yellow; linguistically interesting clusters labeled. and French syntactic difference vectors projected into an English subspace). 5.4 Finer-Grained Analysis Visualizing syntactic differences in the syntactic space provides a surprisingly nuanced view of the native distinctions made by mBERT. In Figure 6, these differences are colored by gold UD dependency labels. A brief summary is as follows: Adjectives Universal Dependencies categorizes all adjectival noun modifiers under the amod relation. However, we find that mBERT splits adjectives into two groups: prenominal adjectives in cluster (b) (e.g., Chinese 獨 獨 獨特 特 特的 的 的地理) and postnominal adjectives in cluster (u) (e.g., French applications domestiques). Nominal arguments mBERT maintains the UD distinction between subject (nsubj) and object (obj). Indirect objects (iobj) cluster with direct objects. Interestingly, mBERT generally groups adjunct arguments (obl) with nsubj if near the beginning of a sentence and obj otherwise. Relative clauses In the languages in our dataset, there are two major ways of forming relative clauses. Relative pronouns (e.g., English the man who is hungry are classed by Universal Dependencies as being an nsubj dependent, while subordinating markers (e.g., English I know that she saw me) are classed as the dependent of a mark relation. However, mBERT groups both of these relations together, clustering them distinctly from most nsubj and mark relations. 5571 Negatives Negative adverbial modifiers (English not, Farsi Q «, Chinese 不) are not clustered with other adverbial syntactic relations (advmod), but form their own group (h).14 Determiners The linguistic category of determiners (det) is split into definite articles (i), indefinite articles (e), possessives (f), and demonstratives (g). Sentence-initial definite articles (k) cluster separately from other definite articles (j). Expletive subjects Just as in UD, with the separate relation expl, expletive subjects, or thirdperson pronouns with no syntactic meaning (e.g. English It is cold, French Il faudrait, Indonesian Yang menjadi masalah kemudian), cluster separately (k) from other nsubj relations (small cluster in the bottom left). Overall, mBERT draws slightly different distinctions from Universal Dependencies. Although some are more fine-grained than UD, others appear to be more influenced by word order, separating relations that most linguists would group together. Still others are valid linguistic distinctions not distinguished by the UD standard. 5.5 Discussion Previous work has found that it is possible to recover dependency labels from mBERT embeddings, in the form of very high accuracy on dependency label probes (Liu et al., 2019; Tenney et al., 2019b). However, although we know that dependency label probes are able to use supervision to map from mBERT’s representations to UD dependency labels, this does not provide full insight into the nature of (or existence of) latent dependency label structure in mBERT. By contrast, in the structural probe, B is optimized such that ∥vdiff∥2 ≈1, but no supervision as to dependency label is given. The contribution of our method is thus to provide a view into mBERT’s “own” dependency label representation. In Appendix A, Figure 8, we provide a similar visualization as applied to MBERTRAND, finding much less cluster coherence. 5.6 Probing as a window into representations Our head-dependent vector visualization uses a supervised probe, but its objects of study are properties of the representation other than those relating to the probe supervision signal. Because the probe 14Stanford Dependencies and Universal Dependencies v1 had a separate neg dependency, but it was eliminated in UDv2. never sees supervision on the task we visualize for, the visualized behavior cannot be the result of the probe memorizing the task, a problem in probing methodology (Hewitt and Liang, 2019). Instead, it is an example of using probe supervision to focus in on aspects that may be drowned out in the original representation. However, the probe’s linear transformation may not pick up on aspects that are of causal influence to the model. 6 Related Work Cross-lingual embedding alignment Lample et al. (2018) find that independently trained monolingual word embedding spaces in ELMo are isometric under rotation. Similarly, Schuster et al. (2019) and Wang et al. (2019) geometrically align contextualized word embeddings trained independently. Wu et al. (2019) find that cross-lingual transfer in mBERT is possible even without shared vocabulary tokens, which they attribute to this isometricity. In concurrent work, Cao et al. (2020) demonstrate that mBERT embeddings of similar words in similar sentences across languages are approximately aligned already, suggesting that mBERT also aligns semantics across languages. K et al. (2020) demonstrate that strong cross-lingual transfer is possible without any word piece overlap at all. Analysis with the structural probe In a monolingual study, Reif et al. (2019) also use the structural probe of Hewitt and Manning (2019) as a tool for understanding the syntax of BERT. They plot the words of individual sentences in a 2dimensional PCA projection of the structural probe distances, for a geometric visualization of individual syntax trees. Further, they find that distances in the mBERT space separate clusters of word senses for the same word type. Understanding representations Pires et al. (2019) find that cross-lingual BERT representations share a common subspace representing useful linguistic information. Libovick`y et al. (2019) find that mBERT representations are composed of a language-specific component and a languageneutral component. Both Libovick`y et al. (2019) and Kudugunta et al. (2019) perform SVCCA on LM representations extracted from mBERT and a massively multilingual transformer-based NMT model, finding language family-like clusters. 5572 Li and Eisner (2019) present a study in syntactically motivated dimensionality reduction; they find that after being passed through an information bottleneck and dimensionality reduction via t-SNE, ELMo representations cluster naturally by UD part of speech tags. Unlike our syntactic dimensionality reduction process, the information bottleneck is directly supervised on POS tags, whereas our process receives no linguistic supervision other than unlabeled tree structure. In addition, the reduction process, a feed-forward neural network, is more complex than our linear transformation. Singh et al. (2019) evaluate the similarity of mBERT representations using Canonical Correlation Analysis (CCA), finding that overlap among subword tokens accounts for much of the representational similarity of mBERT. However, they analyze cross-lingual overlap across all components of the mBERT representation, whereas we evaluate solely the overlap of syntactic subspaces. Since syntactic subspaces are at most a small part of the total BERT space, these are not necessarily mutually contradictory with our results. In concurrent work, Michael et al. (2020) also extend probing methodology, extracting latent ontologies from contextual representations without direct supervision. 7 Discussion Language models trained on large amounts of text have been shown to develop surprising emergent properties; of particular interest is the emergence of non-trivial, easily accessible linguistic properties seemingly far removed from the training objective. For example, it would be a reasonable strategy for mBERT to share little representation space between languages, effectively learning a private model for each language and avoiding destructive interference. Instead, our transfer experiments provide evidence that at a syntactic level, mBERT shares portions of its representation space between languages. Perhaps more surprisingly, we find evidence for fine-grained, cross-lingual syntactic distinctions in these representations. Even though our method for identifying these distinctions lacks dependency label supervision, we still identify that mBERT has a cross-linguistic clustering of grammatical relations that qualitatively overlaps considerably with the Universal Dependencies formalism. The UUAS metric We note that the UUAS metric alone is insufficient for evaluating the accuracy of the structural probe. While the probe is optimized to directly recreate parse distances, (that is, dB(hℓ i, hℓ j) ≈dℓ T (wℓ i, wℓ j)) a perfect UUAS score under the minimum spanning tree construction can be achieved by ensuring that dB(hℓ i, hℓ j) is small if there is an edge between wℓ i and wℓ j, and large otherwise, instead of accurately recreating distances between words connected by longer paths. By evaluating Spearman correlation between all pairs of words, one directly evaluates the extent to which the ordering of words j by distance to each word i is correctly predicted, a key notion of the geometric interpretation of the structural probe. See Maudslay et al. (2020) for further discussion. Limitations Our methods are unable to tease apart, for all pairs of languages, whether transfer performance is caused by subword overlap (Singh et al., 2019) or by a more fundamental sharing of parameters, though we do note that language pairs with minimal subword overlap do exhibit nonzero transfer, both in our experiments and in others (K et al., 2020). Moreover, while we quantitatively evaluate cross-lingual transfer in recovering dependency distances, we only conduct a qualitative study in the unsupervised emergence of dependency labels via t-SNE. Future work could extend this analysis to include quantitative results on the extent of agreement with UD. We acknowledge as well issues in interpreting t-SNE plots (Wattenberg et al., 2016), and include multiple plots with various hyperparameter settings to hedge against this confounder in Figure 11. Future work should explore other multilingual models like XLM and XLM-RoBERTa (Lample and Conneau, 2019) and attempt to come to an understanding of the extent to which the properties we’ve discovered have causal implications for the decisions made by the model, a claim our methods cannot support. 8 Acknowledgements We would like to thank Erik Jones, Sebastian Schuster, and Chris Donahue for helpful feedback and suggestions. We would also like to thank the anonymous reviewers and area chair Adina Williams for their helpful comments on our draft. 5573 References Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations. In International Conference on Learning Representations. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. Alexis Conneau, German Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2126– 2136. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 7059– 7069. Curran Associates, Inc. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual bert: An empirical study. In International Conference on Learning Representations. Sneha Kudugunta, Ankur Bapna, Isaac Caswell, and Orhan Firat. 2019. Investigating multilingual NMT representations at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In International Conference on Learning Representations. Xiang Lisa Li and Jason Eisner. 2019. Specializing word embeddings (for parsing) by information bottleneck. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Jindˇrich Libovick`y, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual bert? arXiv preprint arXiv:1911.03310. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Rowan Hall Maudslay, Josef Valvoda, Tiago Pimentel, Adina Williams, and Ryan Cotterell. 2020. A tale of a probe and a parser. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Julian Michael, Jan A. Botha, and Ian Tenney. 2020. Asking without telling: Exploring latent ontologies in contextual representations. arXiv preprint arXiv:2004.14513. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajiˇc, Christopher D Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of bert. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alch´e Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8592–8600. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing. arXiv preprint arXiv:1902.09492. 5574 Jasdeep Singh, Bryan McCann, Richard Socher, and Caiming Xiong. 2019. BERT is not an interlingua and the bias of tokenization. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. L.J.P. van der Maaten and G.E. Hinton. 2008. Visualizing high-dimensional data using t-sne. Journal of Machine Learning Research, 9:2579–2605. Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, and Ting Liu. 2019. Cross-lingual BERT transformation for zero-shot dependency parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Martin Wattenberg, Fernanda Vi´egas, and Ian Johnson. 2016. How to use t-sne effectively. Distill, 1(10):e2. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1112–1122. Association for Computational Linguistics. Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Emerging cross-lingual structure in pretrained language models. arXiv preprint arXiv:1911.01464. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844. Daniel Zeman, Jan Hajiˇc, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–21. A Additional Syntactic Difference Visualizations A.1 Visualization of All Relations In our t-SNE visualization of syntactic difference vectors projected into the cross-lingual syntactic subspace of Multilingual BERT (Figure 6), we only visualize the top 11 relations, excluding punct. This represents 79.36% of the dependencies in our dataset. In Figure 7, we visualize all 36 relations in the dataset. Figure 7: t-SNE visualization of dependency headdependent pairs projected into the cross-lingual syntactic subspace of Multilingual BERT. Colors correspond to gold UD dependency type labels, which are unlabeled given that there are 43 in this visualization. A.2 Visualization with Randomly-Initialized Baseline In Figure 8, we present a visualization akin to Figure 1; however, both the head-dependency representations, as well as the syntactic subspace, are derived from MBERTRAND. Clusters around the edges of the figure are primarily type-based (e.g. one cluster for the word for and another for pour), and there is insignificant overlap between clusters with parallel syntactic functions from different languages. B Alternative Dimensionality Reduction Strategies In an effort to confirm the level of clarity of the clusters of dependency types which emerge from 5575 Figure 8: t-SNE visualization of head-dependent dependency pairs belonging to selected dependencies in English and French, projected into a syntactic subspace of MBERTRAND, as learned on English syntax trees. Colors correspond to gold UD dependency type labels. syntactic difference vectors, we examine simpler strategies for dimensionality reduction. B.1 PCA for Visualization Reduction We project difference vectors as previously into a 32-dimensional syntactic subspace. However, we visualize in 2 dimensions using PCA instead of t-SNE. There are no significant trends evident. Figure 9: Syntactic difference vectors visualized after dimensionality reduction with PCA, instead of t-SNE, colored by UD dependency types. There are no significant trends evident. B.2 PCA for Dimensionality Reduction Instead of projecting difference vectors into our syntactic subspace, we first reduce them to a 32dimensional representation using PCA,15 then reduce to 2 dimensions using t-SNE as previously. We find that projected under PCA, syntactic difference vectors still cluster into major groups, and major trends are still evident (Figure 10). In addition, many finer-grained distinctions are still apparent (e.g. the division between common nouns and pronouns). However, in some cases, the clusters are motivated less by syntax and more by semantics or language identities. For example: • The nsubj and obj clusters overlap, unlike our syntactically-projected visualization, where there is clearer separation. • Postnominal adjectives, which form a single coherent cluster under our original visualization scheme, are split into several different clusters, each primarily composed of words from one specific language. • There are several small monolingual clusters without any common syntactic meaning, mainly composed of languages parsed more poorly by BERT (i.e. Chinese, Arabic, Farsi, Indonesian). Figure 10: t-SNE visualization of syntactic differences in all languages we study, projected to 32 dimensions using PCA. C Additional Experiment Settings C.1 Pairwise Transfer We present full pairwise transfer results in Table 3. Each experiment was run 3 times with different random seeds; experiment settings with range in 15This is of equal dimensionality to our syntactic subspace. 5576 Structural Probe Results: Undirected Unlabeled Attachment Score (UUAS) Tgt \Src ar cz de en es fa fi fr id lv zh linear rand holdout all ar 72.7 68.6 66.6 65.3 67.5 64.0 60.8 68.1 65.3 60.1 53.4 57.1 49.8 70.4 72.0 cz 57.5* 83.6 74.7 72.6 71.1 63.5 68.9 71.5 62.4 71.0 58.0 45.4 57.3 77.8 82.5 de 49.3 70.2 83.5 70.8 68.2 58.7 61.1 70.6 56.9* 62.0 52.0* 42.8 55.2 75.1 79.6 en 47.2 61.2 65.0 79.8 63.9 50.8 55.3 65.4 54.5 54.0 50.5 41.5 57.4 68.9 75.9 es 52.0 67.2 69.8 69.4 79.7 56.9 56.8 75.8 61.0 55.6 49.2 44.6 55.3 75.5 77.6 fa 51.7 61.3 60.3 57.0 57.8 70.8 53.7 59.7 56.5 53.1 49.7 52.6 43.2 63.3 68.2 fi 55.5 69.8 68.4 66.6 66.0 60.2 76.5 66.0 61.2 68.2 59.2 50.1 54.9 70.7 73.0 fr 50.8* 67.8 73.0 70.0 74.3 56.9 55.9 84.0 60.9 55.1 49.6 46.4 61.2 76.4 80.3 id 57.1 66.3 67.4 63.6 67.0 61.0 59.2 69.0 74.8 57.5 54.6 55.2 53.2 70.8 73.1 lv 56.9* 73.2 69.2 69.1 67.0 61.5 70.8 66.7 61.1 77.0 60.7 47.0 53.0 73.7 75.1 zh 41.2* 49.7 49.6 51.1 47.3 42.7* 48.1 47.9 44.5* 47.2 65.7 44.2 41.1 51.3 57.8 Structural Probe Results: Distance Spearman Correlation (DSpr.) Tgt \Src ar cz de en es fa fi fr id lv zh linear rand holdout all ar .822 .772 .746 .744 .774 .730 .723 .770 .750 .722 .640 .573 .657 .779 .795 cz .730 .845 .799 .781 .801 .741 .782 .796 .745 .791 .656 .570 .658 .821 .839 de .690 .807 .846 .792 .792 .736 .767 .796 .723 .765 .652* .533 .672 .824 .836 en .687 .765 .764 .817 .770 .696 .732 .773 .720 .725 .655 .567 .659 .788 .806 es .745 .821 .812 .806 .859 .741 .775 .838 .777 .774 .669 .589 .693 .838 .848 fa .661 .732 .724 .706 .705 .813 .683 .714 .686 .684 .629 .489 .611 .744 .777 fi .682* .787 .771 .756 .764 .712 .812 .762 .715 .781 .658 .564 .621 .792 .802 fr .731* .810 .816 .806 .836 .738 .767 .864 .776 .760 .674 .598 .710 .840 .853 id .715 .757 .752 .739 .765 .718 .714 .772 .807 .704 .657 .578 .656 .776 .789 lv .681 .771 .746 .737 .745 .699 .763 .740 .698 .798 .644 .543 .608 .775 .783 zh .538* .655 .644 .644 .633 .593* .652 .638 .584* .639 .777 .493 .590 .664 .717 Table 3: Performance (in UUAS and DSpr.) on transfer between all language pairs in our dataset. All runs were repeated 3 times; runs for which the range in performance exceeded 2 points (for UUAS) or 0.02 (for DSpr.) are marked with an asterisk (*). ar cz de en es fa fi fr id lv zh ar 0.000 1.044 1.048 1.049 1.015 1.046 1.058 1.022 1.031 1.059 1.076 cz 1.044 0.000 0.982 1.017 0.970 1.064 1.021 1.007 1.053 1.011 1.083 de 1.048 0.982 0.000 1.005 0.973 1.044 1.017 0.971 1.022 1.029 1.065 en 1.049 1.017 1.005 0.000 0.983 1.051 1.033 0.994 1.035 1.040 1.060 es 1.015 0.970 0.973 0.983 0.000 1.038 1.023 0.936 1.010 1.024 1.065 fa 1.046 1.064 1.044 1.051 1.038 0.000 1.060 1.028 1.040 1.063 1.069 fi 1.058 1.021 1.017 1.033 1.023 1.060 0.000 1.020 1.042 1.011 1.058 fr 1.022 1.007 0.971 0.994 0.936 1.028 1.020 0.000 0.993 1.028 1.041 id 1.031 1.053 1.022 1.035 1.010 1.040 1.042 0.993 0.000 1.051 1.052 lv 1.059 1.011 1.029 1.040 1.024 1.063 1.011 1.028 1.051 0.000 1.068 zh 1.076 1.083 1.065 1.060 1.065 1.069 1.058 1.041 1.052 1.068 0.000 Table 4: Subspace angle overlap as evaluated by the pairwise mean principal angle between subspaces UUAS greater than 2 points are labeled with an asterisk (*). C.2 Subspace Overlap Table 4 presents the average principal angle between the subspaces parametrized by each language we evaluate. Table 5 contains the perlanguage Spearman correlation between the ordering given by (negative) subspace angle and structural probe transfer accuracy, reported both on UUAS and DSpr. D Data Sources We use the following UD corpora in our experiments: Arabic-PADT, Chinese-GSD, CzechPDT, English-EWT, Finnish-TDT, French-GSD, German-GSD, Indonesian-GSD, Latvian-LVTB, Persian-Seraji, and Spanish-Ancora. E t-SNE reproducibility Previous work (Wattenberg et al., 2016) has investigated issues in the interpretability of tSNE plots. Given the qualitative nature of our experiments, to avoid this confounder, we include multiple plots with various settings of the perplexity hyperparameter in Figure 11. 5577 Language ar cz de en es fa fi fr id lv zh Spearman Correl. (UUAS) 0.88 0.85 0.87 0.91 0.91 0.48 0.85 0.89 0.71 0.90 0.41 Spearman Correl. (DSpr.) 0.95 0.96 0.95 0.96 0.97 0.50 0.90 0.93 0.72 0.94 0.23 Table 5: The Spearman correlation between two orderings of all languages for each language i. The first ordering of languages is given by (negative) subspace angle between the B matrix of language i and that of all languages. The second ordering is given by the structural probe transfer accuracy from all languages (including i) to i. This is repeated for each of the two structural probe evaluation metrics. Figure 11: t-SNE visualization of head-dependent dependency pairs belonging to selected dependencies in English and French, projected into a syntactic subspace of Multilingual BERT, as learned on English syntax trees. Colors correspond to gold UD dependency type labels, as in Figure 1, varying the perplexity (PPL) t-SNE hyperparmeter. From left to right, figures correspond to PPL 5, 10, 30, 50, spanning the range of PPL suggested by van der Maaten and Hinton (2008). Cross-lingual dependency label clusters are exhibited across all four figures.
2020
493
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5578–5593 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5578 Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection Hanjie Chen, Guangtao Zheng, Yangfeng Ji Department of Computer Science University of Virginia Charlottesville, VA, USA {hc9mx, gz5hp, yangfeng}@virginia.edu Abstract Generating explanations for neural networks has become crucial for their applications in real-world with respect to reliability and trustworthiness. In natural language processing, existing methods usually provide important features which are words or phrases selected from an input text as an explanation, but ignore the interactions between them. It poses challenges for humans to interpret an explanation and connect it to model prediction. In this work, we build hierarchical explanations by detecting feature interactions. Such explanations visualize how words and phrases are combined at different levels of the hierarchy, which can help users understand the decision-making of blackbox models. The proposed method is evaluated with three neural text classifiers (LSTM, CNN, and BERT) on two benchmark datasets, via both automatic and human evaluations. Experiments show the effectiveness of the proposed method in providing explanations that are both faithful to models and interpretable to humans. 1 Introduction Deep neural networks have achieved remarkable performance in natural language processing (NLP) (Devlin et al., 2018; Howard and Ruder, 2018; Peters et al., 2018), but the lack of understanding on their decision making leads them to be characterized as blackbox models and increases the risk of applying them in real-world applications (Lipton, 2016; Burns et al., 2018; Jumelet and Hupkes, 2018; Jacovi et al., 2018). Understanding model prediction behaviors has been a critical factor in whether people will trust and use these blackbox models (Ribeiro et al., 2016). A typical work on understanding decisionmaking is to generate prediction explanations for each input example, called local explanation generation. In NLP, most of existing work on local explanation generation focuses on producing wordlevel or phrase-level explanations by quantifying contributions of individual words or phrases to a model prediction (Ribeiro et al., 2016; Lundberg and Lee, 2017; Lei et al., 2016; Plumb et al., 2018). Figure 1: Different explanations for a NEGATIVE movie review a waste of good performance, where the color of each block represents the contribution of the corresponding word/phrase/clause (feature) to the model prediction. From the hierarchical explanation, we obtain a set of features in each timestep (t), where the most important one is waste of good. Figure 1 (a) and (b) present a word-level and a phrase-level explanation generated by the LIME (Ribeiro et al., 2016) and the Contextual Decomposition (CD) (Murdoch et al., 2018) respectively for explaining sentiment classification. Both explanations provide scores to quantify how a word or a phrase contributes to the final prediction. For example, the explanation generated by LIME captures a keyword waste and the explanation from CD identifies an important phrase waste of. 5579 However, neither of them is able to explain the model decision-making in terms of how words and phrases are interacted with each other and composed together for the final prediction. In this example, since the final prediction is NEGATIVE, one question that we could ask is that how the word good or a phrase related to the word good contributes to the model prediction. An explanation being able to answer this question will give users a better understanding on the model decision-making and also more confidence to trust the prediction. The goal of this work is to reveal prediction behaviors of a text classifier by detecting feature (e.g., words or phrases) interactions with respect to model predictions. For a given text, we propose a model-agnostic approach, called HEDGE (for Hierarchical Explanation via Divisive Generation), to build hierarchical explanations by recursively detecting the weakest interactions and then dividing large text spans into smaller ones based on the interactions. As shown in Figure 1 (c), the hierarchical structure produced by HEDGE provides a comprehensive picture of how different granularity of features interacting with each other within the model. For example, it shows how the word good is dominated by others in the model prediction, which eventually leads to the correct prediction. Furthermore, the scores of text spans across the whole hierarchy also help identify the most important feature waste of good, which can be served as a phrase-level explanation for the model prediction. The contribution of this work is three-fold: (1) we design a top-down model-agnostic method of constructing hierarchical explanations via feature interaction detection; (2) we propose a simple and effective scoring function to quantify feature contributions with respect to model predictions; and (3) we compare the proposed algorithm with several competitive methods on explanation generation via both automatic and human evaluations. The experiments were conducted on sentiment classification tasks with three neural network models, LSTM (Hochreiter and Schmidhuber, 1997), CNN (Kim, 2014), and BERT (Devlin et al., 2018), on the SST (Socher et al., 2013) and IMDB (Maas et al., 2011) datasets. The comparison with other competitive methods illustrates that HEDGE provides more faithful and human-understandable explanations. Our implementation is available at https:// github.com/UVa-NLP/HEDGE. 2 Related Work Over the past years, many approaches have been explored to interpret neural networks, such as contextual decomposition (CD) for LSTM (Murdoch et al., 2018) or CNN model (Godin et al., 2018), gradient-based interpretation methods (Hechtlinger, 2016; Sundararajan et al., 2017), and attention-based methods (Ghaeini et al., 2018; Lee et al., 2017; Serrano and Smith, 2019). However, these methods have limited capacity in realworld applications, as they require deep understanding of neural network architectures (Murdoch et al., 2018) or only work with specific models (Alvarez-Melis and Jaakkola, 2018). On the other hand, model-agnostic methods (Ribeiro et al., 2016; Lundberg and Lee, 2017) generate explanations solely based on model predictions and are applicable for any black-box models. In this work, we mainly focus on model-agnostic explanations. 2.1 Model-Agnostic Explanations The core of generating model-agnostic explanations is how to efficiently evaluate the importance of features with respect to the prediction. So far, most of existing work on model-agnostic explanations focus on the word level. For example, Li et al. (2016) proposed Leave-one-out to probe the black-box model by observing the probability change on the predicted class when erasing a certain word. LIME proposed by Ribeiro et al. (2016) estimates individual word contribution locally by linear approximation from perturbed examples. A line of relevant works to ours is Shapleybased methods, where the variants of Shapley values (Shapley, 1953) are used to evaluate feature importance, such as SampleShapley (Kononenko et al., 2010), KernelSHAP (Lundberg and Lee, 2017), and L/C-Shapley (Chen et al., 2018). They are still in the category of generating word-level explanations, while mainly focus on addressing the challenge of computational complexity of Shapley values (Datta et al., 2016). In this work, inspired by an extension of Shapley values (Owen, 1972; Grabisch, 1997; Fujimoto et al., 2006), we design a function to detect feature interactions for building hierarchical model-agnostic explanations in subsection 3.1. While, different from prior work of using Shapley values for feature importance evaluation, we propose an effective and simpler way to 5580 evaluate feature importance as described in subsection 3.3, which outperforms Shapley-based methods in selecting important words as explanations in subsection 4.2. 2.2 Hierarchical Explanations Addressing the limitation of word-level explanations (as discussed in section 1) has motivated the work on generating phrase-level or hierarchical explanations. For example, Tsang et al. (2018) generated hierarchical explanations by considering the interactions between any features with exhaustive search, which is computationally expensive. Singh et al. (2019) proposed agglomerative contextual decomposition (ACD) which utilizes CD scores (Murdoch et al., 2018; Godin et al., 2018) for feature importance evaluation and employ a hierarchical clustering algorithm to aggregate features together for hierarchical explanation. Furthermore, Jin et al. (2019) indicated the limitations of CD and ACD in calculating phrase interactions in a formal context, and proposed two explanation algorithms by quantifying context independent importance of words and phrases. A major component of the proposed method on feature interaction detection is based on the Shapley interaction index (Owen, 1972; Grabisch, 1997; Fujimoto et al., 2006), which is extended in this work to capture the interactions in a hierarchical structure. Lundberg et al. (2018) calculated features interactions via SHAP interaction values along a given tree structure. Chen and Jordan (2019) suggested to utilize a linguistic tree structure to capture the contributions beyond individual features for text classification. The difference with our work is that both methods (Lundberg et al., 2018; Chen and Jordan, 2019) require hierarchical structures given, while our method constructs structures solely based on feature interaction detection without resorting external structural information. In addition, different from Singh et al. (2019), our algorithm uses a top-down fashion to divide long texts into short phrases and words based on the weakest interactions, which is shown to be more effective and efficient in the experiments in section 4. 3 Method This section explains the proposed algorithm on building hierarchical explanations (subsection 3.1) and two critical components of this algorithm: detecting feature interaction (subsection 3.2) and quantifying feature importance (subsection 3.3). Algorithm 1 Hierarchical Explanation via Divisive Generation 1: Input: text x with length n, and predicted label ˆy 2: Initialize the original partition P0 ←{x(0,n]} 3: Initialize the contribution set C0 = ∅ 4: Initialize the hierarchy H = [P0] 5: for t = 1, . . . , n −1 do 6: Find x(si,si+1] and j by solving Equation 1 7: Update the partition P′ t ←Pt−1\{x(si,si+1]} Pt ←P′ t ∪{x(si,j], x(j,si+1]} 8: H.add(Pt) 9: Update the contribution set C with C′ t ←Ct−1 ∪{(x(si,j], ψ(x(si,j]))} Ct ←C′ t ∪{(x(j,si+1], ψ(x(j,si+1]))} 10: end for 11: Output: Cn−1, H 3.1 Generating Hierarchical Explanations For a classification task, let x = (x1, . . . , xn) denote a text with n words and ˆy be the prediction label from a well-trained model. Furthermore, we define P = {x(0,s1], x(s1,s2], . . . , x(sP −1,n]} be a partition of the word sequence with P text spans, where x(si,si+1] = (xsi+1, . . . , xsi+1). For a given text span x(si,si+1], the basic procedure of HEDGE is to divide it into two smaller text spans x(si,j] and x(j,si+1], where j is the dividing point (si < j < si+1), and then evaluate their contributions to the model prediction ˆy. Algorithm 1 describes the whole procedure of dividing x into different levels of text spans and evaluating the contribution of each of them. Starting from the whole text x, the algorithm first divides x into two segments. In the next iteration, it will pick one of the two segments and further split it into even smaller spans. As shown in algorithm 1, to perform the top-down procedure, we need to answer the questions: for the next timestep, which text span the algorithm should pick to split and where is the dividing point? Both questions can be addressed via the following optimization problem: min x(si,si+1]∈P min j∈(si,si+1) φ(x(si,j], x(j,si+1] | P), (1) where φ(x(si,j], x(j,si+1] | P) defines the interaction score between x(si,j] and x(j,si+1] given the 5581 current partition P. The detail of this score function will be explained in subsection 3.2. For a given x(si,si+1] ∈P, the inner optimization problem will find the weakest interaction point to split the text span x(si,si+1] into two smaller ones. It answers the question about where the dividing point should be for a given text span. A trivial case of the inner optimization problem is on a text span with length 2, since there is only one possible way to divide it. The outer optimization answers the question about which text span should be picked. This optimization problem can be solved by simply enumerating all the elements in a partition P. A special case of the outer optimization problem is at the first iteration t = 1, where P0 = {x(0,n]} only has one element, which is the whole input text. Once the partition is updated, it is then added to the hierarchy H. The last step in each iteration is to evaluate the contributions of the new spans and update the contribution set C as in line 9 of the algorithm 1. For each, the algorithm evaluates its contribution to the model prediction with the feature importance function ψ(·) defined in Equation 5. The final output of algorithm 1 includes the contribution set Cn−1 which contains all the produced text spans in each timestep together with their importance scores, and the hierarchy H which contains all the partitions of x along timesteps. A hierarchical explanation can be built based on Cn−1 and H by visualizing the partitions with all text spans and their importance scores along timesteps, as Figure 1 (c) shows. Note that with the feature interaction function φ(·, ·), we could also design a bottom-up approach to merge two short text spans if they have the strongest interaction. Empirically, we found that this bottom-up approach performs worse than the algorithm 1, as shown in Appendix A. 3.2 Detecting Feature Interaction For a given text span x(si,si+1] ∈ P and the dividing point j, the new partition will be N = P\{x(si,si+1]} ∪{x(si,j], x(j,si+1]} = {x(0,s1], . . . , x(si,j], x(j,si+1], . . . , x(sP −1,n]}. We consider the effects of other text spans in N when calculate the interaction between x(si,j] and x(j,si+1], since the interaction between two words/phrases is closely dependent on the context (Hu et al., 2016; Chen et al., 2016). We adopt the Shapley interaction index from coalition game theory (Owen, 1972; Grabisch, 1997; Fujimoto et al., 2006) to calculate the interaction. For simplicity, we denote x(si,j] and x(j,si+1] as j1 and j2 respectively. The interaction score is defined as (Lundberg et al., 2018), φ(j1,j2 |P)= X S⊆N \{j1,j2} |S|!(P −|S| −1)! P! γ(j1,j2,S), (2) where S represents a subset of text spans in N\{j1, j2}, |S| is the size of S, and γ(j1, j2, S) is defined as follows, γ(j1,j2,S) = E[f(x′)|S ∪{j1,j2}] −E[f(x′)|S ∪{j1}] −E[f(x′) | S ∪{j2}] + E[f(x′) | S], (3) where x′ is the same as x except some missing words that are not covered by the given subset (e.g. S), f(·) denotes the model output probability on the predicted label ˆy, and E[f(x′) | S] is the expectation of f(x′) over all possible x′ given S. In practice, the missing words are usually replaced with a special token <pad>, and f(x′) is calculated to estimate E[f(x′)|S] (Chen et al., 2018; Datta et al., 2016; Lundberg and Lee, 2017). We also adopt this method in our experiments. Another way to estimate the expectation is to replace the missing words with substitute words randomly drawn from the full dataset, and calculate the empirical mean of all the sampling data (Kononenko et al., 2010; ˇStrumbelj and Kononenko, 2014), which has a relatively high computational complexity. With the number of text spans (features) increasing, the exponential number of model evaluations in Equation 2 becomes intractable. We calculate an approximation of the interaction score based on the assumption (Chen et al., 2018; Singh et al., 2019; Jin et al., 2019): a word or phrase usually has strong interactions with its neighbours in a sentence. The computational complexity can be reduced to polynomial by only considering m neighbour text spans of j1 and j2 in N. The interaction score is rewritten as φ(j1,j2 |P)= X S⊆Nm\{j1,j2} |S|!(M −|S| −2)! (M −1)! γ(j1,j2,S), (4) where Nm is the set containing j1, j2 and their neighbours, and M = |Nm|. In section 4, we set m = 2, which performs well. The performance can be further improved by increasing m, but at the cost of increased computational complexity. 5582 3.3 Quantifying Feature Importance To measure the contribution of a feature x(si,si+1] to the model prediction, we define the importance score as ψ(x(si,si+1]) =fˆy(x(si,si+1]) − max y′̸=ˆy,y′∈Y fy′(x(si,si+1]), (5) where fˆy(x(si,si+1]) is the model output on the predicted label ˆy; maxy′̸=ˆy,y′∈Y fy′(x(si,si+1]) is the highest model output among all classes excluding ˆy. This importance score measures how far the prediction on a given feature is to the prediction boundary, hence the confidence of classifying x(si,si+1] into the predicted label ˆy. Particularly in text classification, it can be interpreted as the contribution to a specific class ˆy. The effectiveness of Equation 5 as feature importance score is verified in subsection 4.2, where HEDGE outperforms several competitive baseline methods (e.g. LIME (Ribeiro et al., 2016), SampleShapley (Kononenko et al., 2010)) in identifying important features. 4 Experiments The proposed method is evaluated on text classification tasks with three typical neural network models, a long short-term memories (Hochreiter and Schmidhuber, 1997, LSTM), a convolutional neural network (Kim, 2014, CNN), and BERT (Devlin et al., 2018), on the SST (Socher et al., 2013) and IMDB (Maas et al., 2011) datasets, via both automatic and human evaluations. 4.1 Setup Datasets. We adopt the SST-2 (Socher et al., 2013) which has 6920/872/1821 examples in the train/dev/test sets with binary labels. The IMDB (Maas et al., 2011) also has binary labels with 25000/25000 examples in the train/test sets. We hold out 10% of the training examples as the development set. Models. The CNN model (Kim, 2014) includes a single convolutional layer with filter sizes ranging from 3 to 5. The LSTM (Hochreiter and Schmidhuber, 1997) has a single layer with 300 hidden states. Both models are initialized with 300-dimensional pretrained word embeddings (Mikolov et al., 2013). We use the pretrained BERT model1 with 12 trans1https://github.com/huggingface/ pytorch-transformers former layers, 12 self-attention heads, and the hidden size of 768, which was then fine-tuned with different downstream tasks to achieve the best performance. Table 1 shows the best performance of the models on both datasets in our experiments, where BERT outperforms CNN and LSTM with higher classification accuracy. Models Dataset SST IMDB LSTM 0.842 0.870 CNN 0.850 0.901 BERT 0.924 0.930 Table 1: The classification accuracy of different models on the SST and IMDB datasets. 4.2 Quantitative Evaluation We adopt two metrics from prior work on evaluating word-level explanations: the area over the perturbation curve (AOPC) (Nguyen, 2018; Samek et al., 2016) and the log-odds scores (Shrikumar et al., 2017; Chen et al., 2018), and define a new evaluation metric called cohesion-score to evaluate the interactions between words within a given text span. The first two metrics measure local fidelity by deleting or masking top-scored words and comparing the probability change on the predicted label. They are used to evaluate Equation 5 in quantifying feature contributions to the model prediction. The cohesion-score measures the synergy of words within a text span to the model prediction by shuffling the words to see the probability change on the predicted label. AOPC. By deleting top k% words, AOPC calculates the average change in the prediction probability on the predicted class over all test data as follows, AOPC(k) = 1 N N X i=1 {p(ˆy | xi) −p(ˆy | ˜x(k) i )}, (6) where ˆy is the predicted label, N is the number of examples, p(ˆy | ·) is the probability on the predicted class, and ˜x(k) i is constructed by dropping the k% top-scored words from xi. Higher AOPCs are better, which means that the deleted words are important for model prediction. To compare with other word-level explanation generation methods under this metric, we select word-level features from the bottom level of a hierarchical explanation and sort them in the order of their estimated importance to the prediction. 5583 Datasets Methods LSTM CNN BERT AOPC Log-odds AOPC Log-odds AOPC Log-odds SST Leave-one-out 0.441 -0.443 0.434 -0.448 0.464 -0.723 CD 0.384 -0.382 LIME 0.444 -0.449 0.473 -0.542 0.134 -0.186 L-Shapley 0.431 -0.436 0.425 -0.459 0.435 -0.809 C-Shapley 0.423 -0.425 0.415 -0.446 0.410 -0.754 KernelSHAP 0.360 -0.361 0.387 -0.423 0.411 -0.765 SampleShapley 0.450 -0.454 0.487 -0.550 0.462 -0.836 HEDGE 0.458 -0.466 0.494 -0.567 0.479 -0.862 IMDB Leave-one-out 0.630 -1.409 0.598 -0.806 0.335 -0.849 CD 0.495 -1.190 LIME 0.764 -1.810 0.691 -1.091 0.060 -0.133 L-Shapley 0.637 -1.463 0.623 -0.950 0.347 -1.024 C-Shapley 0.629 -1.427 0.613 -0.928 0.331 -0.973 KernelSHAP 0.542 -1.261 0.464 -0.727 0.223 -0.917 SampleShapley 0.757 -1.597 0.707 -1.108 0.355 -1.037 HEDGE 0.783 -1.873 0.719 -1.144 0.411 -1.126 Table 2: AOPCs and log-odds scores of different interpretation methods in explaining different models on the SST and IMDB datasets. Log-odds. Log-odds score is calculated by averaging the difference of negative logarithmic probabilities on the predicted class over all of the test data before and after masking the top r% features with zero paddings, Log-odds(r) = 1 N N X i=1 log p(ˆy | ˜x(r) i ) p(ˆy | xi) . (7) The notations are the same as in Equation 6 with the only difference that ˜x(r) i is constructed by replacing the top r% word features with the special token ⟨pad⟩in xi. Under this metric, lower log-odds scores are better. Cohesion-score. We propose cohesion-score to justify an important text span identified by HEDGE. Given an important text span x(a,b], we randomly pick a position in the word sequence (x1, . . . , xa, xb+1, . . . , xn) and insert a word back. The process is repeated until a shuffled version of the original sentence ¯x is constructed. The cohesion-score is the difference between p(ˆy | x) and p(ˆy | ¯x). Intuitively, the words in an important text span have strong interactions. By perturbing such interactions, we expect to observe the output probability decreasing. To obtain a robust evaluation, for each sentence xi, we construct Q different word sequences {¯x(q) i }Q q=1 and compute the average as Cohesion-score = 1 N N X i=1 1 Q Q X q=1 (p(ˆy | xi) −p(ˆy | ¯x(q) i )), (8) where ¯x(q) i is the qth perturbed version of xi, Q is set as 100, and the most important text span in the contribution set C is considered. Higher cohesionscores are better. 4.2.1 Results We compare HEDGE with several competitive baselines, namely Leave-one-out (Li et al., 2016), LIME (Ribeiro et al., 2016), CD (Murdoch et al., 2018), Shapley-based methods, (Chen et al., 2018, L/C-Shapley), (Lundberg and Lee, 2017, KernelSHAP), and (Kononenko et al., 2010, SampleShapley), using AOPC and log-odds metrics; and use cohesion-score to compare HEDGE with another hierarchical explanation generation method ACD (Singh et al., 2019). The AOPCs and log-odds scores on different models and datasets are shown in Table 2, where k = r = 20. Additional results of AOPCs and logodds changing with different k and r are shown in Appendix B. For the IMDB dataset, we tested on a subset with 2000 randomly selected samples due to computation costs. HEDGE achieves the best performance on both evaluation metrics. Sam5584 Methods Models Cohesion-score SST IMDB HEDGE CNN 0.016 0.012 BERT 0.124 0.103 LSTM 0.020 0.050 ACD LSTM 0.015 0.038 Table 3: Cohesion scores of HEDGE and ACD in interpreting different models on the SST and IMDB datasets. For ACD, we adopt the existing application from the original paper (Singh et al., 2019) to explain LSTM on text classification. (a) HEDGE for LSTM on the SST. (b) ACD for LSTM on the SST. Figure 2: Compare HEDGE with ACD in interpreting the LSTM model on a negative movie review from the SST dataset, where LSTM makes a wrong prediction (POSITIVE). The importance scores of HEDGE and CD scores are normalized for comparison. pleShapley also achieves a good performance with the number of samples set as 100, but the computational complexity is 200 times than HEDGE. Other variants, L/C-Shapley and KernelSHAP, applying approximations to Shapley values perform worse than SampleShapley and HEDGE. LIME performs comparatively to SampleShapley on the LSTM and CNN models, but is not fully capable of interpreting the deep neural network BERT. The limitation of context decomposition mentioned by Jin et al. (2019) is validated by the worst performance of CD in identifying important words. We also observed an interesting phenomenon that the simplest baseline Leave-one-out can achieve relatively good performance, even better than HEDGE when k and r are small. And we suspect that is because the criteria of Leave-one-out for picking single keywords matches the evaluation metrics. Overall, experimental results demonstrate the effectiveness of Equation 5 in measuring feature importance. And the computational complexity is only O(n), which is much smaller than other baselines (e.g. SampleShapley, and L/C-Shapley with polynomial complexity). Table 3 shows the cohesion-scores of HEDGE and ACD with different models on the SST and IMDB datasets. HEDGE outperforms ACD with LSTM, achieving higher cohesion-scores on both datasets, which indicates that HEDGE is good at capturing important phrases. Comparing the results of HEDGE on different models, the cohesion-scores of BERT are significantly higher than LSTM and CNN. It indicates that BERT is more sensitive to perturbations on important phrases and tends to utilize context information for predictions. 4.3 Qualitative Analysis For qualitative analysis, we present two typical examples. In the first example, we compare HEDGE with ACD in interpreting the LSTM model. Figure 2 visualizes two hierarchical explanations, generated by HEDGE and ACD respectively, on a negative movie review from the SST dataset. In this case, LSTM makes a wrong prediction (POSITIVE). Figure 2(a) shows HEDGE correctly captures the sentiment polarities of bravura and emptiness, and the interaction between them as bravura exercise flips the polarity of in emptiness to positive. It explains why the model makes the wrong prediction. On the other hand, ACD incorrectly marks the two words with opposite polarities, and misses the feature interaction, as Figure 2(b) shows. In the second example, we compare HEDGE in interpreting two different models (LSTM and BERT). Figure 3 visualizes the explanations on a positive movie review. In this case, BERT gives the correct prediction (POSITIVE), while LSTM makes 5585 (a) HEDGE for LSTM on SST. (b) HEDGE for BERT on SST. Figure 3: Compare HEDGE in interpreting different models (LSTM and BERT) on a positive movie review from the SST dataset, where BERT makes the correct prediction (POSITIVE), while LSTM makes a wrong prediction (NEGATIVE). HEDGE explains that BERT captures the important phrase not a bad for making the correct prediction, while LSTM ignores it and is misled by the negative word bad. a wrong prediction (NEGATIVE). The comparison between Figure 3(a) and 3(b) shows the difference of feature interactions within the two models and explains how a correct/wrong prediction was made. Specifically, Figure 3(b) illustrates that BERT captures the key phrase not a bad at step 1, and thus makes the positive prediction, while LSTM (as shown in Figure 3(a)) misses the interaction between not and bad, and the negative word bad pushes the model making the NEGATIVE prediction. Both cases show that HEDGE is capable of explaining model prediction behaviors, which helps humans understand the decision-making. More examples are presented in Appendix C due to the page limitation. 4.4 Human Evaluation We had 9 human annotators from the Amazon Mechanical Turk (AMT) for human evaluation. The features (e.g., words or phrases) with the highest importance score given by HEDGE and other baselines are selected as the explanations. Note that HEDGE and ACD can potentially give very long top features which are not user-friendly in human evaluation, so we additionally limit the maximum length of selected features to five. We provided the input text with different explanations in the user interface (as shown in Appendix D) and asked human annotators to guess the model’s prediction (Nguyen, 2018) from {“Negative”, “Positive”, “N/A”} based on each explanation, where “N/A” was selected when annotators cannot guess the model’s prediction. We randomly picked 100 movie reviews from the IMDB dataset for human evaluation. There are two dimensions of human evaluation. We first compare HEDGE with other baselines using the predictions made by the same LSTM model. Second, we compare the explanations generated by HEDGE on three different models: LSTM, CNN, and BERT. We measure the number of human annotations that are coherent with the actual model predictions, and define the coherence score as the ratio between the coherent annotations and the total number of examples. 4.4.1 Results Table 4 shows the coherence scores of eight different interpretation methods for LSTM on the IMDB dataset. HEDGE outperforms other baselines with higher coherence score, which means that HEDGE can capture important features which are highly consistent with human interpretations. LIME is still a strong baseline in providing interpretable explanations, while ACD and Shapley-based methods perform worse. Table 5 shows both the accuracy and coherence scores of different models. HEDGE succeeds in interpreting black-box models with relatively high coherence scores. Moreover, although BERT can achieve higher prediction accuracy than the other two models, its coherence score is lower, manifesting a potential tradeoff between accuracy and interpretability of deep models. 5 Conclusion In this paper, we proposed an effective method, HEDGE, building model-agnostic hierarchical interpretations via detecting feature interactions. In 5586 Methods Coherence Score Leave-one-out 0.82 ACD 0.68 LIME 0.85 L-Shapley 0.75 C-Shapley 0.73 KernelSHAP 0.56 SampleShapley 0.78 HEDGE 0.89 Table 4: Human evaluation of different interpretation methods with LSTM model on the IMDB dataset. Models Accuracy Coherence scores LSTM 0.87 0.89 CNN 0.90 0.84 BERT 0.97 0.75 Table 5: Human evaluation of HEDGE with different models on the IMDB dataset. this work, we mainly focus on sentiment classification task. We test HEDGE with three different neural network models on two benchmark datasets, and compare it with several competitive baseline methods. The superiority of HEDGE is approved by both automatic and human evaluations. References David Alvarez-Melis and Tommi S Jaakkola. 2018. Towards robust interpretability with self-explaining neural networks. In NeurIPS. Kaylee Burns, Aida Nematzadeh, Erin Grant, Alison Gopnik, and Tom Griffiths. 2018. Exploiting attention to reveal shortcomings in memory models. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 378–380. Jianbo Chen and Michael I Jordan. 2019. Ls-tree: Model interpretation when the data are linguistic. arXiv preprint arXiv:1902.04187. Jianbo Chen, Le Song, Martin J Wainwright, and Michael I Jordan. 2018. L-shapley and c-shapley: Efficient model interpretation for structured data. arXiv preprint arXiv:1808.02610. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2016. Enhanced lstm for natural language inference. arXiv preprint arXiv:1609.06038. Anupam Datta, Shayak Sen, and Yair Zick. 2016. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In 2016 IEEE symposium on security and privacy (SP), pages 598–617. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Katsushige Fujimoto, Ivan Kojadinovic, and Jean-Luc Marichal. 2006. Axiomatic characterizations of probabilistic and cardinal-probabilistic interaction indices. Games and Economic Behavior, 55(1):72– 99. Reza Ghaeini, Xiaoli Z Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neural models: a case study on natural language inference. arXiv preprint arXiv:1808.03894. Fr´ederic Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, and Thomas Demeester. 2018. Explaining character-aware neural networks for word-level prediction: Do they discover linguistic rules? arXiv preprint arXiv:1808.09551. Michel Grabisch. 1997. K-order additive discrete fuzzy measures and their representation. Fuzzy sets and systems, 92(2):167–189. Yotam Hechtlinger. 2016. Interpretation of prediction models using the input gradient. arXiv preprint arXiv:1611.07634. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. 2016. Natural language object retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4555–4564. Alon Jacovi, Oren Sar Shalom, and Yoav Goldberg. 2018. Understanding convolutional neural networks for text classification. arXiv preprint arXiv:1809.08037. Xisen Jin, Junyi Du, Zhongyu Wei, Xiangyang Xue, and Xiang Ren. 2019. Towards hierarchical importance attribution: Explaining compositional semantics for neural sequence models. Jaap Jumelet and Dieuwke Hupkes. 2018. Do language models understand anything? on the ability of lstms to understand negative polarity items. arXiv preprint arXiv:1808.10627. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. 5587 Igor Kononenko et al. 2010. An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research, 11(Jan):1–18. Jaesong Lee, Joong-Hwi Shin, and Jun-Seok Kim. 2017. Interactive visualization and manipulation of attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 121–126. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220. Zachary C Lipton. 2016. The mythos of model interpretability. arXiv preprint arXiv:1606.03490. Scott M Lundberg, Gabriel G Erion, and Su-In Lee. 2018. Consistent individualized feature attribution for tree ensembles. arXiv preprint arXiv:1802.03888. Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4765–4774. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pages 142–150. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. W James Murdoch, Peter J Liu, and Bin Yu. 2018. Beyond word importance: Contextual decomposition to extract interactions from lstms. arXiv preprint arXiv:1801.05453. Dong Nguyen. 2018. Comparing automatic and human evaluation of local explanations for text classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1069–1078. Guillermo Owen. 1972. Multilinear extensions of games. Management Science, 18(5-part-2):64–79. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Gregory Plumb, Denali Molitor, and Ameet S Talwalkar. 2018. Model agnostic supervised local explanations. In Advances in Neural Information Processing Systems, pages 2515–2524. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. ACM. Wojciech Samek, Alexander Binder, Gr´egoire Montavon, Sebastian Lapuschkin, and Klaus-Robert M¨uller. 2016. Evaluating the visualization of what a deep neural network has learned. IEEE transactions on neural networks and learning systems, 28(11):2660–2673. Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? arXiv preprint arXiv:1906.03731. Lloyd S Shapley. 1953. A value for n-person games. Contributions to the Theory of Games, 2(28). Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3145–3153. JMLR. org. Chandan Singh, W. James Murdoch, and Bin Yu. 2019. Hierarchical interpretations for neural network predictions. In International Conference on Learning Representations. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642. Erik ˇStrumbelj and Igor Kononenko. 2014. Explaining prediction models and individual predictions with feature contributions. Knowledge and information systems, 41(3):647–665. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3319–3328. JMLR. org. Michael Tsang, Youbang Sun, Dongxu Ren, and Yan Liu. 2018. Can i trust you more? modelagnostic hierarchical explanations. arXiv preprint arXiv:1812.04801. 5588 A Comparison between Top-down and Bottom-up Approaches Given the sentence a waste of good performance for example, Figure 4 shows the hierarchical interpretations for the LSTM model using the bottom-up and top-down approaches respectively. Figure 4(a) shows that the interaction between waste and good can not be captured until the last (top) layer, while the important phrase waste of good can be extracted in the intermediate layer by top-down algorithm. We can see that waste flips the polarity of of good to negative, causing the model predicting negative as well. Top-down segmentation performs better than bottom-up in capturing feature interactions. The reason is that the bottom layer contains more features than the top layer, which incurs larger errors in calculating interaction scores. Even worse, the calculation error will propagate and accumulate during clustering. (a) Bottom-up clustering. (b) Top-down segmentation. Figure 4: Hierarchical interpretations for the LSTM model using the bottom-up and top-down approaches respectively. Red and blue colors represent the negative and positive sentiments respectively. B Results of AOPCs and log-odds changing with different k and r (a) AOPCs of LSTM on the SST dataset. (b) Log-odds of LSTM on the SST dataset. Figure 5: The AOPC and log-odds for LSTM on the SST dataset. 5589 (a) AOPCs of LSTM on the IMDB dataset. (b) Log-odds of LSTM on the IMDB dataset. Figure 6: The AOPC and log-odds for LSTM on the IMDB dataset. (a) AOPCs of CNN on the SST dataset. (b) Log-odds of CNN on the SST dataset. Figure 7: The AOPC and log-odds for CNN on the SST dataset. 5590 (a) AOPCs of CNN on the IMDB dataset. (b) Log-odds of CNN on the IMDB dataset. Figure 8: The AOPC and log-odds for CNN on the IMDB dataset. (a) AOPCs of BERT on the SST dataset. (b) Log-odds of BERT on the SST dataset. Figure 9: The AOPC and log-odds for BERT on the SST dataset. 5591 (a) AOPCs of BERT on the IMDB dataset. (b) Log-odds of BERT on the IMDB dataset. Figure 10: The AOPC and log-odds for BERT on the IMDB dataset. C Visualization of Hierarchical Interpretations Figure 11: HEDGE for BERT on a positive movie review from the SST dataset. BERT makes the correct prediction because it captures the interaction between never and fails. Figure 12: HEDGE for LSTM on a positive movie review from the SST dataset. LSTM makes the wrong prediction because it misses the interaction between never and fails. Figure 13: ACD for LSTM on a positive movie review from the SST dataset, on which LSTM makes wrong prediction. Figure 14: HEDGE for BERT on a positive movie review from the SST dataset, on which BERT makes correct prediction. 5592 Figure 15: HEDGE for LSTM on a positive movie review from the SST dataset, on which LSTM makes wrong prediction. Figure 16: ACD for LSTM on a positive movie review from the SST dataset, on which LSTM makes wrong prediction. D Human Evaluation Interface 5593 Figure 17: Interfaces of Amazon Mechanical Turk where annotators are asked to guess the model’s prediction based on different explanations.
2020
494
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5594–5608 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5594 Obtaining Faithful Interpretations from Compositional Neural Networks Sanjay Subramanian∗1 Ben Bogin∗2 Nitish Gupta∗3 Tomer Wolfson1,2 Sameer Singh4 Jonathan Berant1,2 Matt Gardner1 1Allen Institute for AI 2Tel-Aviv University 3University of Pennsylvania 4University of California, Irvine {sanjays,mattg}@allenai.org, {ben.bogin,joberant}@cs.tau.ac.il, [email protected], [email protected], [email protected] Abstract Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional structure of the problem in the network architecture. However, prior work implicitly assumed that the structure of the network modules, describing the abstract reasoning process, provides a faithful explanation of the model’s reasoning; that is, that all modules perform their intended behaviour. In this work, we propose and conduct a systematic evaluation of the intermediate outputs of NMNs on NLVR2 and DROP, two datasets which require composing multiple reasoning steps. We find that the intermediate outputs differ from the expected output, illustrating that the network structure does not provide a faithful explanation of model behaviour. To remedy that, we train the model with auxiliary supervision and propose particular choices for module architecture that yield much better faithfulness, at a minimal cost to accuracy. 1 Introduction Models that can read text and reason about it in a particular context (such as an image, a paragraph, or a table) have been recently gaining increased attention, leading to the creation of multiple datasets that require reasoning in both the visual and textual domain (Johnson et al., 2016; Suhr et al., 2017; Talmor and Berant, 2018; Yang et al., 2018a; Suhr et al., 2019; Hudson and Manning, 2019; Dua et al., 2019). Consider the example in Figure 1 from NLVR2: a model must understand the compositional sentence in order to then ground dogs in the input, count those that are black and verify that the count of all dogs in the image is equal to the number of black dogs. ∗Equal Contribution “All the dogs are black.” find[dogs] filter[black] 24% 96% Basic-NMN Faithful-NMN equal 100% 100% 76% 100% 13% 100% False (57%) count equal count False (98%) 2 0.9 count count 1.6 1.4 find[dogs] filter[black] Figure 1: An example for a visual reasoning problem where both the Basic and Faithful NMNs produce the correct answer. The Basic NMN, however, fails to give meaningful intermediate outputs for the find and filter modules, whereas our improved FaithfulNMN assigns correct probabilities in all cases. Boxes are green if probabilities are as expected, red otherwise. Both models that assume an intermediate structure (Andreas et al., 2016; Jiang and Bansal, 2019) and models without such structure (Tan and Bansal, 2019; Hu et al., 2019; Min et al., 2019) have been proposed for these reasoning problems. While good performance can be obtained without a structured representation, an advantage of structured approaches is that the reasoning process in such approaches is more interpretable. For example, a structured model can explicitly denote that there are two dogs in the image, but that one of them is not black. Such interpretability improves our scientific understanding, aids in model development, and improves overall trust in a model. 5595 In the first quarter, the Texans trailed early after QB Kerry Collins threw a 19-yard TD pass to WR Nate Washington. Second quarter started with kicker Neil Rackers made a 37-yard field goal, and the quarter closed with kicker Rob Bironas hitting a 30-yard field goal. The Texans tried to cut the lead with QB Matt Schaub getting a 8-yard TD pass to WR Andre Johnson, but the Titans would pull away with RB Javon Ringer throwing a 7-yard TD pass . The Texans tried to come back into the game in the fourth quarter, but only came away with Schaub throwing a 12-yard TD pass to WR Kevin Walter. relocate[who threw] find-max-num filter [the second half] find [touchdown pass] Who threw the longest touchdown pass in the second half? two dogs are touching a food dish with their face equal count with-relation [is touching] relocate [face] find [dog] find [food dish] number [two] Figure 2: An example for a mapping of an utterance to a gold program and a perfect execution in a reasoning problem from NLVR2 (top) and DROP (bottom). Neural module networks (NMNs; Andreas et al., 2016) parse an input utterance into an executable program composed of learnable modules that are designed to perform atomic reasoning tasks and can be composed to perform complex reasoning against an unstructured context. NMNs are appealing since their output is interpretable; they provide a logical meaning representation of the utterance and also the outputs of the intermediate steps (modules) to reach the final answer. However, because module parameters are typically learned from end-task supervision only, it is possible that the program will not be a faithful explanation of the behaviour of the model (Ross et al., 2017; Wiegreffe and Pinter, 2019), i.e., the model will solve the task by executing modules according to the program structure, but the modules will not perform the reasoning steps as intended. For example, in Figure 1, a basic NMN predicts the correct answer False, but incorrectly predicts the output of the find[dogs] operation. It does not correctly locate one of the dogs in the image because two of the reasoning steps (find and filter) are collapsed into one module (find). This behavior of the find module is not faithful to its intended reasoning operation; a human reading the program would expect find[dogs] to locate all dogs. Such unfaithful module behaviour yields an unfaithful explanation of the model behaviour. Unfaithful behaviour of modules, such as multiple reasoning steps collapsing into one, are undesirable in terms of interpretability; when a model fails to answer some question correctly, it is hard to tell which modules are the sources of error. While recent work (Yang et al., 2018b; Jiang and Bansal, 2019) has shown that one can obtain good performance when using NMNs, the accuracy of individual module outputs was mostly evaluated through qualitative analysis, rather than systematically evaluating the intermediate outputs of each module. We provide three primary contributions regarding faithfulness in NMNs. First, we propose the concept of module-wise faithfulness – a systematic evaluation of individual module performance in NMNs that judges whether they have learned their intended operations, and define metrics to quantify this for both visual and textual reasoning (§3). Empirically, we show on both NLVR2 (Suhr et al., 2019) and DROP (Dua et al., 2019) that training a NMN using end-task supervision, even using gold programs, does not yield modulewise faithfulness, i.e., the modules do not perform their intended reasoning task. Second, we provide strategies for improving module-wise faithfulness in NMNs (§4). Specifically, (a) we demonstrate how module architecture affects faithfulness (§4.1), (b) propose supervising module outputs with either a proxy task or heuristically generated data (§4.2), and (c) show that providing modules with uncontexualized token representations improves faithfulness (§4.3). Figure 1 shows an example where our approach (Faithful-NMN) results in expected module outputs as compared to the Basic-NMN. Last, we collect human-annotated intermediate outputs for 536 examples in NLVR2 and for 215 examples in DROP to measure the module-wise faithfulness of models, and publicly release them for future work. Our code and data are available at https://github.com/allenai/faithful-nmn. 5596 2 Neural Module Networks Overview Neural module networks (NMNs; Andreas et al., 2016) are a class of models that map a natural language utterance into an executable program, composed of learnable modules that can be executed against a given context (images, text, etc.), to produce the utterance’s denotation (truth value in NLVR2, or a text answer in DROP). Modules are designed to solve atomic reasoning tasks and can be composed to perform complex reasoning. For example, in Figure 1, the utterance “All the dogs are black” is mapped to the program equal(count(find[dogs]), count(filter[black](find[dogs]))). The find module is expected to find all dogs in the image and the filter module is expected to output only the black ones from its input. Figure 2 shows two other example programs with the expected output of each module in the program. A NMN has two main components: (1) parser, which maps the utterance into an executable program; and (2) executor, which executes the program against the context to produce the denotation. In our setup, programs are always trees where each tree node is a module. In this work, we focus on the executor, and specifically the faithfulness of module execution. We examine NMNs for both text and images, and describe their modules next. 2.1 Modules for visual reasoning In this task, given two images and a sentence that describes the images, the model should output True iff the sentence correctly describes the images. We base our model, the Visual-NMN, on LXMERT (Tan and Bansal, 2019), which takes as input the sentence x and raw pixels, uses Faster R-CNN (Ren et al., 2015) to propose a set of bounding boxes, B, that cover the objects in the image, and passes the tokens of x and the bounding boxes through a Transformer (Vaswani et al., 2017), encoding the interaction between both modalities. This produces a contextualized representation t ∈R|x|×h for each one of the tokens, and a representation v ∈R|B|×h for each one of the bounding boxes, for a given hidden dimension h. We provide a full list of modules and their implementation in Appendix A. Broadly, modules take as input representations of utterance tokens through an utterance attention mechanism (Hu et al., 2017), i.e., whenever the parser outputs a module, it also predicts a distribution over the utterance tokens (p1, . . . , p|x|), and the module takes as input P|x| i=1 piti, where ti is the hidden representation of token i. In addition, modules produce as output (and take as input) vectors p ∈[0, 1]|B|, indicating for each bounding box the probability that it should be output by the module (Mao et al., 2019). For example, in the program filter[black](find[dog]), the find module takes the word ‘dog’ (using utterance attention, which puts all probability mass on the word ‘dog’), and outputs a probability vector p ∈[0, 1]|B|, where ideally all bounding boxes corresponding to dogs have high probability. Then, the filter module takes p as input as well as the word ‘black’, and is meant to output high probabilities for bounding boxes with ‘black dogs’. For the Visual-NMN we do not use a parser, but rely on a collected set of gold programs (including gold utterance attention), as described in §5. We will see that despite this advantageous setup, a basic NMN does not produce interpretable outputs. 2.2 Modules for textual reasoning Our Text-NMN is used to answer questions in the DROP dataset and uses the modules as designed for DROP in prior work (Gupta et al., 2020) along with three new modules we define in this work. The modules introduced in Gupta et al. (2020) and used as is in our Text-NMN are find, filter, relocate, count, find-num, find-date, find-max-num, find-min-num, num-compare and date-compare. All these modules are probabilistic and produce, as output, a distribution over the relevant support. For example, find outputs a distribution over the passage tokens and find-num outputs a distribution over the numbers in the passage. We extend their model and introduce additional modules; addition and subtraction to add or subtract passage numbers, and extract-answer which directly predicts an answer span from the representations of passage tokens without any explicit compositional reasoning. We use BERT-base (Devlin et al., 2019) to encode the input question and passage. The Text-NMN does not have access to gold programs, and thus we implement a parser as an encoder-decoder model with attention similar to Krishnamurthy et al. (2017), which takes the utterance as input, and outputs a linearized abstract syntax tree of the predicted program. 5597 3 Module-wise Faithfulness Neural module networks (NMNs) facilitate interpretability of their predictions via the reasoning steps in the structured program and providing the outputs of those intermediate steps during execution. For example, in Figure 2, all reasoning steps taken by both the Visual-NMN and Text-NMN can be discerned from the program and the intermediate module outputs. However, because module parameters are learned from an end-task, there is no guarantee that the modules will learn to perform their intended reasoning operation. In such a scenario, when modules do not perform their intended reasoning, the program is no longer a faithful explanation of the model behavior since it is not possible to reliably predict the outputs of the intermediate reasoning steps given the program. Work on NMNs thus far (Yang et al., 2018b; Jiang and Bansal, 2019) has overlooked systematically evaluating faithfulness, performing only qualitative analysis of intermediate outputs. We introduce the concept of module-wise faithfulness aimed at evaluating whether each module has correctly learned its intended operation by judging the correctness of its outputs in a trained NMN. For example, in Figure 2 (top), a model would be judged module-wise faithful if the outputs of all the modules, find, relocate, and with relation, are correct – i.e. similar to the outputs that a human would expect. We provide gold programs when evaluating faithfulness, to not conflate faithfulness with parser accuracy. 3.1 Measuring faithfulness in Visual-NMN Modules in Visual-NMN provide for each bounding box a probability for whether it should be a module output. To evaluate intermediate outputs, we sampled examples from the development set, and annotated gold bounding boxes for each instance of find, filter, with-relation and relocate. The annotator draws the correct bounding-boxes for each module in the gold program, similar to the output in Figure 2 (top). A module of a faithful model should assign high probability to bounding-boxes that are aligned with the annotated bounding boxes and low probabilities to other boxes. Since the annotated bounding boxes do not align perfectly with the model’s bounding boxes, our evaluation must first induce an alignment. We consider two bounding boxes as “aligned” if the intersection-over-union (IOU) between them exceeds a pre-defined threshold T = 0.5. Note that it is possible for an annotated bounding box to be aligned with several proposed bounding boxes and vice versa. Next, we consider an annotated bounding box BA as “matched” w.r.t a module output if BA is aligned with a proposed bounding box BP , and BP is assigned by the module a probability > 0.5. Similarly, we consider a proposed bounding box BP as “matched” if BP is assigned by the module a probability > 0.5 and is aligned with some annotated bounding box BA. We compute precision and recall for each module type (e.g. find) in a particular example by considering all instances of the module in that example. We define precision as the ratio between the number of matched proposed bounding boxes and the number of proposed bounding boxes assigned a probability of more than 0.5. We define recall as the ratio between the number of matched annotated bounding boxes and the total number of annotated bounding boxes.1 F1 is the harmonic mean of precision and recall. Similarly, we compute an “overall” precision, recall, and F1 score for an example by considering all instances of all module types in that example. The final score is an average over all examples. Please see Appendix B.2 for further discussion on this averaging. 3.2 Measuring faithfulness in Text-NMN Each module in Text-NMN produces a distribution over passage tokens (§2.2) which is a soft distributed representation for the selected spans. To measure module-wise faithfulness in Text-NMN, we obtain annotations for the set of spans that should be output by each module in the gold program (as seen in Figure 2 (bottom)) Ideally, all modules (find, filter, etc.) should predict high probability for tokens that appear in the gold spans and zero probability for other tokens. To measure a module output’s correctness, we use a metric akin to cross-entropy loss to measure the deviation of the predicted module output patt from the gold spans S = [s1, . . . , sN]. Here each span si = (ti s, ti e) is annotated as the start and end tokens. Faithfulness of a module is measured by: I = −PN i=1 log Pti e j=tis pj att ! . Lower cross-entropy corresponds to better faithfulness of a module. 1The numerators of the precision and the recall are different. Please see Appendix B.1 for an explanation. 5598 4 Improving Faithfulness in NMNs Module-wise faithfulness is affected by various factors: the choice of modules and their implementation (§ 4.1), use of auxiliary supervision (§ 4.2), and the use of contextual utterance embeddings (§ 4.3). We discuss ways of improving faithfulness of NMNs across these dimensions. 4.1 Choice of modules Visual reasoning The count module always appears in NLVR2 as one of the top-level modules (see Figures 1 and 2).2 We now discuss how its architecture affects faithfulness. Consider the program, count(filter[black](find[dogs])). Its gold denotation (correct count value) would provide minimal feedback using which the descendant modules in the program tree, such as filter and find, need to learn their intended behavior. However, if count is implemented as an expressive neural network, it might learn to perform tasks designated for find and filter, hurting faithfulness. Thus, an architecture that allows counting, but also encourages descendant modules to learn their intended behaviour through backpropagation, is desirable. We discuss three possible count architectures, which take as input the bounding box probability vector p ∈[0, 1]|B| and the visual features v ∈R|B|×h. Layer-count module is motivated by the count architecture of Hu et al. (2017), which uses a linear projection from image attention, followed by a softmax. This architecture explicitly uses the visual features, v, giving it greater expressivity compared to simpler methods. First we compute p · v, the weighted sum of the visual representations, based on their probabilities, and then output a scalar count using: FF1(LayerNorm(FF2(p·v)), where FF1 and FF2 are feed-forward networks, and the activation function of FF1 is ReLU in order to output positive numbers only. As discussed, since this implementation has access to the visual features of the bounding boxes, it can learn to perform certain tasks itself, without providing proper feedback to descendant modules. We show in §5 this indeed hurts faithfulness. Sum-count module on the other extreme, ignores v, and simply computes the sum P|B| i=1 pi. Be2Top-level modules are Boolean quantifiers, such as number comparisons like equal (which require count) or exist. We implement exist using a call to count and greater-equal (see Appendix A), so count always occurs in the program. ing parameter-less, this architecture provides direct feedback to descendant modules on how to change their output to produce better probabilities. However, such a simple functional-form ignores the fact that bounding boxes are overlapping, which might lead to over-counting objects. In addition, we would want count to ignore boxes with low probability. For example, if filter predicts a 5% probability for 20 different bounding boxes, we would not want the output of count to be 1.0. Graph-count module (Zhang et al., 2018) is a middle ground between both approaches - the na¨ıve Sum-Count and the flexible Layer-Count. Like Sum-Count, it does not use visual features, but learns to ignore overlapping and low-confidence bounding boxes while introducing only a minimal number of parameters (less than 300). It does so by treating each bounding box as a node in a graph, and then learning to prune edges and cluster nodes based on the amount of overlap between their bounding boxes (see paper for further details). Because this is a light-weight implementation that does not access visual features, proper feedback from the module can propagate to its descendants, encouraging them to produce better predictions. Textual reasoning In the context of Text-NMN (on DROP), we study the effect of several modules on interpretability. First, we introduce an extract-answer module. This module bypasses all compositional reasoning and directly predicts an answer from the input contextualized representations. This has potential to improve performance, in cases where a question describes reasoning that cannot be captured by pre-defined modules, in which case the program can be the extract-answer module only. However, introducing extract-answer adversely affects interpretability and learning of other modules, specifically in the absence of gold programs. First, extract-answer does not provide any interpretability. Second, whenever the parser predicts the extract-answer module, the parameters of the more interpretable modules are not trained. Moreover, the parameters of the encoder are trained to perform reasoning internally in a noninterpretable manner. We study the interpretability vs. performance trade-off by training Text-NMN with and without extract-answer. Second, consider the program find-max-num(find[touchdown]) that aims to find the longest touchdown. find-max-num 5599 should sort spans by their value and return the maximal one; if we remove find-max-num, the program would reduce to find[touchdown], and the find module would have to select the longest touchdown rather than all touchdowns, following the true denotation. More generally, omitting atomic reasoning modules pushes other modules to compensate and perform complex tasks that were not intended for them, hurting faithfulness. To study this, we train Text-NMN by removing sorting and comparison modules (e.g., find-max-num and num-compare), and evaluate how this affects module-wise interpretability. 4.2 Supervising module output As explained, given end-task supervision only, modules may not act as intended, since their parameters are only trained for minimizing the end-task loss. Thus, a straightforward way to improve interpretability is to train modules with additional atomic-task supervision. Visual reasoning For Visual-NMN, we pre-train find and filter modules with explicit intermediate supervision, obtained from the GQA balanced dataset (Hudson and Manning, 2019). Note that this supervision is used only during pre-training – we do not assume we have full-supervision for the actual task at hand. GQA questions are annotated by gold programs; we focus on “exist” questions that use find and filter modules only, such as “Are there any red cars?”. Given gold annotations from Visual Genome (Krishna et al., 2017), we can compute a label for each of the bounding boxes proposed by Faster-RCNN. We label a proposed bounding box as ‘positive’ if its IOU with a gold bounding box is > 0.75, and ‘negative’ if it is < 0.25. We then train on GQA examples, minimizing both the usual denotation loss, as well as an auxiliary loss for each instance of find and filter, which is binary cross entropy for the labeled boxes. This loss rewards high probabilities for ‘positive’ bounding boxes and low probabilities for ‘negative’ ones. Textual reasoning Prior work (Gupta et al., 2020) proposed heuristic methods to extract supervision for the find-num and find-date modules in DROP. On top of the end-to-end objective, they use an auxiliary objective that encourages these modules to output the “gold” numbers and dates according to the heuristic supervision. They show that supervising intermediate module outputs helps improve model performance. In this work, we evaluate the effect of such supervision on the faithfulness of both the supervised modules, as well as other modules that are trained jointly. 4.3 Decontextualized word representations The goal of decomposing reasoning into multiples steps, each focusing on different parts of the utterance, is at odds with the widespread use of contextualized representations such as BERT or LXMERT. While the utterance attention is meant to capture information only from tokens relevant for the module’s reasoning, contextualized token representations carry global information. For example, consider the program filter[red](find[car]) for the phrase red car. Even if find attends only to the token car, its representation might also express the attribute red, so find might learn to find just red cars, rather than all cars, rendering the filter module useless, and harming faithfulness. To avoid such contextualization in Visual-NMN, we zero out the representations of tokens that are unattended, thus the input to the module is computed (with LXMERT) from the remaining tokens only. 5 Experiments We first introduce the datasets used and the experimental setup for measuring faithfulness (§ 5.1). We demonstrate that training NMNs using end-task supervision only does not yield module-wise faithfulness both for visual and textual reasoning. We then show that the methods from §4 are crucial for achieving faithfulness and how different design choices affect it (§ 5.2). Finally, we qualitatively show examples of improved faithfulness and analyze possible reasons for errors (§ 5.3). 5.1 Experimental setup Please see Appendix C for further detail about the experimental setups. Visual reasoning We automatically generate gold program annotations for 26, 311 training set examples and for 5, 772 development set examples from NLVR2. The input to this generation process is the set of crowdsourced question decompositions from the BREAK dataset (Wolfson et al., 2020). See Appendix C.1 for details. For modulewise faithfulness evaluation, 536 examples from the development set were annotated with the gold output for each module by experts. 5600 Model Performance (Accuracy) Overall Faithful. (↑) Module-wise Faithfulness F1(↑) Prec. Rec. F1 find filter with-relation relocate LXMERT 71.7 Upper Bound 1 0.84 0.89 0.89 0.92 0.95 0.75 NMN w/ Layer-count 71.2 0.39 0.39 0.11 0.12 0.20 0.37 0.27 NMN w/ Sum-count 68.4 0.49 0.31 0.28 0.31 0.32 0.44 0.26 NMN w/ Graph-count 69.6 0.37 0.39 0.28 0.31 0.29 0.37 0.19 NMN w/ Graph-count + decont. 67.3 0.29 0.51 0.33 0.38 0.30 0.36 0.13 NMN w/ Graph-count + pretraining 69.6 0.44 0.49 0.36 0.39 0.34 0.42 0.21 NMN w/ Graph-count + decont. + pretraining 68.7 0.42 0.66 0.47 0.52 0.41 0.47 0.21 Table 1: Faithfulness and accuracy on NLVR2. “decont.” refers to decontextualized word representations. Precision, recall, and F1 are averages across examples, and thus F1 is not the harmonic mean of the corresponding precision and recall. Model Performance (F1 Score) Overall Faithful. (cross-entropy∗↓) Module-wise Faithfulness∗(↓) find filter relocate min-max† find-arg† Text-NMN w/o prog-sup w/ extract-answer 63.5 9.5 13.3 9.5 3.5 2.6 9.9 w/o extract-answer 60.8 6.9 8.1 7.3 1.3 1.7 8.5 Text-NMN w/ prog-sup no auxiliary sup 65.3 11.2 13.7 16.9 1.5 2.2 13.0 w/o sorting & comparison 63.8 8.4 9.6 11.1 1.6 1.3 10.6 w/ module-output-sup 65.7 6.5 7.6 10.7 1.3 1.2 7.6 Table 2: Faithfulness and performance scores for various NMNs on DROP. ∗lower is better. †min-max is average faithfulness of find-min-num and find-max-num; find-arg of find-num and find-date. Textual reasoning We train Text-NMN on DROP, which is augmented with program supervision for 4, 000 training questions collected heuristically as described in Gupta et al. (2020). The model is evaluated on the complete development set of DROP which does not contain any program supervision. Module-wise faithfulness is measured on 215 manually-labeled questions from the development set, which are annotated with gold programs and module outputs (passage spans). 5.2 Faithfulness evaluation Visual reasoning Results are seen in Table 1. Accuracy for LXMERT, when trained and evaluated on the same subset of data, is 71.7%; slightly higher than NMNs, but without providing evidence for the compositional structure of the problem. For faithfulness, we measure an upper-bound on the faithfulness score. Recall that this score measures the similarity between module outputs and annotated outputs. Since module outputs are constrained by the bounding boxes proposed by Faster-RCNN (§2.1), while annotated boxes are not, perfect faithfulness could only be achieved by a model if there are suitable bounding boxes. Upper Bound shows the maximal faithfulness score conditioned on the proposed bounding boxes. We now compare the performance and faithfulness scores of the different components. When training our NMN with the most flexible count module, (NMN w/ Layer-count), an accuracy of 71.2% is achieved, a slight drop compared to LXMERT but with low faithfulness scores. Using Sum-count drops about 3% of performance, but increases faithfulness. Using Graph-count increases accuracy while faithfulness remains similar. Next, we analyze the effect of decontextualized word representations (abbreviated “decont.”) and pre-training. First, we observe that NMN w/ Graphcount + decont. increases faithfulness score to 0.33 F1 at the expense of accuracy, which drops to 67.3%. Pre-training (NMN w/ Graph-count + pretraining) achieves higher faithfulness scores with a higher accuracy of 69.6%. Combining the two achieves the best faithfulness (0.47 F1) with a minimal accuracy drop. We perform a paired permutation test to compare NMN w/ Graph-count + decont. + pretraining with NMN w/ Layer-count and find that the difference in F1 is statistically significant (p < 0.001). Please see Appendix D.1 for further details. 5601 Textual reasoning As seen in Table 2, when trained on DROP using question-program supervision, the model achieves 65.3 F1 performance and a faithfulness score of 11.2. When adding supervision for intermediate modules (§4.2), we find that the module-wise faithfulness score improves to 6.5. Similar to Visual-NMN, this shows that supervising intermediate modules in a program leads to better faithfulness. To analyze how choice of modules affects faithfulness, we train without sorting and comparison modules (find-max-num, num-compare, etc.). We find that while performance drops slightly, faithfulness deteriorates significantly to 8.4, showing that modules that perform atomic reasoning are crucial for faithfulness. When trained without program supervision, removing extract-answer improves faithfulness (9.5 →6.9) but at the cost of performance (63.5 →60.8 F1). This shows that such a black-box module encourages reasoning in an opaque manner, but can improve performance by overcoming the limitations of pre-defined modules. All improvements in faithfulness are significant as measured using paired permutation tests (p < 0.001). Generalization A natural question is whether models that are more faithful also generalize better. We conducted a few experiments to see whether this is true for our models. For NLVR2, we performed (1) an experiment in which programs in training have length at most 7, and programs at test time have length greater than 7, (2) an experiment in which programs in training have at most 1 filter module and programs at test time have at least 2 filter modules, and (3) an experiment in which programs in training do not have both filter and with-relation modules in the same program, while each program in test has both modules. We compared three of our models – NMN w/ Layer-count, NMN w/ Sum-count, and NMN w/ Graph-count + decont. + pretraining. We did not observe that faithful models generalize better (in fact, the most unfaithful model tended to achieve the best generalization). To measure if faithful model behavior leads to better generalization in Text-NMN we conducted the following experiment. We selected the subset of data for which we have gold programs and split the data such that questions that require maximum and greater-than operations are present in the training data while questions that require computing minimum and less-than are in the test data. We train and test our model by providing goldprograms under two conditions, in the presence and absence of additional module supervision. We find that providing auxiliary module supervision (that leads to better module faithfulness; see above) also greatly helps in model generalization (performance increases from 32.3 F1 →78.3 F1). 5.3 Qualitative analysis Model comparisons We analyze outputs of different modules in Figure 3. Figures 3a, 3b show the output of find[llamas] when trained with contextualized and decontextualized word representations. With contextualized representations (3a), the find fails to select any of the llamas, presumably because it can observe the word eating, thus effectively searching for eating llamas, which are not in the image. Conversely, the decontextualized model correctly selects the boxes. Figure 3c shows that find outputs meaningless probabilities for most of the bounding boxes when trained with Layer-count, yet the count module produces the correct value (three). Figure 3d shows that find fails to predict all relevant spans when trained without sorting modules in Text-NMN. Error analysis We analyze cases where outputs were unfaithful. First, for visual reasoning, we notice that faithfulness scores are lower for long-tail objects. For example, for dogs, a frequent noun in NLVR2, the execution of find[dogs] yields an average faithfulness score of 0.71, while items such as roll of toilet paper, barbell and safety pin receive lower scores (0.22, 0.29 and 0.05 respectively; example for a failure case for safety pin in Fig. 3e). In addition, some objects are harder to annotate with a box (water, grass, ground) and therefore receive low scores. The issue of small objects can also explain the low scores of relocate. In the gold box annotations used for evaluation, the average areas for find, filter, with-relation, and relocate (as a fraction of the total image area) are 0.19, 0.19, 0.15, and 0.07, respectively. Evidently, relocate is executed with small objects that are harder to annotate (tongue, spots, top of), and indeed the upper-bound and model scores for relocate are lowest among the module types. 6 Related Work NMNs were originally introduced for visual question answering and applied to datasets with syn5602 utt: “the llamas in both images are eating” 100% 91% 8% 6% (a) (b) find[llamas] (c) find[people] 60% 60% ... utt: “there are three people” (e) 91% 90% <1% find[safety pin]utt:“at least one safety pin is not embellished.” 35% 34% ... count 3 The Redskins obtained an early lead when RB Clinton Portis scored on a 3-yard TD run. St. Louis scored again when free safety Oshiomogho Atogwe scored a 75 yards touchdown. Washington regained the lead with ….. and a Clinton Portis 2-yard rushing TD. St. Louis would come back with a 49-yard field goal. find[touchdown run] (d) Figure 3: Comparison of module outputs between NMN versions: (a) Visual-NMN with contextualized representations, (b) Visual-NMN with decontextualized representations, (c) model using a parameter-rich count layer (Layer-Count), (d) Text-NMN trained without sorting module produces an incorrect find output (misses 2-yard rushing TD), and (e) Visual-NMN failure case with a rare object (of w/ Graph-count + decont. + pretraining) thetic language and images as well as VQA (Antol et al., 2015), whose questions require few reasoning steps (Andreas et al., 2016; Hu et al., 2017; Yang et al., 2018b). In such prior work, modulewise faithfulness was mostly assessed via qualitative analysis of a few examples (Jiang and Bansal, 2019; Gupta et al., 2020). Yang et al. (2018b) did an evaluation where humans rated the clarity of the reasoning process and also tested whether humans could detect model failures based on module outputs. In contrast, we quantitatively measure each module’s predicted output against the annotated gold outputs. A related systematic evaluation of interpretability in VQA was conducted by Trott et al. (2018). They evaluated the interpretability of their VQA counting model, where the interpretability score is given by the semantic similarity between the gold label for a bounding box and the relevant word(s) in the question. However, they studied only counting questions, which were also far less compositional than those in NLVR2 and DROP. Similar to the gold module output annotations that we provide and evaluate against, HOTPOTQA (Yang et al., 2018a) and COQA (Reddy et al., 2019) datasets include supporting facts or rationales for the answers to their questions, which can be used for both supervision and evaluation. In concurrent work, Jacovi and Goldberg (2020) recommend studying faithfulness on a scale rather than as a binary concept. Our evaluation method can be viewed as one example of this approach. 7 Conclusion We introduce the concept of module-wise faithfulness, a systematic evaluation of faithfulness in neural module networks (NMNs) for visual and textual reasoning. We show that na¨ıve training of NMNs does not produce faithful modules and propose several techniques to improve module-wise faithfulness in NMNs. We show how our approach leads to much higher module-wise faithfulness at a low cost to performance. We encourage future work to judge model interpretability using the proposed evaluation and publicly published annotations, and explore techniques for improving faithfulness and interpretability in compositional models. Acknowledgements We thank members of UCI NLP, TAU NLP, and the AllenNLP teams as well as Daniel Khashabi for comments on earlier drafts of this paper. We also thank the anonymous reviewers for their comments. This research was partially supported by The Yandex Initiative for Machine Learning, the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800), funding by the ONR under Contract No. N00014-19-12620, and by sponsorship from the LwLL DARPA program under Contract No. FA8750-19-2-0201. This work was completed in partial fulfillment for the Ph.D degree of Ben Bogin. 5603 References Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1545– 1554, San Diego, California. Association for Computational Linguistics. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378. Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, and Matt Gardner. 2020. Neural Module Networks for Reasoning over Text. In International Conference on Learning Representations (ICLR). Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415. Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. 2019. A multi-type multi-span network for reading comprehension that requires discrete reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1596–1606, Hong Kong, China. Association for Computational Linguistics. Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 804–813. Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6700–6709. Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness? In Proceedings of the 2020 Conference of the Association for Computational Linguistics. Yichen Jiang and Mohit Bansal. 2019. Self-assembling modular networks for interpretable multi-hop reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4473–4483, Hong Kong, China. Association for Computational Linguistics. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. 2016. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988– 1997. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526, Copenhagen, Denmark. Association for Computational Linguistics. Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. Tenenbaum, and Jiajun Wu. 2019. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In International Conference on Learning Representations. Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4249–4257, Florence, Italy. Association for Computational Linguistics. E.W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. Wiley. Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249–266. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the 28th International Conference on 5604 Neural Information Processing Systems - Volume 1, NIPS’15, pages 91–99, Cambridge, MA, USA. MIT Press. Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. In IJCAI. Howard Seltman. 2018. Approximations for mean and variance of a ratio. Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of natural language for visual reasoning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 217–223, Vancouver, Canada. Association for Computational Linguistics. Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6418–6428, Florence, Italy. Association for Computational Linguistics. Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641–651, New Orleans, Louisiana. Association for Computational Linguistics. Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5099–5110, Hong Kong, China. Association for Computational Linguistics. Alexander Trott, Caiming Xiong, and Richard Socher. 2018. Interpretable counting for visual question answering. In International Conference on Learning Representations. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Dan Ventura. 2007. CS478 Paired Permutation Test Overview. Accessed April 29, 2020. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 11–20, Hong Kong, China. Association for Computational Linguistics. Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gardner, Yoav Goldberg, Daniel Deutch, and Jonathan Berant. 2020. Break it down: A question understanding benchmark. Transactions of the Association for Computational Linguistics, 8:183–198. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018a. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018b. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th conference on Computational linguistics-Volume 2, pages 947–953. Association for Computational Linguistics. Yan Zhang, Jonathon Hare, and Adam Prgel-Bennett. 2018. Learning to count objects in natural images for visual question answering. In International Conference on Learning Representations. 5605 A Modules We list all modules for Visual-NMN in Table 3. For Text-NMN, as mentioned, we use all modules are described in Gupta et al. (2020). In this work, we introduce the (a) addition and subtraction modules that take as input two distributions over numbers mentioned in the passage and produce a distribution over all posssible addition and subtraction values possible. The output distribution here is the expected distribution for the random variable Z = X + Y (for addition), and (b) extract-answer that produces two distributions over the passage tokens denoting the probabilities for the start and end of the answer span. This distribution is computed by mapping the passage token representations using a simple MLP and softmax operation. B Measuring Faithfulness in Visual-NMN B.1 Numerators of Precision and Recall As stated in Section 3.1, for a given module type and a given example, precision is defined as the number of matched proposed bounding boxes divided by the number of proposed bounding boxes to which the module assigns a probability more than 0.5. Recall is defined as the number of matched annotated bounding boxes divided by the number of annotated bounding boxes. Therefore, the numerators of the precision and the recall need not be equal. In short, the reason for the discrepancy is that there is no one-to-one alignment between annotated and proposed bounding boxes. To further illustrate why we chose not to have a common numerator, we will consider two sensible choices for this shared numerator and explain the issues with them. One choice for the common numerator is the number of matched proposed bounding boxes. If we were to keep the denominator of the recall the same, then the recall would be defined as the number of matched proposed bounding boxes divided by the number of annotated bounding boxes. Consider an example in which there is a single annotated bounding box that is aligned with five proposed bounding boxes. When this definition of recall is applied to this example, the numerator would exceed the denominator. Another choice would be to set the denominator to be the number of proposed bounding boxes that are aligned with some annotated bounding box. In the example, this approach would penalize a module that gives high probability to only one of the five aligned proposed bounding boxes. However, it is not clear that a module giving high probability to all five proposed boxes is more faithful than a module giving high probability to only one bounding box (e.g. perhaps one proposed box has a much higher IOU with the annotated box than the other proposed boxes). Hence, this choice for the numerator does not make sense. Another choice for the common numerator is the number of matched annotated bounding boxes. If we were to keep the denominator of the precision the same, then the precision would be defined as the number of matched annotated bounding boxes divided by the number of proposed bounding boxes to which the module assigns probability more than 0.5. Note that since a single proposed bounding box can align with multiple annotated bounding boxes, it is possible for the numerator to exceed the denominator. Thus, these two choices for a common numerator have issues, and we avoid these issues by defining the numerators of precision and recall separately. B.2 Averaging Faithfulness Scores The method described in Section 3.1 computes a precision, recall, and F1 score for each example for every module type occurring in that example. The faithfulness scores reported in Table 1 are averages across examples. We also considered two other ways of aggregating scores across examples: 1. Cumulative P/R/F1: For each module type, we compute a single cumulative precision and recall across all examples. We then compute the dataset-wide F1 score as the harmonic mean of the precision and the recall. The results using this method are in Table 4. There are some differences between these results and those in Table 1, e.g. in these results, NMN w/ Graph-count + decont. + pretraining has the highest faithfulness score for every module type, including relocate. 2. Average over module occurrences: For each module type, for each occurrence of the module we compute a precision and recall and compute F1 as the harmonic mean of precision and recall. Then for each module type, we compute the overall precision as the average precision across module occurrences and 5606 similarly compute the overall recall and F1. Note that a module can occur multiple times in a single program and that each image is considered a separate occurrence. The results using this method are in Table 5. Again, there are some differences between these results and those in Table 1, e.g. NMN w/ Sum-count has a slightly higher score for with-relation than NMN w/ Graph-count + decont. + pretraining. With both of these alternative score aggregation methods, we still obtained p < 0.001 in our significance tests. We also noticed qualitatively that the metric can penalize modules that assign high probability to proposed bounding boxes that have a relatively high IOU that does not quite pass the IOU threshold of 0.5. In such cases, while it may not make sense to give the model credit in its recall score, it also may not make sense to penalize the model in its precision score. Consequently, we also performed an evaluation in which for the precision calculation we set a separate “negative” IOU threshold of 10−8 (effectively 0) and only penalized modules for high probabilities assigned to proposed boxes whose IOU is below this threshold. The results computed with example-wise averaging are provided in Table 6. C Details about Experiments Visual Reasoning We use the published pretrained weights and the same training configuration of LXMERT (Tan and Bansal, 2019), with 36 bounding boxes proposed per image. Due to memory constraints, we restrict training data to examples having a gold program with at most 13 modules. C.1 Program Annotations We generated program annotations for NLVR2 by automatically canonicalizing its question decompositions in the BREAK dataset (Wolfson et al., 2020). Decompositions were originally annotated by Amazon Mechanical Turk workers. For each utterance, the workers were asked to produce the correct decomposition and an utterance attention for each operator (module), whenever relevant. Limitations of Program Annotations Though our annotations for gold programs in NLVR2 are largely correct, we find that there are some examples for which the programs are unnecessarily Figure 4: An example of a gold program for NLVR2 that is unnecessarily complicated. complicated. For instance, for the sentence “the right image contains a brown dog with its tongue extended.” the gold program is shown in Figure 4. This program could be simplified by replacing the with-relation with the second argument of with-relation. Programs like this make learning more difficult for the NMNs since they use modules (in this case, with-relation) in degenerate ways. There are also several sentences that are beyond the scope of our language, e.g. comparisons such as “the right image shows exactly two virtually identical trifle desserts.” D Significance tests D.1 Visual Reasoning We perform a paired permutation test to test the hypothesis H0: NMN w/ Graph-count + decont. + pretraining has the same inherent faithfulness as NMN w/ Layer-count. We follow the procedure described by Ventura (2007), which is similar to tests described by Yeh (2000) and Noreen (1989). Specifically, we perform Ntotal = 100, 000 trials in which we do the following. For every example, with probability 1/2 we swap the F1 scores obtained by the two models for that example. Then we check whether the difference in the aggregated F1 scores for the two models is at least as extreme as the original difference in the aggregated F1 scores of the two models. The p-value is given by Nexceed/Ntotal, where Nexceed is the number of trials in which the new difference is at least as extreme as the original difference. 5607 Module Output Implementation find[qatt] p W T 1 ([x; v]) + b1 filter[qatt](p) p p ⊙(W T 1 ([x; v]) + b1) with-relation[qatt](p1, p2) p max(p2)p1 ⊙MLP([x; v1; v2]) project[qatt](p) p max(p)find(qatt) ⊙MLP([W2; v1; v2]) count(p) N number P(p), σ2 exist(p) B greater-equal(p, 1) greater-equal (a : N, b : N) B greater(a, b) + equal(a, b) less-equal (a : N, b : N) B less(a, b) + equal(a, b) equal(a : N, b : N) B PK k=0 Pr[a = k] Pr[b = k] less(a : N, b : N) B PK k=0 Pr[a = k] Pr[b > k] greater(a : N, b : N) B PK k=0 Pr[a = k] Pr[b < k] and(a : B, b : B) B a*b or(a : B, b : B) B a+b-a*b number(m : F, v : F) N Normal(mean = m, var = v) sum(a : N, b : N) N number (amean + bmean, avar + bvar) difference(a : N, b : N) N number (amean −bmean, avar + bvar) division(a : N, b : N) N number  amean bmean + bvaramean b3mean , a2 mean b2mean  avar a2mean + bvar b2mean  intersect(p1, p2) p p1 · p2 discard(p1, p2) p max(p1 −p2, 0) in-left-image(p) p p s.t. probabilities for right image are 0 in-right-image(p) p p s.t. probabilities for left image are 0 in-at-least-one-image B macro (see caption) in-each-image B macro (see caption) in-one-other-image B macro (see caption) Table 3: Implementations of modules for NLVR2 NMN. First five contain parameters, the rest are deterministic. The implementation of count shown here is the Sum-count version; please see Section 4 for a description of other count module varieties and a discussion of their differences. ‘B’ denotes the Boolean type, which is a probability value ([0..1]). ‘N’ denotes the Number type which is a probability distribution. K = 72 is the maximum count value supported by our model. To obtain probabilities, we first convert each Normal random variable X to a categorical distribution over {0, 1, ..., K} by setting Pr[X = k] = Φ(k+0.5)−Φ(k−0.5) if k ∈{1, 2, ..., K −1}. We set Pr[X = 0] = Φ(0.5) and Pr[X = K] = 1 −Φ(K −0.5). Here Φ(·) denotes the cumulative distribution function of the Normal distribution. W1, W2 are weight vectors with shapes 2h × 1 and h × 1, respectively. Here h = 768 is the size of LXMERT’s representations. b1 is a scalar weight. MLP denotes a two-layer neural network with a GeLU activation (Hendrycks and Gimpel, 2016) between layers. x denotes a question representation, and vi denotes encodings of objects in the image. x and vi have shape h × |B|, where |B| is the number of proposals. p denotes a vector of probabilities for each proposal and has shape 1 × |B|. ⊙and [; ] represent elementwise multiplication and matrix concatenation, respectively. The expressions for the mean and variance in the division module are based on the approximations in Seltman (2018). The macros execute a given program on the two input images. in-at-least-one-image macro returns true iff the program returns true when executed on at least one of the images. in-each-image returns true iff the program returns true when executed on both of the images. in-one-other-image takes two programs and returns true iff one program return true on left image and second program returns true on right image, or vice-versa. 5608 Model Performance (Accuracy) Overall Faithful.(↑) Module-wise Faithfulness(↑) Prec. Rec. F1 find filter with-relation relocate LXMERT 71.7 Upper Bound 1 0.63 0.77 0.78 0.79 0.73 0.71 NMN w/ Layer-count 71.2 0.069 0.29 0.11 0.13 0.09 0.07 0.05 NMN w/ Sum-count 68.4 0.25 0.18 0.21 0.23 0.20 0.16 0.05 NMN w/ Graph-count 69.6 0.20 0.22 0.21 0.24 0.19 0.17 0.04 NMN w/ Graph-count + decont. 67.3 0.21 0.29 0.24 0.28 0.22 0.19 0.04 NMN w/ Graph-count + pretraining 69.6 0.28 0.31 0.30 0.34 0.27 0.25 0.09 NMN w/ Graph-count + decont. + pretraining 68.7 0.34 0.43 0.38 0.43 0.34 0.29 0.11 Table 4: Faithfulness scores on NLVR2 using the cumulative precision/recall/F1 evaluation. Model Performance (Accuracy) Overall Faithful.(↑) Module-wise Faithfulness(↑) Prec. Rec. F1 find filter with-relation relocate LXMERT 71.7 Upper Bound 1 0.91 0.92 0.90 0.95 0.96 0.82 NMN w/ Layer-count 71.2 0.67 0.64 0.39 0.21 0.50 0.61 0.50 NMN w/ Sum-count 68.4 0.70 0.59 0.48 0.38 0.53 0.63 0.49 NMN w/ Graph-count 69.6 0.55 0.64 0.43 0.36 0.47 0.54 0.41 NMN w/ Graph-count + decont. 67.3 0.47 0.70 0.45 0.42 0.47 0.55 0.33 NMN w/ Graph-count + pretraining 69.6 0.58 0.70 0.47 0.42 0.49 0.58 0.41 NMN w/ Graph-count + decont. + pretraining 68.7 0.58 0.79 0.55 0.54 0.55 0.62 0.43 Table 5: Faithfulness scores on NLVR2 using the average over module occurrences evaluation. Model Performance (Accuracy) Overall Faithful.(↑) Module-wise Faithfulness(↑) Prec. Rec. F1 find filter with-relation relocate LXMERT 71.7 Upper Bound 1 0.8377 0.89 0.89 0.92 0.95 0.75 NMN w/ Layer-count 71.2 0.59 0.39 0.25 0.31 0.28 0.45 0.30 NMN w/ Sum-count 68.4 0.79 0.31 0.34 0.38 0.36 0.48 0.28 NMN w/ Graph-count 69.6 0.68 0.39 0.38 0.43 0.36 0.44 0.22 NMN w/ Graph-count + decont. 67.3 0.62 0.51 0.47 0.53 0.39 0.43 0.16 NMN w/ Graph-count + pretraining 69.6 0.70 0.49 0.47 0.52 0.41 0.51 0.27 NMN w/ Graph-count + decont. + pretraining 68.7 0.71 0.66 0.62 0.68 0.50 0.55 0.31 Table 6: Faithfulness scores on NLVR2 using a negative IOU threshold of 10−8 and example-wise averaging.
2020
495
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5609–5626 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5609 Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport Kyle Swanson* ASAPP, Inc. New York, USA [email protected] Lili Yu* ASAPP, Inc. New York, USA [email protected] Tao Lei ASAPP, Inc. New York, USA [email protected] Abstract Selecting input features of top relevance has become a popular method for building selfexplaining models. In this work, we extend this selective rationalization approach to text matching, where the goal is to jointly select and align text pieces, such as tokens or sentences, as a justification for the downstream prediction. Our approach employs optimal transport (OT) to find a minimal cost alignment between the inputs. However, directly applying OT often produces dense and therefore uninterpretable alignments. To overcome this limitation, we introduce novel constrained variants of the OT problem that result in highly sparse alignments with controllable sparsity. Our model is end-to-end differentiable using the Sinkhorn algorithm for OT and can be trained without any alignment annotations. We evaluate our model on the StackExchange, MultiNews, e-SNLI, and MultiRC datasets. Our model achieves very sparse rationale selections with high fidelity while preserving prediction accuracy compared to strong attention baseline models.† 1 Introduction The growing complexity of deep neural networks has given rise to the desire for self-explaining models (Li et al., 2016; Ribeiro et al., 2016; Zhang et al., 2016; Ross et al., 2017; Sundararajan et al., 2017; Alvarez-Melis and Jaakkola, 2018b; Chen et al., 2018a). In text classification, for instance, one popular method is to design models that can perform classification using only a rationale, which is a subset of the text selected from the model input that fully explains the model’s prediction (Lei et al., 2016; Bastings et al., 2019; Chang et al., 2019). This selective rationalization method, often *Denotes equal contribution. †Our code is publicly available at https://github. com/asappresearch/rationale-alignment. Can I find duplicate songs with different names? I have so many duplicate songs but they have different names. Is there an application I can use to find and delete the duplicates? How to find (and delete) duplicate files? I have a largish music collection and there are some duplicates in there. Is there any way to find duplicate files. At a minimum by doing a hash and seeing if two files have the same hash. … I’m happy using the command line if that is the easiest way. Figure 1: An illustration of a text matching rationale for detecting similar forum posts. trained to choose a small yet sufficient number of text spans, makes it easy to interpret the model’s prediction by examining the selected text. In contrast to classification, very little progress has been made toward rationalization for text matching models. The task of text matching encompasses a wide range of downstream applications, such as similar document recommendation (dos Santos et al., 2015), question answering (Lee et al., 2019), and fact checking (Thorne et al., 2018). Many of these applications can benefit from selecting and comparing information present in the provided documents. For instance, consider a similar post suggestion in a tech support forum as shown in Figure 1. The extracted rationales could provide deeper insights for forum users while also helping human experts validate and improve the model. In this work, we extend selective rationalization for text matching and focus on two new challenges that are not addressed in previous rationalization work. First, since text matching is fundamentally about comparing two text documents, rationale selection should be jointly modeled and optimized for matching. Second, the method should produce an interpretable alignment between the selected rationales showcasing their relations for the downstream prediction. This is very different from rationaliza5610 tion for text classification, where the selection is performed independently on each input text and an alignment between rationales is unnecessary. One popular method for aligning inputs is attention-based models (Bahdanau et al., 2015; Rocktäschel et al., 2015; Rush et al., 2015; Xu et al., 2015; Kim et al., 2018). However, a limitation of neural attention is that the alignment is rarely sparse, thus making it difficult to interpret how the numerous relations among the text spans lead to the model’s prediction. Recent work has explored sparse variants of attention (Martins and Astudillo, 2016; Niculae and Blondel, 2017; Lin et al., 2018; Malaviya et al., 2018; Niculae et al., 2018), but the number of non-zero alignments can still be large (Laha et al., 2018). Additionally, because of the heavy non-linearity following most attention layers, it is difficult to truly attribute the model’s predictions to the alignment, which means that attention-based models lack fidelity. We propose to address these challenges by directly learning sparse yet sufficient alignments using optimal transport (OT). We use OT as a building block within neural networks for determining the alignment, providing a deeper mathematical justification for the rationale selection. In order to produce more interpretable rationales, we construct novel variants of OT that have provable and controllable bounds on the sparsity of the alignments. Selecting and aligning text spans can be jointly optimized within this framework, resulting in optimal text matchings. Our model is fully end-to-end differentiable using the Sinkhorn algorithm (Cuturi, 2013) for OT and can be used with any neural network architecture. We evaluate our proposed methods on the StackExchange, MultiNews (Fabbri et al., 2019), e-SNLI (Camburu et al., 2018), and MultiRC (Khashabi et al., 2018) datasets, with tasks ranging from similar document identification to reading comprehension. Compared to other neural baselines, our methods show comparable task performance while selecting only a fraction of the number of alignments. We further illustrate the effectiveness of our method by analyzing how faithful the model’s predictions are to the selected rationales and by comparing the rationales to humanselected rationales provided by DeYoung et al. (2019) on the e-SNLI and MultiRC datasets. 2 Related Work Selective Rationalization. Model interpretability via selective rationalization has attracted considerable interest recently (Lei et al., 2016; Li et al., 2016; Chen et al., 2018a; Chang et al., 2019). Some recent work has focused on overcoming the challenge of learning in the selective rationalization regime, such as by enabling end-to-end differentiable training (Bastings et al., 2019) or by regularizing to avoid performance degeneration (Yu et al., 2019). Unlike these methods, which perform independent rationale selection on each input document, we extend selective rationalization by jointly learning selection and alignment, as it is better suited for text matching applications. Concurrent to this work, DeYoung et al. (2019) introduce the ERASER benchmark datasets with human-annotated rationales along with several rationalization models. Similarly to DeYoung et al. (2019), we measure the faithfulness of selected rationales, but our work differs in that we additionally emphasize sparsity as a necessary criterion for interpretable alignments. Alignment. Models can be made more interpretable by requiring that they explicitly align related elements of the input representation. In NLP, this is often achieved via neural attention (Bahdanau et al., 2015; Chen et al., 2015; Rush et al., 2015; Cheng et al., 2016; Parikh et al., 2016; Xie et al., 2017). Many variants of attention, such as temperature-controlled attention (Lin et al., 2018) and sparsemax (Martins and Astudillo, 2016), have been proposed to increase sparsity within the attention weights. However, it is still debatable whether attention scores are truly explanations (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019). Distance-based methods of aligning text have also been proposed (Li et al., 2019), but they similarly cannot guarantee sparsity or explainability. In this work, we explicitly optimize rationale selection and alignment as an integral part of the model and evaluate the degree to which the alignment explains the model’s predictions. Optimal Transport. The field of optimal transport (OT) began with Monge (1781), who explored the problem of determining a minimal cost assignment between sets of equal sizes. Kantorovich (1942) relaxed Monge’s problem to that of determining an optimal transport plan for moving probability mass between two probability distributions. 5611 Since the introduction of a differentiable OT solver by Cuturi (2013), OT has seen many applications in deep learning and NLP, such as topic embedding (Kusner et al., 2015), text generation (Chen et al., 2018b), cross-lingual word embedding alignment (Alvarez-Melis and Jaakkola, 2018a), graph embedding (Xu et al., 2019), and learning permutations (Mena et al., 2018). Peyré and Cuturi (2019) provides an overview of the computational aspects of OT. Unlike prior work, we develop novel additional constraints on the OT problem that produce particularly sparse and interpretable alignments. 3 Problem Formulation Consider two related text documents Dx and Dy. These documents are broken down into two sets of text spans, Sx and Sy, where the text spans can be words, sentences, paragraphs, or any other chunking of text. These text spans are then mapped to vector representations using a function g(·) (e.g., a neural network), which produces two sets of vectors representing the inputs, X = {xi}n i=1 = {g(Sx i )}n i=1 and Y = {yi}m i=1 = {g(Sy i )}m i=1, where xi, yi ∈Rd. We define an interpretable text matching as an alignment between the text spans in X and Y that explains the downstream prediction. Following common practice for previous self-explaining models (Lei et al., 2016; Alvarez-Melis and Jaakkola, 2018b), we specify that a desirable model must produce alignments satisfying the following criteria of interpretability. Explicitness. The alignment between text spans generated by the model should be an observable and understandable component of the model. Our model explicitly encodes the alignment between X and Y as a matrix P ∈Rn×m + where Pi,j indicates the degree to which xi and yj are aligned. Sparsity. In order for the alignment to be interpretable, the alignment matrix P must be sparse, meaning there are very few non-zero alignments between the text spans. A sparser alignment is easier to interpret as fewer alignments between text spans need to be examined. Faithfulness. An interpretable text matching is only meaningful if the model’s predictions are faithful to the alignment, meaning the predictions are directly dependent on it. Similarly to previous work, our model achieves faithfulness by using only the selected text spans (and their representations) for prediction. That is, the selected rationales and alignment should be sufficient to make accurate predictions. In addition to sufficiency, faithfulness also requires that the model output should be easily attributed to the choice of alignment1. For simple attribution, we define our model output as either a linear function of the alignment P or a shallow feed-forward network on top of P. In the following sections, we introduce optimal transport as a method to produce interpretable text matchings satisfying all three desiderata. 4 Background: Optimal Transport An instance of the discrete optimal transport problem consists of two point sets, X = {xi}n i=1 and Y = {yi}m i=1, with xi, yi ∈Rd. Additionally, X and Y are associated with probability distributions a ∈Σn and b ∈Σm, respectively, where Σn is the probability simplex Σn :=  p ∈Rn + : Pn i=1 pi = 1 . A cost function c(x, y) : Rd × Rd →R specifies the cost of aligning a pair of points x and y. The costs of aligning all pairs of points are summarized by the cost matrix C ∈Rn×m, where Ci,j = c(xi, yj). The goal of optimal transport is to compute a mapping that moves probability mass from the points of X (distributed according to a) to the points of Y (distributed according to b) so that the total cost of moving the mass between points is minimized according to the cost function c. This mapping is represented by a transport plan, or alignment matrix, P ∈Rn×m + , where Pi,j indicates the amount of probability mass moved from xi to yj. The space of valid alignment matrices is the set U(a, b) := {P ∈Rn×m + : P1m = a, PT1n = b} since P must marginalize out to the corresponding probability distributions a and b over X and Y . Under this formulation, the optimal transport problem is to find the alignment matrix P that minimizes the sum of costs weighted by the alignments: LC(a, b) := min P∈U(a,b)⟨C, P⟩= X i,j Ci,jPi,j . Note that this optimization is a linear programming problem over the convex set U(a, b). As a result, one of the extreme points of U(a, b) must be an optimal solution. 1For example, a linear model achieves strong attribution because the importance of each input feature is a constant parameter. 5612 4.1 Sparsity Guarantees Optimal transport is known to produce alignments that are especially sparse. In particular, the following propositions characterize the extreme point solution P∗of LC(a, b) and will be important in designing interpretable alignments in Section 5. Proposition 1 (Brualdi (2006), Thm. 8.1.2). Any extreme point P∗that solves LC(a, b) has at most n + m −1 non-zero entries. Proposition 2 (Birkhoff (1946)). If n = m and a = b = 1n/n, then every extreme point of U(a, b) is a permutation matrix. In other words, while the total number of possible aligned pairs is n × m, the optimal alignment P∗has O(n + m) non-zero entries. Furthermore, if n = m, then any extreme point solution P∗is a permutation matrix and thus only has O(n) nonzero entries. Figure 2 illustrates two alignments, including one that is a permutation matrix. Note that the optimal solution of LC(a, b) may not be unique in degenerate cases, such as when Ci,j is the same for all i, j. In such degenerate cases, any convex combination of optimal extreme points is a solution. However, it is possible to modify any OT solver to guarantee that it finds an extreme point (i.e., sparse) solution. We provide a proof in Appendix D, although experimentally we find that these modifications are unnecessary as we nearly always obtain an extreme point solution. 4.2 Sinkhorn Algorithm LC(a, b) is a linear programming problem and can be solved exactly with interior point methods. Recently, Cuturi (2013) proposed an entropyregularized objective that can be solved using a fully differentiable, iterative algorithm, making it ideal for deep learning applications. Specifically, the entropy-regularized objective is Lϵ C(a, b) := min P∈U(a,b)⟨C, P⟩−ϵH(P), where H(P) is the entropy of alignment matrix P and ϵ > 0 controls the amount of entropy regularization. In practice, ϵ can be set sufficiently small such that the solution to Lϵ C(a, b) is a good approximation of the solution to LC(a, b). Conveniently, Lϵ C(a, b) has a solution of the form P∗= diag(u) K diag(v), where K = e−C/ϵ and (u, v) ∈Rn + × Rm +. The vectors u and v can be determined using the Sinkhorn-Knopp matrix scaling algorithm (Sinkhorn and Knopp, 1967), (a) Alignment 1 (graph) (b) Alignment 2 (graph) (c) Alignment 1 (matrix) (d) Alignment 2 (matrix) Figure 2: An illustration of two different alignments between the points of X and Y , displayed both as a graph (top) and as an (unnormalized) alignment matrix P (bottom). Alignment 2 (right) corresponds to the special case where P is a permutation matrix, which produces an assignment between points in X and Y . which iteratively computes u ←a ⊘Kv and v ←b ⊘KTu where ⊘denotes element-wise division. Since each iteration consists only of matrix operations, the Sinkhorn algorithm can be used as a differentiable building block in deep learning models. For instance, in this work we take C as the distance between hidden representations given by a parameterized neural network encoder. Our model performs the Sinkhorn iterations until convergence (or a maximum number of steps) and then outputs the alignment P and the total cost ⟨C, P⟩as inputs to subsequent components of the model. 5 Learning Interpretable Alignments Using “vanilla” OT produces sparse alignments as guaranteed by Proposition 1, but the level of sparsity is insufficient to be interpretable. For instance, Alignment 1 in Figure 2 still has a significant number of non-zero alignment values. Motivated by this limitation, we propose to encourage greater sparsity and interpretability by constructing OT problems with additional constraints. General Recipe for Additional Constraints. Intuitively, an interpretable alignment should be sparse in two ways. First, each text span should be aligned to one or a very small number of spans in the other input text. Second, the total number of 5613 Figure 3: An illustration of the process of computing a one-to-two assignment between the points of X and Y . (a) The original points of X and Y . (b) ˆX and ˆY are constructed so that ˆX has two copies of each point in X and one dummy point and ˆY = Y . (c) OT is applied to ˆX and ˆY using uniform distributions a and b, which produces a one-to-one assignment between ˆX and ˆY . (d) A one-to-two assignment between X and Y is extracted from the one-to-one assignment between ˆX and ˆY . aligned pairs should be small enough so that the alignment can be easily examined by a human. We modify the OT problem in several ways to guarantee both aspects of sparsity. We start by forcing the solution to be an assignment, which is a one-to-one (or one-to-few) alignment such that every non-zero entry in the alignment matrix is equal, thereby simplifying interpretability. Alignment 2 in Figure 2 is an example of a one-to-one assignment. We also consider two other constructions, one that makes every text span in the alignment optional and another that directly limits the total number of aligned pairs. At the core of our construction are two types of auxiliary points that are added to the input point sets X and Y : • Replica points are exact copies of the original points in X or Y and can be used to control the sparsity of each point’s alignment. • Dummy points, also known as tariff-free reservoirs in prior work, are points that can be aligned to with 0 cost. Dummy points are used for absorbing unused probability mass in partial transport, where the constraints are relaxed to P1m ≤a and PT1n ≤b (Caffarelli and McCann, 2010; Figalli, 2010). The idea is to add an appropriate number of replica points and dummy points to create ˆX and ˆY with | ˆX| = | ˆY | = N for some N. Then by using uniform probability distributions a = b = 1N/N, Proposition 2 implies that one of the solutions to the OT problem will be a permutation matrix, i.e., a one-to-one assignment between the points in ˆX and ˆY . Since the points of X and Y are included in ˆX and ˆY , we can directly extract an assignment between X and Y from the assignment between ˆX and ˆY . Figure 3 illustrates the procedure. Note that the same solution can be attained without explicitly replicating any points by adjusting the probability distributions a and b, but we use replication for ease of exposition. Also note that the Sinkhorn algorithm is compatible with replica and dummy points and the model remains differentiable. We now describe three specific instances of this procedure that produce interpretable assignments with different sparsity patterns. Without loss of generality, we assume that n = |X| ≤|Y | = m. One-to-k Assignment. In this assignment, every point in the smaller set X should map to k points in the larger set Y , where k ∈{1, 2, . . . , ⌊m n ⌋}. This will result in a sparsity of kn ≤⌊m n ⌋n ≤m. To compute such an assignment, we set ˆY = Y and we construct ˆX with k copies of every point in X along with m −kn dummy points. Since | ˆX| = | ˆY | = m, applying OT to ˆX and ˆY produces a one-to-one assignment between ˆX and ˆY . As ˆX contains k replicas of each point in X, each unique point in X is mapped to k points in Y , thus producing a one-to-k assignment. The remaining m −kn mappings to dummy points are ignored. Relaxed One-to-k Assignment. In a relaxed one-to-k assignment, each point in X can map to at most k points in Y . As with the one-to-k assignment, we use k replicas of each point in X, but now we add m dummy points to X and kn dummy points to Y , meaning | ˆX| = | ˆY | = m + kn. Because of the number of replicas, this will produce at most a one-to-k assignment between X and Y . However, since there is now one dummy point in ˆY for every original point in ˆX, every original point has the option of aligning to a dummy point, resulting in at most k alignments. Note that in this case, the cost function must take both positive and negative values to prevent all original points from 5614 Constraint # R of X # D in X′ # D in Y ′ Sparsity (s) Vanilla 1 0 0 s ≤n + m −1 One-to-k k m −kn 0 s = kn ≤m R one-to-k k m kn s ≤kn ≤m Exact-k 1 m −k n −k s = k ≤n Table 1: Summary of constrained alignment construction and sparsity. # R is the number of replicas, # D is the number of dummy points, R one-to-k is the relaxed one-to-k assignment, and n = |X| ≤|Y | = m. mapping to the zero-cost dummy points. Exact-k Assignment. An exact-k assignment maps exactly k points in X to points in Y , where k ≤n. An exact-k assignment can be constructed by adding m −k dummy points to X and n −k dummy points to Y , meaning | ˆX| = | ˆY | = n + m −k. In this case, the cost function must be strictly positive so that original points map to dummy points whenever possible. This leaves exactly k alignments between original points in X and Y . Controllable Sparsity. Table 1 summarizes the differences between vanilla OT and the constrained variants. The freedom to select the type of constraint and the value of k gives fine-grained control over the level of sparsity. We evaluate the performance of all these variants in our experiments. 6 Experimental Setup Datasets. We evaluate our model and all baselines on four benchmarks: two document similarity tasks, MultiNews and StackExchange, and two classification tasks, e-SNLI and MultiRC. The eSNLI and MultiRC tasks come from the ERASER benchmark (DeYoung et al., 2019), which was created to evaluate selective rationalization models. We chose those two datasets as they are best suited for our text matching setup. StackExchange2 is an online question answering platform and has been used as a benchmark in previous work (dos Santos et al., 2015; Shah et al., 2018; Perkins and Yang, 2019). We took the June 2019 data dumps3 of the AskUbuntu and SuperUser subdomains of the platform and combined them to form our dataset. MultiNews (Fabbri et al., 2019) is a multidocument summarization dataset where 2 to 10 news articles share a single summary. We consider 2https://stackexchange.com/sites 3https://archive.org/details/ stackexchange Metric StackExchange MultiNews # docs 730,818 10,130 # similar doc pairs 187,377 22,623 Avg sents per doc 3.7 31 Max sents per doc 54 1,632 Avg words per doc 87 680 Vocab size 603,801 299,732 Table 2: Statistics for the document ranking datasets. every pair of articles that share a summary to be a similar document pair. Table 2 shows summary statistics of the two document ranking datasets. e-SNLI (Camburu et al., 2018) is an extended version of the SNLI dataset (Bowman et al., 2015) for natural language inference where the goal is to predict the textual entailment relation (entailment, neutral, or contradiction) between premise and hypothesis sentences. Human rationales are provided as highlighted words in the two sentences. MultiRC (Khashabi et al., 2018) is a reading comprehension dataset with the goal of assigning a label of true or false to a question-answer pair depending on information from a multi-sentence document. We treat the concatenated question and answer as one input and the document as the other input for text matching. Human rationales are provided as highlighted sentences in the document. For StackExchange and MultiNews, we split the documents into 80% train, 10% validation, and 10% test, while for e-SNLI and MultiRC, we use the splits from DeYoung et al. (2019). Metrics. We evaluate models according to the following three criteria. 1. Sparsity. To evaluate sparsity, we compute the average percentage of active alignments produced by each model, where an alignment is active if it exceeds a small threshold λ. This threshold is necessary to account for numerical imprecision in alignment values that are essentially zero. We set λ = 0.01 n×m unless otherwise specified, where n and m are the number of text spans in the two documents. 2. Sufficiency. If a model makes a correct prediction given only the rationales, then the rationales are sufficient. We evaluate sufficiency by providing the model only with active alignments and the aligned text representations and by masking non-active inputs (using the threshold λ). 5615 Figure 4: An illustration of our constrained OT model applied to two text documents. The final output of the model depends on a combination of the encodings, the cost matrix, and the alignment matrix. 3. Relevance. The relevance of rationales is determined by whether a human would deem them valid and relevant. We compute relevance using the token-level F1 scores of model-generated rationales compared to human-selected rationales on the e-SNLI and MultiRC datasets. We also perform a qualitative human evaluation. Baselines and Implementation Details. We use the decomposable attention model (Parikh et al., 2016) as our baseline attention model. In addition, we compare our model to two attention variants that are designed to encourage sparsity. The temperature attention variant applies a temperature term T in the softmax operator (Lin et al., 2018). The sparse attention variant adopts the sparsemax operator (Martins and Astudillo, 2016) in place of softmax to produce sparse attention masks. Our constrained OT model operates as illustrated in Figure 4. After splitting the input documents into sentences, our model independently encodes each sentence and computes pairwise costs between the encoded representations4. Dummy and replica encodings are added as needed for the desired type of constrained alignment. Our model then applies OT via the Sinkhorn algorithm to the cost matrix C to produce an optimal alignment matrix P. For the document ranking tasks, the final score is simply ⟨C, P⟩. For the classification tasks, we use the alignment P as a sparse mask to select encoded text representations, and we feed the aggregated representation to a shallow network to predict the output label, similar to our baseline attention models. For a fair comparison, our models and all baselines use the same neural encoder to encode text spans before the attention or OT operation is applied. Specifically, we use RoBERTa (Liu et al., 2019), a state-of-the-art pre-trained encoder, for 4For the e-SNLI dataset, where documents are single sentences, we use the contextualized token representations from the output of the sentence encoder following previous work (Thorne et al., 2019). 0 5 10 15 20 25 cost attention sparse attention attention(T=0.01) 0 10 0 5 10 15 20 25 OT 0 10 OT (1:1) 0 10 OT (relaxed 1:1) 0 10 OT (exact k=4) Figure 5: Attention or alignment heatmaps generated by different methods on a synthetic 30×20 cost matrix. the StackExchange and MultiRC dataset. We use use bi-directional recurrent encoders (Lei et al., 2018) for the MultiNews and e-SNLI datasets5. The value of k for the OT constraints is chosen for each dataset by visually inspecting alignments in the validation set, though model performance is robust to the choice of k. In order to compare our models’ rationales to human annotations, we use a binary thresholding procedure as described in Appendix C. We report results averaged over 3 independent runs for each model. Additional implementation details are provided in Appendix C. 7 Results Synthetic Visualizations. Before experimenting with the datasets, we first analyze the alignments obtained by different methods on a synthetic cost matrix in Figure 5. As shown in the figure, all attention baselines struggle to produce sufficiently sparse alignments, even with the use of a small temperature or the sparsemax operator. In contrast, our methods are very sparse, as a result of the provable sparsity guarantees of the constrained alignment 5The input text in the MultiNews dataset is too long for large BERT models. The e-SNLI dataset in ERASER contains human-annotated rationales at the word level while BERT models use sub-word tokenization. 5616 StackExchange MultiNews Model AUC MAP MRR P@1 # Align. AUC MAP MRR P@1 # Align. OT 98.0 91.2 91.5 86.1 8 97.5 96.8 98.1 97.2 48 OT (1:1) 97.7 89.7 90.0 83.9 4 97.8 96.7 97.9 96.8 19 OT (relaxed 1:1) 97.8 88.5 88.9 81.8 3 93.1 93.2 96.0 94.1 19 OT (exact k) 98.1 92.3 92.5 87.8 2 96.4 96.3 97.7 96.6 6 Attention 98.2 92.4 92.5 88.0 23 97.8 96.4 97.6 96.3 637 Attention (T = 0.1) 98.2 92.4 92.5 87.7 22 98.0 97.0 98.1 97.1 634 Attention (T = 0.01) 97.9 89.7 89.9 83.5 8 97.9 96.9 98.0 97.0 594 Sparse Attention 98.0 92.5 92.6 88.3 19 98.2 97.7 98.1 97.1 330 Table 3: Performance of all models on the StackExchange and MultiNews datasets. We report ranking results and the average number of active alignments (# Align.) used. For our method with the exact k alignment constraint, we set k = 2 for StackExchange and k = 6 for MultiNews, respectively. problem. For instance, the relaxed one-to-k assignment produces fewer active alignments than either the number of rows or columns, and the exact-k assignment finds exactly k = 4 alignments. StackExchange & MultiNews. Table 3 presents the results of all models on the StackExchange and MultiNews datasets. We report standard ranking and retrieval metrics including area under the curve (AUC), mean average precision (MAP), mean reciprocal rank (MRR), and precision at 1 (P@1). The results highlight the ability of our methods to obtain high interpretability while retaining ranking performance comparable to strong attention baselines. For example, our model is able to use only 6 aligned pairs to achieve a P@1 of 96.6 on the MultiNews dataset. In comparison, the sparse attention model obtains a P@1 of 97.1 but uses more than 300 alignment pairs and is thus difficult to interpret. Model complexity and speed on the StackExchange dataset are reported in Table 7 in Appendix C. e-SNLI. Table 4 shows model performance on the e-SNLI dataset. As with document similarity ranking, we evaluate classification accuracy when the model uses only the active alignments. This is to ensure faithfulness, meaning the model truly and exclusively uses the rationales to make predictions. Since attention is not explicitly trained to use only active alignments, we also report the accuracy of attention models when using all attention weights. As shown in the table, the accuracy of attention methods decreases significantly when we remove attention weights other than those deemed active by the threshold λ. In contrast, our model retains high accuracy even with just the active alignments since sparsity is naturally modeled in our contrained optimal transport framework. Figure 6 visualizes the Figure 6: Model accuracy on the e-SNLI dataset when using different percentages of tokens as rationales. The attention model values are obtained using different thresholds λ to clip the attention weights while the values for our exact-k model correspond to k = 1, 2, 3, 4. change to model accuracy when different proportions of tokens are selected by the models. Table 4 also presents the token-level F1 scores for the models’ selected rationales compared to human-annotated rationales. Note that the rationale annotations for this task are designed for token selection rather than alignment and are sometimes only on one of the input sentences. Nevertheless, our model obtains F1 scores on par with recent work (DeYoung et al., 2019; Thorne et al., 2019). MultiRC. Table 5 presents the results on the MultiRC dataset. Compared to attention models, our OT-based models achieve similar task performance with a higher rationale F1 score, despite selecting fewer rationales. The model variants from DeYoung et al. (2019) in general achieve higher task F1 performance. However, their unsupervised model suffers from degeneration due to the challenges of end-to-end training without rationale supervision. We also create supervised versions of our models that learn from the human-annotated rationales 5617 Model Accuracy Task F1 % Token Premise F1 Hypothesis F1 P&H F1 OT (relaxed 1:1) 82.4 82.4 69.1 25.1 43.7 34.6 OT (exact k = 4) 81.4 81.4 38.7 24.3 45.0 35.4 OT (exact k = 3) 81.3 81.4 29.6 28.6 50.0 39.8 OT (exact k = 2) 81.3 81.3 21.6 24.8 30.6 27.8 Attention 76.3 (82.1) 76.2 37.9 26.6 37.6 32.2 Attention (T = 0.1) 73.9 (81.5) 73.9 33.0 28.4 44.1 36.5 Attention (T = 0.01) 70.2 (81.4) 69.9 30.6 26.1 38.0 32.2 Sparse Attention 63.5 (75.0) 63.1 12.5 8.8 24.5 17.2 Thorne et al. (2019) - (81.0) 22.2 57.8 †Lei et al. (2016) 90.3 37.9 †Lei et al. (2016) (+S) 91.7 69.2 †Bert-To-Bert (+S) 73.3 70.1 Table 4: e-SNLI accuracy, macro-averaged task F1, percentage of tokens in active alignments, and token-level F1 of the model-selected rationales compared to human-annotated rationales for the premise, hypothesis, and both (P&H F1). Accuracy numbers in parentheses use all attention weights, not just active ones. (+S) denotes supervised learning of rationales. † denotes results from DeYoung et al. (2019). Model Task F1 % Token R. F1 OT (1:1) 62.3 21.6 33.7 OT (relaxed 1:1) 62.0 23.1 32.1 OT (relaxed 1:2) 62.2 24.0 35.9 OT (exact k = 2) 62.5 25.8 34.7 OT (exact k = 3) 62.0 24.6 37.3 Attention 62.6 44.7 21.3 Attention (T = 0.1) 62.6 34.7 18.2 Attention (T = 0.01) 62.7 30.1 17.3 Sparse Attention 59.3 31.3 21.2 †Lei et al. (2016) 64.8 0.0 OT (1:1) (+S) 61.5 19.0 50.0 OT (relaxed 1:1) (+S) 60.6 19.4 45.4 OT (relaxed 1:2) (+S) 61.5 28.7 46.8 OT (exact k = 2) (+S) 61.0 18.9 51.3 OT (exact k = 3) (+S) 60.9 23.1 49.3 †Lei et al. (2016) (+S) 65.5 45.6 †Lehman et al. (2019) (+S) 61.4 14.0 †Bert-To-Bert (+S) 63.3 41.2 Table 5: MultiRC macro-averaged task F1, percentage of tokens used in active alignments, and token-level F1 of the model-selected rationales compared to humanannotated rationales (R. F1). (+S) denotes supervised learning of rationales. † denotes results from DeYoung et al. (2019). during training. These supervised models achieve comparable task performance to and better rationale F1 scores than models from DeYoung et al. (2019), demonstrating the strength of a sparse rationale alignment. Supervised training details can be found in Appendix C. Qualitative Studies. We performed a human evaluation on documents from StackExchange that reveals that our model’s alignments are preferred to attention. The results of the human evaluation, along with examples of StackExchange and e-SNLI alignments, are provided in Appendix A. 8 Conclusion Balancing performance and interpretability in deep learning models has become an increasingly important aspect of model design. In this work, we propose jointly learning interpretable alignments as part of the downstream prediction to reveal how neural network models operate for text matching applications. Our method extends vanilla optimal transport by adding various constraints that produce alignments with highly controllable sparsity patterns, making them particularly interpretable. Our models show superiority by selecting very few alignments while achieving text matching performance on par with alternative methods. As an added benefit, our method is very general in nature and can be used as a differentiable hard-alignment module in larger NLP models that compare two pieces of text, such as sequence-to-sequence models. Furthermore, our method is agnostic to the underlying nature of the two objects being aligned and can therefore align disparate objects such as images and captions, enabling a wide range of future applications within NLP and beyond. Acknowledgments We thank Jesse Michel, Derek Chen, Yi Yang, and the anonymous reviewers for their valuable discussions. We thank Sam Altschul, Derek Chen, Amit Ganatra, Alex Lin, James Mullenbach, Jen Seale, Siddharth Varia, and Lei Xu for providing the human evaluation. 5618 References David Alvarez-Melis and Tommi Jaakkola. 2018a. Gromov-Wasserstein alignment of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1881–1890, Brussels, Belgium. Association for Computational Linguistics. David Alvarez-Melis and Tommi Jaakkola. 2018b. Towards robust interpretability with self-explaining neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7775–7784. Curran Associates, Inc. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations. Joost Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963–2977, Florence, Italy. Association for Computational Linguistics. Steven Bird, Edward Loper, and Ewan Klein. 2009. Natural Language Processing with Python. O’Reilly Media Inc. Garrett Birkhoff. 1946. Tres observaciones sobre el algebra lineal. Universidad Nacional de Tucumán Revista Series A, 5:147–151. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Yann Brenier. 1987. Décomposition polaire et réarrangement monotone des champs de vecteurs. C. R. Acad. Sci. Paris Sér I Math., 305:805–808. Richard A. Brualdi. 1982. Notes of the birkhoff algorithm for doubly stochastic matrices. Canadian Mathematical Bulletin, 25:191–199. Richard A Brualdi. 2006. Combinatorial Matrix Classes, volume 108. Cambridge University Press. Luis A. Caffarelli and Robert J. McCann. 2010. Free boundaries in optimal transport and mongeampère obstacle problems. Annals of Mathematics, 171:673–730. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 9539–9549. Curran Associates, Inc. Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2019. A game theoretic approach to class-wise selective rationalization. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’ Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 10055– 10065. Curran Associates, Inc. Jianbo Chen, Le Song, Martin Wainwright, and Michael Jordan. 2018a. Learning to explain: An information-theoretic perspective on model interpretation. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 883–892, Stockholmsmässan, Stockholm Sweden. PMLR. Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, and Ram Nevatia. 2015. Abccnn: An attention based convolutional neural network for visual question answering. arXiv preprint arXiv:1511.05960. Liqun Chen, Shuyang Dai, Chenyang Tao, Haichao Zhang, Zhe Gan, Dinghan Shen, Yizhe Zhang, Guoyin Wang, Ruiyi Zhang, and Lawrence Carin. 2018b. Adversarial text generation via featuremover’s distance. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 4666–4677. Curran Associates, Inc. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 551–561, Austin, Texas. Association for Computational Linguistics. Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2292–2300. Curran Associates, Inc. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models. arXiv preprint arXiv:1911.03429. Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th 5619 Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics. Alessio Figalli. 2010. The optimal partial transport problem. Archive for Rational Mechanics and Analysis, 195:533–560. Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. arXiv preprint arXiv:1902.10186. Leonid Kantorovich. 1942. On the transfer of masses (in russian). Doklady Akademii Nauk, 37:227–229. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252–262, New Orleans, Louisiana. Association for Computational Linguistics. Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. 2018. Structured attention networks. International Conference on Learning Representations. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 957–966, Lille, France. PMLR. Anirban Laha, Saneem Ahmed Chemmengath, Priyanka Agrawal, Mitesh Khapra, Karthik Sankaranarayanan, and Harish G Ramaswamy. 2018. On controllable sparse alternatives to softmax. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 6422–6432. Curran Associates, Inc. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics. Eric Lehman, Jay DeYoung, Regina Barzilay, and Byron C. Wallace. 2019. Inferring which medical treatments work from reports of clinical trials. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3705–3717, Minneapolis, Minnesota. Association for Computational Linguistics. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117, Austin, Texas. Association for Computational Linguistics. Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. 2018. Simple recurrent units for highly parallelizable recurrence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4470–4481, Brussels, Belgium. Association for Computational Linguistics. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220. Qiuchi Li, Benyou Wang, and Massimo Melucci. 2019. CNM: An interpretable complex-valued network for matching. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4139–4148, Minneapolis, Minnesota. Association for Computational Linguistics. Junyang Lin, Xu Sun, Xuancheng Ren, Muyu Li, and Qi Su. 2018. Learning when to concentrate or divert attention: Self-adaptive attention temperature for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2985–2990, Brussels, Belgium. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Chaitanya Malaviya, Pedro Ferreira, and André F. T. Martins. 2018. Sparse and constrained attention for neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 370–376, Melbourne, Australia. Association for Computational Linguistics. Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1614–1623, New York, New York, USA. PMLR. Gonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. 2018. Learning latent permutations with gumbel-sinkhorn networks. International Conference on Learning Representations. Gaspard Monge. 1781. Mémoir sur la théorie des déblais et des remblais. Histoire de l’Académie Royale des Sciences, pages 666–704. 5620 Vlad Niculae and Mathieu Blondel. 2017. A regularized framework for sparse and structured neural attention. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 3338–3348. Curran Associates, Inc. Vlad Niculae, André F. T. Martins, Mathieu Blondel, and Claire Cardie. 2018. Sparsemap: Differentiable sparse structured inference. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 3799–3808. PMLR. Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255, Austin, Texas. Association for Computational Linguistics. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. NIPS 2017 Autodiff Workshop. Hugh Perkins and Yi Yang. 2019. Dialog intent induction with deep multi-view clustering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4014–4023. Association for Computational Linguistics. Gabriel Peyré and Marco Cuturi. 2019. Computational optimal transport. Foundations and Trends in Machine Learning, 11:335–607. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “why should i trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, page 1135–1144, New York, NY, USA. Association for Computing Machinery. Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Koˇcisk`y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664. Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 2662–2670. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Association for Computational Linguistics. Cícero dos Santos, Luciano Barbosa, Dasha Bogdanova, and Bianca Zadrozny. 2015. Learning hybrid representations to retrieve semantically equivalent questions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 694–699, Beijing, China. Association for Computational Linguistics. Bernhard Schmitzer. 2016. Stabilized sparse scaling algorithms for entropy regularized transport problems. SIAM Journal on Scientific Computing, 41:A1443– A1481. Darsh Shah, Tao Lei, Alessandro Moschitti, Salvatore Romeo, and Preslav Nakov. 2018. Adversarial domain adaptation for duplicate question detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1056–1063, Brussels, Belgium. Association for Computational Linguistics. Richard Sinkhorn and Paul Knopp. 1967. Concerning nonnegative matrices and doubly stochastic matrices. Pacific J. Math, 21:343–348. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, page 3319–3328. JMLR.org. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2019. Generating token-level explanations for natural language inference. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 963–969, Minneapolis, Minnesota. Association for Computational Linguistics. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 11–20, Hong Kong, China. Association for Computational Linguistics. Qizhe Xie, Xuezhe Ma, Zihang Dai, and Eduard Hovy. 2017. An interpretable knowledge transfer model for knowledge base completion. In Proceedings of the 55th Annual Meeting of the Association for 5621 Computational Linguistics (Volume 1: Long Papers), pages 950–962, Vancouver, Canada. Association for Computational Linguistics. Hongteng Xu, Dixin Luo, Hongyuan Zha, and Lawrence Carin Duke. 2019. Gromov-Wasserstein learning for graph matching and node embedding. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 6932–6941, Long Beach, California, USA. PMLR. Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, page 2048–2057. JMLR.org. Mo Yu, Shiyu Chang, Yang Zhang, and Tommi Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4094–4103, Hong Kong, China. Association for Computational Linguistics. Ye Zhang, Iain Marshall, and Byron C. Wallace. 2016. Rationale-augmented convolutional neural networks for text classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 795–804, Austin, Texas. Association for Computational Linguistics. Appendix A Qualitative Study Human Evaluation. We performed a human evaluation of rationale quality on the StackExchange dataset. We asked 8 annotators to rate 270 rationale examples selected from three models including OT (exact k = 2), Attention (T = 0.01), and Sparse Attention. For each example, we presented the human annotator with a pair of similar documents along with the extracted alignment rationales. The annotator then assigned a score of 0, 1, or 2 for each of the following categories: redundancy, relevance, and overall quality. A higher score is always better (i.e., less redundant, more relevant, higher overall quality). For attention-based models, we selected the top 2 or 3 aligned pairs (according to the attention weights) such that the number of pairs is similar to that of the OT (exact k = 2) model. The results are shown (a) Redundancy (b) Relevance (c) Overall quality Figure 7: Human evaluation of rationales extracted from StackExchange document pairs using metrics of redundancy, relevance, and overall quality. Scores are either 0 (red), 1 (gray), or 2 (blue) and higher is better. The length of each bar segment indicates the proportion of examples with that score, and the number to the right of each bar is the average score. in Figure 7. Attention models have more redundancy as well as higher relevance. This is not surprising since selecting redundant alignments can result in fewer mistakes. In comparison, our OT-based model achieves much less redundancy and a better overall score. Example Rationales. Figure 8 shows examples of rationales generated from our OT (exact k = 2) model on the StackExchange dataset. Our extracted rationales effectively identify sentences with similar semantic meaning and capture the major topics in the AskUbuntu subdomain. Figure 9 similarly shows example rationales on the e-SNLI dataset. B Additional Results MultiRC Experiments with Recurrent Encoder. Table 6 shows the experimental results on the MultiRC dataset when we replace the RoBERTa encoder (results shown in Table 5) with the bi-directional simple recurrent unit (SRU) encoder (Lei et al., 2018) that we used for the MultiNews and e-SNLI datasets. In the unsupervised rationale learning setting, the 5622 Figure 8: Examples of extracted rationales from the StackExchange dataset using the OT (exact k = 2) model. Each rationale alignment is displayed visually as lines connecting pairs of sentences from the two text documents. Figure 9: Examples of extracted rationales from the e-SNLI dataset using the OT (exact k = 3) model. We show two examples of entailment (left column), neutral (middle column) and contradiction (right column). 5623 Model Task F1 % Token R. F1 OT (1:1) 59.5 20.3 24.2 OT (1:2) 60.1 28.0 26.5 OT (relaxed 1:1) 59.7 13.6 19.5 OT (relaxed 1:2) 60.2 24.7 29.1 OT (exact k = 2) 61.0 15.2 22.7 Attention 61.4 33.2 15.7 Attention (T = 0.1) 61.0 34.7 17.5 Attention (T = 0.01) 61.0 34.4 18.5 Sparse Attention 60.7 37.5 25.0 OT (1:1) (+S) 62.1 20.5 48.1 OT (1:2) (+S) 60.0 31.3 46.0 OT (relaxed 1:1) (+S) 60.3 18.2 46.2 OT (relaxed 1:2) (+S) 60.6 25.2 44.9 OT (exact k = 2) (+S) 61.2 16.7 48.7 Table 6: MultiRC macro-averaged task F1, percentage of tokens used in active alignments, and token-level F1 of the model-selected rationales compared to humanannotated rationales (R. F1). (+S) denotes supervised learning of rationales. All models use a simplified recurrent unit (Lei et al., 2018) encoder. SRU alignment models achieve lower task F1 score and lower rationale token F1 score than the RoBERTa counterpart. Nevertheless, our models still outperform attention-based models, the unsupervised rationale extraction baseline (Lei et al., 2016) implemented in DeYoung et al. (2019), and even one supervised rationale model (Lehman et al., 2019) implemented in DeYoung et al. (2019). In the supervised rationale learning setting, the SRU alignment models achieve performance comparable to that of the RoBERTa alignment models. Both alignment models achieve higher rationale F1 score than the baseline models, regardless of the encoder architecture, demonstrating the strength of our model for learning rationales. C Implementation Details Text Span Extraction. Sentences are extracted from the documents using the sentence tokenizer from the nltk Python package6 (Bird et al., 2009). Text Embeddings. For the bi-directional recurrent encoder, we use pre-trained fastText (Bojanowski et al., 2017) word embeddings, while for the RoBERTa encoder, we use its own pre-trained BPE embeddings. 6https://www.nltk.org/ OT Cost Functions. We use negative cosine similarity as the cost function for our OT (relaxed 1:1) model to achieve both positive and negative values in the cost matrix. For all the other OT variants, we use cosine distance, which is non-negative. We found that cosinebased costs work better than euclidean and dot-product costs for our model. Sinkhorn Stability. To improve the computational stability of the Sinkhorn algorithm, we use an epsilon scaling trick (Schmitzer, 2016) which repeatedly runs the Sinkhorn iterations with progressively smaller values of epsilon down to a final epsilon of 10−4. Loss Function. For the document ranking tasks, MultiNews and StackExchange, we train our model using a contrastive loss based on the difference between the optimal transport costs of aligning similar and dissimilar documents. Given a document D, if C+ is the cost matrix between D and a similar document and {C− i }l i=1 are the cost matrices between D and l dissimilar documents, then the loss is defined as max i∈[[l]]  max(⟨C+, P+⟩−⟨C− i , P− i ⟩+ ∆, 0)  , where P+ and P− i are the OT alignment matrices computed by the Sinkhorn algorithm for C+ and C− i , respectively, and where ∆is the hinge margin. For the classification tasks, e-SNLI and MultiRC, we use the standard cross entropy loss applied to the output of a shallow network that processes the cost and alignment matrices. Specifically, our model implementation is similar to the decomposable attention model (Parikh et al., 2016), in which the attention-weighted hidden representation is given to a simple 2-layer feed-forward network to generate the classification prediction. We similarly use the alignment output P from OT as the weight mask (which will be sparse) to select and average over hidden representations. Comparison to Human-Annotated Rationales. The e-SNLI and MultiRC datasets from the ERASER benchmark provide human rationale annotations, enabling a comparison of model5624 selected rationales to human-annotated rationales. However, the rationales are provided independently for each of the two input documents without alignment information. Therefore, in order to compare our models’ rationales to the human annotations, we need to convert our pairwise alignments to independent binary selection rationales for each of the two input documents. This can be accomplished via thresholding, as described below. Given an alignment matrix P ∈Rn×m + aligning documents X = {xi}n i=1 and Y = {yi}m i=1, the goal is to determine two binary rationale selection vectors Rx ∈{0, 1}n and Ry ∈ {0, 1}m indicating which text spans in X and Y are selected. Each entry of Rx and Ry is computed as Rx i = 1[Pm j=1 1[Pi,j > δ] > 0] and Ry j = 1[Pn i=1 1[Pi,j > δ] > 0], where 1[·] is an indicator function. Intuitively, this means that Rx i = 1 if Pi,j > δ for any j = 1, . . . , m, i.e., if at least one text span in Y aligns to text span xi, and Rx i = 0 otherwise. The meaning is the equivalent for Ry j . The binary selection rationales Rx and Ry can then be compared against the humanannotated rationales as measured by the F1 score. The threshold δ is selected based on the δ which produces the greatest F1 score on the validation set. Supervised Rationale Training. Our models are designed to learn alignments in an unsupervised manner, but it is possible to alter them to learn from human-annotated rationales in a supervised way. We do this by constructing a soft version of the independent binary rationale selections described in the previous section. First, we compute eRx i = Pm j=1 Pi,j and eRy j = Pn i=1 Pi,j as soft rationale indicators. We then compute the cross entropy loss Lr between these soft predictions and the human-annotated rationales. This loss is combined with the usual task classification cross entropy loss Lc to form the total loss L = α · Lc + (1 −α) · Lr, where α is a hyperparameter. In our experiments, we set α = 0.2. Model # Parameters Train time (s) Infer time (s) OT 2.0M 600 8.0e-3 Attention 2.4M 180 4.9e-3 Table 7: Number of parameters, training time, and inference time for models on the StackExchange dataset. Training time represents training time per epoch while inference time represents the average time to encode and align one pair of documents. All models use an NVIDIA Tesla V100 GPU. Model Complexity and Speed. Table 7 compares the model complexity and model speed between OT-based and attention-based models with bi-directional recurrent encoders (Lei et al., 2018). Our model does not add any trainable parameters on top of the text encoder, making it smaller than its attention-based counterparts, which use additional parameters in the attention layer. Our model is 3.3 times slower than attention during training and 1.6 times slower than attention during inference due to the large number of iterations required by the Sinkhorn algorithm for OT. Additional Details. We use the Adam (Kingma and Ba, 2014) optimizer for training. Hyperparameters such as the hinge loss margin, dropout rate, and learning rate are chosen according to the best validation set performance. All models were implemented with PyTorch (Paszke et al., 2017). Table 7 shows model complexity, training time, and inference time for the StackExchange dataset. D Obtaining Permutation Matrix Solutions to Optimal Transport Problems Our goal in this paper is to create an optimal transport problem that results in an assignment between two sets X and Y . The core idea is to create an expanded optimal transport problem between augmented sets X′ and Y ′ such that |X′| = |Y ′| = n. Then Proposition 2 implies that the optimal transport problem with a = b = 1n/n has a permutation matrix solution. This permutation matrix represents a oneto-one assignment between X′ and Y ′ from which we can extract an assignment between X and Y . However, a problem with this approach is 5625 that the permutation matrix solution might not be the only solution. In general, linear programming problems may have many solutions, meaning we are not guaranteed to find a permutation matrix solution even if it exists. Since we require a permutation matrix solution in order to obtain our desired sparsity bounds, we are therefore interested in methods for identifying the permutation matrix solution even when other solutions exist. Although these methods were not necessary for our experiments, since the Sinkhorn algorithm almost always found a permutation matrix solution for our inputs, we present these methods to ensure that the techniques presented in this paper can be used even in cases with degenerate solutions. One option is to avoid the problem altogether by using cost functions that are guaranteed to produce unique solutions. For example, Brenier (1987) showed that under some normality conditions, the cost function c(x, y) = ||x −y||2, i.e., the Euclidean distance, produces OT problems with unique solutions. However, it is sometimes preferable to use cost functions with different properties (e.g., bounded range, negative cost, etc.) which may not guarantee a unique OT solution. To find unique solutions for general cost functions, one method is to first find any solution to the optimal transport problem (e.g., by using the Sinkhorn algorithm) and then to use Birkhoff’s algorithm (Brualdi, 1982) to express that solution as a convex combination of permutation matrices. Since the original solution is optimal, every permutation matrix that is part of the convex combination must also be optimal (otherwise the cost could be reduced further by removing the suboptimal matrix from the combination and rescaling the others). Thus we can pick any of the permutation matrices in the convex combination as our optimal permutation matrix solution. However, since Birkhoff’s algorithm is not differentiable, these procedure cannot be used in end-to-end training and can only be applied at inference time. An alternate method, which preserves the differentiability of our overall approach, is to solve a modified version of the linear programming problem that is guaranteed to have a unique permutation matrix solution that closely approximates the solution the original problem. Theorem 1 demonstrates that by adding random iid noise of at most ϵ to each element of the cost matrix C to create a new cost matrix Cϵ, then with probability one, the resulting linear programming problem on Cϵ has a unique permutation matrix solution Pϵ∗ which costs at most ϵ more than the true optimal solution P∗. Thus, we can obtain a permutation matrix solution for C that is arbitrarily close to optimal. Furthermore, Corollary 1 implies that if we know that the difference in cost between the optimal permutation matrix and the second best permutation matrix is δ, then we can choose ϵ < δ to ensure that we actually find an optimal permutation matrix. Theorem 1. Consider LC(a, b) = argmin P∈U(a,b) ⟨C, P⟩, where C ∈ Rn×n is arbitrary and a = b = 1n/n. Let Eϵ ∈Rn×n be such that Eϵ ij iid ∼U([0, ϵ]) where ϵ > 0 and U is the uniform distribution. Define Cϵ = C + Eϵ. Let P∗= argmin P∈U(a,b) ⟨C, P⟩ and Pϵ∗= argmin P∈U(a,b) ⟨Cϵ, P⟩. Then 1. 0 ≤⟨C, Pϵ∗⟩−⟨C, P∗⟩≤ϵ. 2. With probability 1, Pϵ∗is unique and is a permutation matrix. Proof. We begin by proving result 1. Since P∗is optimal for C, it must be true that ⟨C, P⟩≤⟨C, P′⟩for any P′ ∈U(a, b). As Pϵ∗∈U(a, b), we thus have ⟨C, P∗⟩≤ ⟨C, Pϵ∗⟩and so ⟨C, Pϵ∗⟩−⟨C, P∗⟩≥0. To prove the other side of the inequality, first note that for any P ∈U(a, b), we have ⟨Eϵ, P⟩≥0 since Eϵ ij, Pij ≥0 for all i, j. Combining this with the optimality of Pϵ∗for 5626 Cϵ, we can see that ⟨C, Pϵ∗⟩−⟨C, P∗⟩ ≤⟨C, Pϵ∗⟩+ ⟨Eϵ, Pϵ∗⟩−⟨C, P∗⟩ = ⟨C + Eϵ, Pϵ∗⟩−⟨C, P∗⟩ = ⟨Cϵ, Pϵ∗⟩−⟨C, P∗⟩ ≤⟨Cϵ, P∗⟩−⟨C, P∗⟩ = ⟨Cϵ −C, P∗⟩ = ⟨C + Eϵ −C, P∗⟩ = ⟨Eϵ, P∗⟩ ≤ϵ, where the final inequality holds because the entries of P∗are positive and sum to one and the entries of Eϵ are at most ϵ. Thus results 1 holds. Now we will prove result 2. Since we are solving a linear programming problem over a bounded, convex set U(1n/n, 1n/n), every solution is a convex combination of optimal extremal points. Thus, a linear program has a unique optimal solution if and only if exactly one of the extremal points is optimal. By Birkhoff’s theorem (Birkhoff, 1946), the set of extremal points of U(1n/n, 1n/n) is equal to the set of permutation matrices. Therefore, if only a single permutation matrix Pσ is optimal for LCϵ(a, b), then Pσ is the unique solution. The goal is thus to show that the event that any two permutation matrices Pσi and Pσj corresponding to permutations σi ̸= σj both solve LCϵ(a, b) has probability zero. The union bound gives P(∪σi̸=σj Pσi, Pσj both solve LCϵ(a, b)) ≤ X σi̸=σj P(Pσi, Pσj both solve LCϵ(a, b)). The number of pairs σi and σj of distinct permutations of n items is n! 2  < ∞so the sum is over a finite number of probabilities. Thus, if we can show that P(Pσi, Pσj both solve LCϵ(a, b)) = 0 for any σi ̸= σj, then the sum will also be zero and result 2 will hold. To show that this is the case, take any two permutations matrices Pσ1 and Pσ2 for σ1 ̸= σ2 which are both optimal for LCϵ(a, b). Then it must be true that n⟨Cϵ, Pσ1⟩= n⟨Cϵ, Pσ2⟩ or equivalently n n X i,j=1 Cϵ ijPσ1 ij = n n X k,l=1 Cϵ klPσ2 kl . (1) Let I1 ⊆{1, . . . , n} × {1, . . . , n} be the indices (i, j) where Pσ1 ij = 1 n and Pσ2 ij = 0 and let I2 ⊆{1, . . . , n} × {1, . . . , n} be the indices (i, j) where Pσ2 ij = 1 n and Pσ1 ij = 0. Thus, for any (i, j) /∈I1 ∪I2, P σ1 ij = P σ2 ij and so the terms corresponding to that (i, j) cancel in equation (1). This means that Equation (1) can be rewritten as n X i,j∈I1∪I2 Cϵ ijPσ1 ij = n X k,l∈I1∪I2 Cϵ klPσ2 kl or equivalently, using the definition of I1 and I2, as X i,j∈I1 Cϵ ij = X k,l∈I2 Cϵ kl. Using the definition of Cϵ, this becomes X i,j∈I1 Cij + Eϵ ij = X k,l∈I2 Ckl + Eϵ kl. Grouping terms, we get X i,j∈I1 Eϵ ij − X k,l∈I2 Eϵ kl = X k,l∈I2 Ckl − X i,j∈I1 Cij. Since the LHS is a sum/difference of independent continuous random variables and the RHS is a constant, the event that the LHS equals the RHS has probability zero. Thus, the event that any two permutation matrices Pσ1 and Pσ2 with σ1 ̸= σ2 are both optimal for LCϵ(a, b) has probability zero. Corollary 1. If ⟨C, Pσ⟩−⟨C, P∗⟩= 0 or ⟨C, Pσ⟩−⟨C, P∗⟩> ϵ for every permutation matrix Pσ, then the permutation matrix Pϵ∗is an exact solution to LC(a, b). Proof. Theorem 1 says that that ⟨C, Pϵ∗⟩− ⟨C, P∗⟩≤ϵ. Since Pϵ∗is a permutation matrix, the assumptions in this corollary thus imply that that ⟨C, Pϵ∗⟩−⟨C, P∗⟩= 0, meaning Pϵ∗is an exact solution to LC(a, b).
2020
496
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5627–5634 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5627 Benefits of Intermediate Annotations in Reading Comprehension Dheeru Dua University of California, Irvine, CA, USA [email protected] Sameer Singh University of California, Irvine, CA, USA [email protected] Matt Gardner Allen Institute for Artificial Intelligence, Irvine, CA, USA [email protected] Abstract Complex, compositional reading comprehension datasets require performing latent sequential decisions that are learned via supervision from the final answer. A large combinatorial space of possible decision paths that result in the same answer, compounded by the lack of intermediate supervision to help choose the right path, makes the learning particularly hard for this task. In this work, we study the benefits of collecting intermediate reasoning supervision along with the answer during data collection. We find that these intermediate annotations can provide two-fold benefits. First, we observe that for any collection budget, spending a fraction of it on intermediate annotations results in improved model performance, for two complex compositional datasets: DROP and Quoref. Second, these annotations encourage the model to learn the correct latent reasoning steps, helping combat some of the biases introduced during the data collection process. 1 Introduction Recently many reading comprehension datasets requiring complex and compositional reasoning over text have been introduced, including HotpotQA (Yang et al., 2018), DROP (Dua et al., 2019), Quoref (Dasigi et al., 2019), and ROPES (Lin et al., 2019). However, models trained on these datasets (Hu et al., 2019; Andor et al., 2019) only have the final answer as supervision, leaving the model guessing at the correct latent reasoning. Figure 1 shows an example from DROP, which requires first locating various operands (i.e. relevant spans) in the text and then performing filter and count operations over them to get the final answer “3”. However, the correct answer can also be obtained by extracting the span “3” from the passage, or by adding or subtracting various numbers in the passage. The lack of intermediate hints makes learning challenging and can lead the model Question: How many touchdown passes did Cutler throw in the second half? Answer: 3 .....In the third quarter, the Vikes started to rally with running back Adrian Peterson’s 1-yard touchdown run (with the extra point attempt blocked). The Bears increased their lead over the Vikings with Cutler’s 3-yard TD pass to tight end Desmond Clark. The Vikings then closed out the quarter with quarterback Brett Favre firing a 6-yard TD pass to tight end Visanthe Shiancoe. An exciting .... with kicker Ryan Longwell’s 41-yard field goal, along with Adrian Peterson’s second 1-yard TD run. The Bears then responded with Cutler firing a 20-yard TD pass to wide receiver Earl Bennett. The Vikings then completed the remarkable comeback with Favre finding wide receiver Sidney Rice on a 6-yard TD pass on 4th-and-goal with 15 seconds left in regulation. The Bears then took a knee to force overtime.... The Bears then won on Jay Cutler’s game-winning 39-yard TD pass to wide receiver Devin Aromashodu. With the loss, not only did the Vikings fall to 11-4, they also surrendered homefield advantage to the Saints. Figure 1: Example from DROP, showing the intermediate annotations that we collected via crowd-sourcing. to rely on data biases, limiting its ability to perform complex reasoning. In this paper, we present three main contributions. First, we show that annotating relevant context spans, given a question, can provide an easy and low-cost way to learn better latent reasoning. To be precise, we show that under low budget constraints, collecting these annotations for up to 10% of the training data (2-5% of the total budget) can improve the performance by 4-5% in F1. We supervise the current state-of-the-art models for DROP and Quoref, by jointly predict the relevant spans and the final answer. Even though these models were not designed with these annotations in mind, we show that they can still be successfully used to improve model performance. Models that explicitly incorporate these annotations might see greater benefits. Our results suggest that future dataset collection efforts should set aside a fraction of budget for intermediate annotations, particularly as the reasoning required becomes more complex. 5628 Question: What record do the children that Conroy teaches play back to him? Answer: Beethoven’s Fifth Symphony Conroy tries to teach them about the outside world but comes into conflict both with the principal and Mr. Skeffington, the superintendent. He teaches them how to brush their teeth, who Babe Ruth is, and has the children listen to music, including Flight of the Bumblebee and Beethoven’s Fifth Symphony. He explains that the when Beethoven wrote the Fifth Symphony, he was writing about ”what death would sound like”. He is also astounded they’ve never even heard of Halloween, and he decides to take them to Beaufort on the mainland to go trick-or-treating, which the superintendent has forbidden. He also must overcome parental fears of ”the river.” As he leaves the island for the last time, the children come out to see him leave, all of them lined up on a rickety bridge. As he is about to leave by boat, one of the students then begins playing a record, which is the beginning movement of Beethoven’s Fifth Symphony. Figure 2: Example collected annotation from Quoref, showing the intermediate steps. Second, these annotations can help combat biases that are often introduced while collecting data (Gururangan et al., 2018; Geva et al., 2019). This can take the form of label bias—in DROP, 18% of questions have answers 1, 2, or 3—or annotator bias, where a small group of crowd workers creates a large dataset with common patterns. By providing intermediate reasoning steps explicitly, the annotations we collect help the model overcome some of these biases in the training data. Finally, the intermediate annotations collected in this work, including 8,500 annotations for DROP and 2,000 annotations for Quoref, will be useful for training further models on these tasks. We have made them available at https://github.com/dDua/ Intermediate_Annotations. 2 Intermediate Annotations Intermediate annotations describe the right set of context spans that should be aggregated to answer a question. We demonstrate their impact on two datasets: DROP and Quoref. DROP often requires aggregating information from various events in the context (Figure 1). It can be challenging to identify the right set of events directly from an answer when the same answer can be derived from many possible event combinations. We annotate the entire event span including all the attributes associated with the specific event. Quoref requires understanding long chains of coreferential reasoning, as shown in Figure 2, which are often hard to disentangle, especially when the context refers to multiple entities. We specifically annotate the coreference chains which lead to the entity being queried. Collection process: We used Amazon Mechanical Turk to crowd-source the data collection. We randomly sample 8,500 and 2,000 QA pairs from the training set for DROP and Quoref respectively. We showed a QA pair and its context to the workers and asked them to highlight “essential spans” in the context. In case of DROP, crowd workers were asked to highlight complete events with all their corresponding arguments in each span. For Quoref, they were asked to highlight the coreference chains associated with the answer entity in the context. Cost of gathering intermediate annotations: Each HIT, containing ten questions, paid $1, and took approximately five minutes to complete. Overall, we spent $850 to collect 8,500 annotations for DROP and $200 to collect 2,000 annotations for Quoref. If these annotations are collected simultaneously with dataset creation, it may be feasible to collect them at a lower cost, as the time taken to read the context again will be avoided. 3 Experiments and Results In this section, we train multiple models for the DROP and Quoref datasets, and evaluate the benefits of intermediate annotations as compared to traditional QA pairs. In particular, we will focus on the cost vs benefit tradeoff of intermediate annotations, along with evaluating their ability to mitigate bias in the training data. 3.1 Setup We study the impact of annotations on DROP on two models at the top of the leaderboard: NABERT1 and MTMSN (Hu et al., 2019). Both the models employ a similar arithmetic block introduced in the baseline model (Dua et al., 2019) on top of contextual representations from BERT (Devlin et al., 2019). For Quoref, we use the baseline XLNet (Yang et al., 2019) model released with the dataset. We supervise these models with the annotations in a simple way, by jointly predicting intermediate annotation and the final answer. We add two auxiliary loss terms to the marginal loglikelihood loss function. The first is a cross-entropy loss between the gold annotations (g) and predicted annotations, which are obtained by passing the final BERT representations through a linear layer to get a score per token p, then normalizing each token’s score of being selected as an annotation 1https://github.com/raylin1000/drop_bert 5629 with a sigmoid function. L1(θ) = α1CE(g, σ(p)) (1) The second is an L1 loss on the sum of predicted annotations, encouraging the model to only select a subset of the passage. L2(θ) = α2 |tokens| X ℓ=0 σ(pl) The hyper-parameters α1 and α2 were used to balance the scale of both auxiliary loss terms with the marginal log-likelihood. 3.2 Cost vs Benefit To evaluate the cost-benefit trade-off, we fix the total collection budget and then vary the percentage of budget that should go into collecting intermediate annotations. As shown in Figure ??, the model achieves better performance (+1.7% F1) when spending $7k where 2% budget is used for collecting intermediate reasoning annotations as compared to model performance when spending $10k for collecting only QA pairs. Overall, from Figure 3 we can see that allocating even 1% of the budget to intermediate annotations provides performance gains. However, we observe that allocating a large percentage of the budget to intermediate annotations at the expense of QA pairs reduces performance. In our experiments, we find that the sweet-spot percentage of the budget and training-set that should be allocated to intermediate annotations is 2% and ∼10% respectively. 3.3 Bias Evaluation Unanticipated biases (Min et al., 2019; Manjunatha et al., 2019) are often introduced during dataset collection due to many reasons (eg., domain-specific contexts, crowd-workers distributions, etc.). These “dataset artifacts” can be picked up by the model to achieve better performance without learning the right way to reason. We explore two examples of such dataset artifacts in DROP and Quoref. In DROP, around 40% of the passages are from NFL game summaries. The frequency of counting and arithmetic questions from this portion of the data resulted in the answers 1, 2, and 3 making up 18% of the entire training set. To study the effect of biased answer distribution on model performance, we sample 10k QA pairs with answers ∈[0,9] from Dataset Baseline More QA pairs Annotations F1 (%) Conf. loss F1 (%) Conf. loss F1 (%) Conf. loss DROP 24.6 101.5 25.5 107.5 28.1 94.5 Quoref 61.8 103.0 62.7 109.0 64.3 97.0 Table 1: F1 performance and confusion loss (lower is better) of models in three settings: baseline with 10k(DROP) and 5k(Quoref) QA pairs, additional QA pairs worth $250 and $100 for DROP and Quoref respectively, and additional annotations worth $250 and $100 for DROP and Quoref respectively. To put confusion loss in perspective, the best confusion loss, i.e. perfect diffusion, is 90.1 for DROP and 87.0 for Quoref. the training set randomly as a biased training set. We also sample QA pairs from the validation set uniformly for each answer ∈[0,9] thus ensuring that each answer has equal representation in the unbiased validation set. In Quoref, we found that around 65% of the answers are entity names present in the first sentence of the context. Similar to DROP, we create a biased training set with 5k QA pairs from the original training data, and an unbiased validation set with equal representation of answers from the first sentence and the rest of the context. We investigate the effects of spending a small additional budget, either by adding more QA pairs (from the biased data distribution) or by collecting intermediate annotations, on this bias. We use two metrics to measure the extent to which bias has been mitigated. The first is the original metric for the task, i.e. F1, that measures how accurate the model is on the unbiased evaluation. Further, we also want to evaluate the extent to which the errors made by the model are unbiased; in other words, how much is the error diffused over all possible answers, rather than only over the biased labels. We compute confusion loss (Machart and Ralaivola, 2012) as the metric for this, which measures error diffusion by computing the highest singular value of the unnormalized confusion matrix after setting the diagonal elements (i.e. true positives), to zero (Koc¸o and Capponi, 2013) (lower confusion loss implies more diffusion). In an ideal scenario, all labels should have an equally likely probability of being a mis-prediction. Higher confusion loss implies that if we consider mis-classifications of a model we see that it has a tendency of overpredicting a specific label, making it biased towards that specific class. 5630 7 7.5 8 8.5 9 9.5 10 48 50 52 54 56 USD (X 1000) F1 0% 1% 2% 5% 7% (a) Fixed cost: NABERT DROP 7 7.5 8 8.5 9 9.5 10 60 62 64 66 USD (X 1000) 0% 1% 2% 5% 7% (b) Fixed cost: MTMSN DROP 3 3.5 4 4.5 5 60 62 64 USD (X 1000) 0% 1% 2% 3% (c) Fixed cost: Quoref XLNet Figure 3: Performance of model for varying percentage of budget invested in collecting intermediate annotation. The calculation were done with cost as $0.4 and $0.7 for a QA pair in DROP and Quoref respectively. Table 1 shows that along with higher improvements in F1 on providing annotations as compared to more QA pairs, we also see a reduction in the confusion loss with annotations indicating bias mitigation. Further, we also find that for DROP, the false positive rate for top-3 common labels fell down from 47.7% (baseline) to 39.6% (with annotations), while the false positive rate for the bottom-7 increased from 30.4%(baseline) to 36.3%(with annotations), further demonstrating mitigation of bias. The confusion matrices are included in Appendix. 3.4 Qualitative Result Figure 4 shows a DROP example where the model trained without annotations is not able to determine the right set of events being queried, returning an incorrect response. The model trained with annotations can understand the semantics behind the query terms “first half” and “Cowboys”, to arrive at the correct answer. The curves depicting quantiHow many times did the Cowboys score in the first half? Still searching for their first win, the Bengals flew to Texas Stadium for a Week 5 interconference duel with the Dallas Cowboys. In the first quarter, Cincinnati trailed early as Cowboys kicker Nick Folk got a 30-yard field goal, along with RB Felix Jones getting a 33-yard TD run. In the second quarter, Dallas increased its lead as QB Tony Romo completed a 4-yard TD pass to TE Jason Witten. The Bengals would end the half with kicker Shayne Graham getting a 41-yard and a 31-yard field goal. In the third quarter, Cincinnati tried to rally as QB Carson Palmer completed an 18-yard TD pass to WR T. J. Houshmandzadeh. In the fourth quarter, the Bengals got closer as Graham got a 40-yard field goal, yet the Cowboys answered with Romo completing a 57-yard TD pass to WR Terrell Owens. Cincinnati tried to come back as Palmer completed a 10-yard TD pass to Houshmandzadeh (with a failed 2-point conversion), but Dallas pulled away with Romo completing a 15-yard TD pass to WR Patrick Crayton. Figure 4: Predicted relevant spans for question answered correctly with annotation (prediction: “3”) and incorrectly without annotations (prediction: “2”) by MTMSN model trained on DROP 5631 tative performance gains with varying amounts of annotations and QA pairs are in the appendix. 4 Related Work Similar to our work, Zaidan et al. (2007) studied the impact of providing explicit supervision via rationales, rather than generating them, for varying fractions of training set in text classification. However, we study the benefits of such supervision for complex compositional reading comprehension datasets. In the field of computer vision, Donahue and Grauman (2011) collected similar annotations, for visual recognition, where crowd-workers highlighted relevant regions in images. Within reading comprehension, various works like HotpotQA (Yang et al., 2018) and CoQA (Reddy et al., 2019) have collected similar reasoning steps for entire dataset. Our work shows that collecting intermediate annotations for a fraction of dataset is cost-effective and helps alleviate dataset collection biases to a degree. Another line of work (Ning et al., 2019) explores the cost vs. benefit of collecting full vs. partial annotations for various structured predictions tasks. However, they do not focus on intermediate reasoning required to learn the task. Our auxiliary training with intermediate annotations is inspired by extensive related work on training models using side information or domain knowledge beyond labels (Mann and McCallum, 2008; Chang et al., 2007; Ganchev et al., 2010; Rocktaschel et al., 2015). Especially relevant is work on supervising models using explanations (Ross et al., 2017), which, similar to our annotations, identify parts of the input that are important for prediction (Lei et al., 2016; Ribeiro et al., 2016). 5 Conclusion We show that intermediate annotations are a costeffective way to not only boost model performance but also alleviate certain unanticipated biases introduced during the dataset collection. However, it may be unnecessary to collect these for entire dataset and there is a sweet-spot that works best depending on the task. We proposed a simple semi-supervision technique to expose the model to these annotations. We believe that in future they can be used more directly to yield better performance gains. We have also released these annotations for the research community at https: //github.com/dDua/Intermediate_Annotations. Acknowledgements This work was supported in part by Allen Institute of Artificial Intelligence, in part by Amazon, and in part by the National Science Foundation (NSF) grant #CNS-1730158. References Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. 2019. Giving bert a calculator: Finding operations and arguments with reading comprehension. Annual Meeting of the Association for Computational Linguistics (ACL). Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraint-driven learning. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 280–287. Pradeep Dasigi, Nelson F Liu, Ana Marasovic, Noah A Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. Annual Meeting of the Association for Computational Linguistics (ACL). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL. Jeff Donahue and Kristen Grauman. 2011. Annotator rationales for visual recognition. In 2011 International Conference on Computer Vision, pages 1395– 1402. IEEE. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In NAACL. Kuzman Ganchev, Joao Graca, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. Journal of Machine Learning Research (JMLR). Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. Annual Meeting of the Association for Computational Linguistics (ACL). Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In NAACL. Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. 2019. A multi-type multi-span network for reading comprehension that requires discrete reasoning. Annual Meeting of the Association for Computational Linguistics (ACL). 5632 Sokol Koc¸o and C´ecile Capponi. 2013. On multi-class classification through the minimization of the confusion matrix norm. In Asian Conference on Machine Learning, pages 277–292. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. Annual Meeting of the Association for Computational Linguistics (ACL). Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. MRQA Workshop. Pierre Machart and Liva Ralaivola. 2012. Confusion matrix stability bounds for multiclass classification. arXiv preprint arXiv:1202.6221. Varun Manjunatha, Nirat Saini, and Larry S Davis. 2019. Explicit bias discovery in visual question answering models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9562–9571. Gideon S. Mann and Andrew McCallum. 2008. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 870–878. Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. Annual Meeting of the Association for Computational Linguistics (ACL). Qiang Ning, Hangfeng He, Chuchu Fan, and Dan Roth. 2019. Partial or complete, that’s the question. Annual Meeting of the Association for Computational Linguistics (ACL). Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249–266. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. ACM. Tim Rocktaschel, Sameer Singh, and Sebastian Riedel. 2015. Injecting logical background knowledge into embeddings for relation extraction. In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Andrew Slavin Ross, Michael C Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. IJCAI. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. NeurIPS. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. Annual Meeting of the Association for Computational Linguistics (ACL). Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using annotator rationales to improve machine learning for text categorization. In Human language technologies 2007: The conference of the North American chapter of the association for computational linguistics; proceedings of the main conference, pages 260–267. 5633 20 30 40 50 60 70 50 55 60 65 Number of training sample (x 1000) F1 F1 on the full development set 0k 2k 5k 7k (a) NABERT 20 30 40 50 60 70 62 64 66 68 70 72 Number of training sample (x 1000) F1 F1 on the full development set 0k 2k 5k 7k (b) MTMSN 4 6 8 10 12 14 16 18 20 60 62 64 66 68 70 Number of training sample (x 1000) F1 F1 on the full development set 0 400 600 800 1000 (c) Quoref-XLNet Figure 5: Performance of model trained on varying amount of annotations used in training (a) 10k samples (b) Additional QA pairs worth $250 (c) Annotations worth $250 Figure 6: For the same cost intermediate annotations helps diffuse biased over-representation of number 3 as compared to adding more question-answer pairs 5634 (a) 5k training samples (b) Additional QA pairs worth $100 (c) Annotations worth $100 Figure 7: For the same cost intermediate annotations helps diffuse biased over-representation of number 3 as compared to adding more question-answer pairs Figure 8: HIT interface used for collection annotations Question: What is the full name of Mary Harriette’s father? Motteux was also without heirs and bequeathed Sandringham, together with another Norfolk estate and a property in Surrey, to the third son of his close friend, Emily Lamb, the wife of Lord Palmerston. At the time of his inheritance in 1843, Charles Spencer Cowper was a bachelor diplomat, resident in Paris. On succeeding to Motteux’s estates, he sold the other properties and based himself at Sandringham. He undertook extensions to the hall, employing Samuel Sanders Teulon to add an elaborate porch and conservatory. Cowper’s style of living was extravagant he and his wife spent much of their time on the Continent and within 10 years the estate was mortgaged for £89,000. The death of their only child, Mary Harriette, from cholera in 1854 led the couple to spend even more time abroad, mainly in Paris, and by the early 1860s Cowper was keen to sell the estate. Figure 9: Predicted relevant spans for question answered correctly with annotation (prediction:“Charles Spencer Cowper”) and incorrectly without annotations (prediction:“Lord Palmerston”) by XLNet on Quoref
2020
497
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5635–5641 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5635 Crossing Variational Autoencoders for Answer Retrieval Wenhao Yu†, Lingfei Wu‡, Qingkai Zeng†, Shu Tao‡, Yu Deng‡, Meng Jiang† †University of Notre Dame, Notre Dame, IN, USA ‡IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA †{wyu1, qzeng, mjiang2}@nd.edu ‡{wuli, shutao, dengy}@us.ibm.com Abstract Answer retrieval is to find the most aligned answer from a large set of candidates given a question. Learning vector representations of questions/answers is the key factor. Questionanswer alignment and question/answer semantics are two important signals for learning the representations. Existing methods learned semantic representations with dual encoders or dual variational auto-encoders. The semantic information was learned from language models or question-to-question (answer-to-answer) generative processes. However, the alignment and semantics were too separate to capture the aligned semantics between question and answer. In this work, we propose to cross variational auto-encoders by generating questions with aligned answers and generating answers with aligned questions. Experiments show that our method outperforms the state-of-theart answer retrieval method on SQuAD. 1 Introduction Answer retrieval is to find the most aligned answer from a large set of candidates given a question (Ahmad et al., 2019; Abbasiyantaeb and Momtazi, 2020). It has been paid increasing attention by the NLP and information retrieval community (Yoon et al., 2019; Chang et al., 2020). Sentence-level answer retrieval approaches rely on learning vector representations (i.e., embeddings) of questions and answers from pairs of questionanswer texts. The question-answer alignment and question/answer semantics are expected to be preserved in the representations. In other words, the question/answer embeddings must reflect their semantics in the texts of being aligned as pairs. One popular scheme “Dual-Encoders” (also known as “Siamese network” (Triantafillou et al., 2017; Das et al., 2016)) has two separate encoders to generate question and answer embeddings and Table 1: The answer at the bottom of this table was aligned to 17 different questions at the sentence level. Question (1): What three stadiums did the NFL decide between for the game? Question (2): What three cities did the NFL consider for the game of Super Bowl 50? ... Question (17): How many sites did the NFL narrow down Super Bowl 50’s location to? Answer: The league eventually narrowed the bids to three sites: New Orleans Mercedes-Benz Superdome, Miami Sun Life Stadium, and the San Francisco Bay Area’s Levi’s Stadium. a predictor to match two embedding vectors (Cer et al., 2018; Yang et al., 2019). Unfortunately, it has been shown difficult to train deep encoders with the weak signal of matching prediction (Bowman et al., 2015). Then there has been growing interests in developing deep generative models such as variational auto-encoders (VAEs) and generative adversial networks (GANs) for learning text embeddings (Xu et al., 2017; Xie and Ma, 2019). As shown in Figure 1(b), the scheme of “DualVAEs” has two VAEs, one for question and the other for answer (Shen et al., 2018). It used the tasks of generating reasonable question and answer texts from latent spaces for preserving semantics into the latent representations. Although Dual-VAEs was trained jointly on question-to-question and answer-to-answer reconstruction, the question and answer embeddings can only preserve isolated semantics of themselves. In the model, the Q-A alignment and Q/A semantics were too separate to capture the aligned semantics (as we mentioned at the end of the first paragraph) between question and answer. Learning the alignment with the weak Q-A matching signal, though now based on generatable embeddings, can lead to confusing results, when (1) dif5636 𝑧!~𝑝(𝑧!) 𝑧"~𝑝(𝑧") Question Answer 𝑧!~𝑝(𝑧!) 𝑝(𝑧!|𝑞) 𝑧"~𝑝(𝑧") 𝑝(𝑧"|𝑎) Question Answer 𝑝(𝑦|𝑧!, 𝑧") 𝑝(𝑞|𝑧!) 𝑝(𝑎|𝑧") 𝑧!~𝑝(𝑧!) Encoder 𝑝(𝑧!|𝑞) 𝑧"~𝑝(𝑧") 𝑝(𝑧"|𝑎) Question Answer Decoder 𝑝(𝑞|𝒛𝒂) 𝑝(𝑎|𝒛𝒒) 𝑝(𝑦|𝑧!, 𝑧") 𝑝(𝑦|𝑧!, 𝑧") Question Answer Question Answer Decoder Encoder Encoder Decoder Decoder Encoder 𝑝(𝑧!|𝑞) 𝑝(𝑧"|𝑎) Encoder Encoder (a) Dual-Encoders (Yang et al., 2019) 𝑧!~𝑝(𝑧!) 𝑧"~𝑝(𝑧") Question Answer 𝑧!~𝑝(𝑧!) 𝑝(𝑧!|𝑞) 𝑧"~𝑝(𝑧") 𝑝(𝑧"|𝑎) Question Answer 𝑝(𝑦|𝑧!, 𝑧") 𝑝(𝑞|𝑧!) 𝑝(𝑎|𝑧") 𝑧!~𝑝(𝑧!) Encoder 𝑝(𝑧!|𝑞) 𝑧"~𝑝(𝑧") 𝑝(𝑧"|𝑎) Question Answer Decoder 𝑝(𝑞|𝒛𝒂) 𝑝(𝑎|𝒛𝒒) 𝑝(𝑦|𝑧!, 𝑧") 𝑝(𝑦|𝑧!, 𝑧") Question Answer Question Answer Decoder Encoder Encoder Decoder Decoder Encoder 𝑝(𝑧!|𝑞) 𝑝(𝑧"|𝑎) Encoder Encoder (b) Dual-VAEs (Shen et al., 2018) 𝑧!~𝑝(𝑧!) 𝑧"~𝑝(𝑧") Question Answer 𝑧!~𝑝(𝑧!) 𝑝(𝑧!|𝑞) 𝑧"~𝑝(𝑧") 𝑝(𝑧"|𝑎) Question Answer 𝑝(𝑦|𝑧!, 𝑧") 𝑝(𝑞|𝑧!) 𝑝(𝑎|𝑧") 𝑧!~𝑝(𝑧!) Encoder 𝑝(𝑧!|𝑞) 𝑧"~𝑝(𝑧") 𝑝(𝑧"|𝑎) Question Answer Decoder 𝑝(𝑞|𝒛𝒂) 𝑝(𝑎|𝒛𝒒) 𝑝(𝑦|𝑧!, 𝑧") 𝑝(𝑦|𝑧!, 𝑧") Question Answer Question Answer Decoder Encoder Encoder Decoder Decoder Encoder 𝑝(𝑧!|𝑞) 𝑝(𝑧"|𝑎) Encoder Encoder (c) Dual-CrossVAEs (Ours) Figure 1: (a)–(b) The Q-A alignment and Q/A semantics were learned too separately to capture the aligned semantics between question and answer. (c) We propose to cross VAEs by generating questions with aligned answers and generating answers with aligned questions. ferent questions have similar answers and (2) similar questions have different answers. Table 1 shows an examples in SQuAD: 17 different questions share the same sentence-level answer. Our idea is that if aligned semantics were preserved, the embeddings of a question would be able to generate its answer, and the embeddings of an answer would be able to generate the corresponding question. In this work, we propose to cross variational auto-encoders, shown in Figure 1(c), by reconstructing answers from question embeddings and reconstructing questions from answer embeddings. Note that compared with DualVAEs, the encoders do not change but decoders work across the question and answer semantics. Experiments show that our method improves MRR and R@1 over the state-of-the-art method by 1.06% and 2.44% on SQuAD, respectively. On a subset of the data where any answer has at least 10 different aligned questions, our method improves MRR and R@1 by 1.46% and 3.65%, respectively. 2 Related Work Answer retrieval (AR) is defined as the answer of a candidate question is obtained by finding the most similar answer between multiple candidate answers (Abbasiyantaeb and Momtazi, 2020). While another popular task on SQuAD dataset is machine reading comprehension (MRC), which is introduced to ask the machine to answer questions based on one given context (Liu et al., 2019). In this section, we review existing work related to answer retrieval and variational autoencoders. Answer Retrieval. It has been widely studied with information retrieval techniques and has received increasing attention in the recent years by considering deep neural network approaches. Recent works have proposed different deep neural models in text-based QA which compares two segments of texts and produces a similarity score. Document-level retrieval (Chen et al., 2017; Wu et al., 2018; Seo et al., 2018, 2019) has been studied on many public datasets including including SQuAD (Rajpurkar et al., 2016), MsMarco (Nguyen et al., 2016) and NQ (Kwiatkowski et al., 2019) etc. ReQA proposed to investigate sentence-level retrieval and provided strong baselines over a reproducible construction of a retrieval evaluation set from the SQuAD data (Ahmad et al., 2019). We also focus on sentence-level answer retrieval. Variational Autoencoders. VAE consists of encoder and generator networks which encode a data example to a latent representation and generate samples from the latent space, respectively (Kingma and Welling, 2013). Recent advances in neural variational inference have manifested deep latent-variable models for natural language processing tasks (Bowman et al., 2016; Kingma et al., 2016; Hu et al., 2017a,b; Miao et al., 2016). The general idea is to map the sentence into a continuous latent variable, or code, via an inference network (encoder), and then use the generative network (decoder) to reconstruct the input sentence conditioned on samples from the latent code (via its posterior distribution). Recent work in cross-modal generation adopted cross alignment VAEs to jointly learn rep5637 resentative features from multiple modalities (Liu et al., 2017; Shen et al., 2017; Schonfeld et al., 2019). DeConv-LVM (Shen et al., 2018) and VAR-Siamese (Deudon, 2018) are most relevant to us, both of which adopt Dual-VAEs models (see Figure 1(b)) for two text sequence matching task. In our work, we propose a Cross-VAEs for questions and answers alignment to enhance QA matching performance. 3 Proposed Method Problem Definition. Suppose we have a question set Q and an answer set A. Each question and answer have only one sentence. Each question q ∈Q and answer a ∈A can be represented as (q, a, y), where y is a binary variable indicating whether q and a are aligned. Therefore, the solution of sentence-level retrieval task could be considered as a matching problem. Given a question q and a list of answer candidates C(q) ⊂A, our goal is to predict p(y|q, a) of each input question q with each answer candidate a ∈C(q). 3.1 Crossing Variational Autoencder Learning cross-domain constructions under generative assumption is essentially learning the conditional distribution p(q|za) and p(a|zq) where two continuous latent variables zq, za ∈Rdz are independently sampled from p(zq) and p(za): p(q|a) = Eza∼p(za|a)[p(q|za)], (1) p(a|q) = Ezq∼p(zq|q)[p(a|zq)]. (2) The question-answer pair matching can be represented as the conditional distribution p(y|zq, za) from latent variables p(q|za) and p(a|zq): p(y|q, a) = Ezq∼p(zq|q),za∼p(za|a)[p(y|zq, za)], (3) Objectives. We denote Eq and Ea as question and answer encoders that infer the latent variable zq and za from a given question answer pair (q, a, y), and Dq and Da as two different decoders that generate corresponding question and answer q and a from latent variables za and zq. Then, we have cross construction objective function: Lcross(θE,θD) =y · Eq∼Q[−log pD(q|a, E(a))] +y · Ea∼A[−log pD(a|q, E(q))]. (4) Variational Autoencoder (Kingma and Welling, 2013) imposes KL-divergence regularizer to align both posteriors pE(zq|q) and pE(za|a): LKL(θE) =y · Eq∼Q[DKL(pE(zq|q)||p(zq))] +y · Ea∼A[DKL(pE(za|a)||p(za))], (5) where θE, θD are all parameters to be optimized. Besides, we have question answer matching loss from fφ(y|q, a) as: Lmatching(φf) = −  y · log pfφ(y|zq, za) +(1 −y) · log(1 −pfφ(y|zq, za))  , (6) where f is a matching function and φf are parameters to be optimized. Finally, in order to allow the model to balance between maximizing the variational evidence lower bound (ELBO) and minimizing the question answer matching loss, a joint training objective is given by: J = −α · Lcross −β · LKL + γ · Lmatching, (7) where α, β and γ are introduced as hyperparameters to control the importance of each task. 3.2 Model Implementation Dual Encoders. We use Gated Recurrent Unit (GRU) as encoders to learn contextual words embeddings (Cho et al., 2014). Question and answer embeddings are reduced by weighted sum through multiple hops self-attention (Lin et al., 2017) of GRU units and then fed into two linear transition to obtain mean and standard deviation as N(zq; µq, diag(σ2 q)) and N(za; µa, diag(σ2 a)). Dual Decoders. We adopt another Gated Recurrent Unit (GRU) for generating token sequence conditioned on the latent variables zq and za. Question Answer Matching. We adopt cosine similarity with l2 normalization to measure the matching probability of a question answer pair. 4 Experiment 4.1 Dataset Our experiments were conducted on SQuAD 1.1 (Rajpurkar et al., 2016). It has over 100,000 questions composed to be answerable by text from Wikipedia documents. Each question has one corresponding answer sentence extracted from the 5638 Table 2: Performance of answer retrieval on SQuAD. Method SQuAD MRR R@1 R@5 InferSent 36.90 27.91 46.92 SenBERT 38.01 27.34 49.59 BERTQA 48.07 40.63 57.45 QA-Lite 50.29 40.69 61.38 USE-QA 61.23 53.16 69.93 Dual-GRUs 61.06 54.70 68.25 Dual-VAEs 61.48 55.01 68.49 Cross-VAEs 62.29 55.60 70.05 Table 3: Performance of answer retrieval on a subset of SQuAD in which any answer has more than 8 questions. Our method outperforms baselines much more. SSE indicates the sum of squared distances/errors between two different questions aligned to same answer. Method SQuAD Subset MRR R@1 R@5 SSE BERTQA 37.90 30.81 45.24 0.23 USE-QA 47.06 40.90 53.44 0.14 Cross-VAEs 48.52 44.55 53.52 0.09 Wikipedia document. Since the test set is not publicly available, we partition the dataset into 79,554 (training) / 7,801 (dev) / 10,539 (test) objects. 4.2 Baselines InferSent (Conneau et al., 2017). It is not explicitly designed for answer retrieval, but it produces results on semantic tasks without requiring additional fine tuning. USE-QA (Yang et al., 2019) . It is based on Universal Sentence Encoder (Cer et al., 2018), but trained with multilingual QA retrieval and two other tasks: translation ranking and natural language inference. The training corpus contains over a billion question answer pairs from popular online forums and QA websites (e.g, Reddit). QA-Lite. Like USE-QA, this model is also trained over online forum data based on transformer. The main differences are reduction in width and depth of model layers, and sub-word vocabulary size. BERTQA (Devlin et al., 2019) . BERTQA first concatenates the question and answer into a text sequence [[CLS], Q, [SEP], A, [SEP]], then passes through a 12-layers BERT and takes the [CLS] vector as input to a binary classifier. SenBERT (Reimers and Gurevych, 2019) . It consists of twin structured BERT-like encoders to represent question and answer sentence, and then applies a similarity measure at the top layer. 4.3 Experimental Settings Implementation details. We initialize each word with a 768-dim BERT token embedding vector. If a word is not in the vocabulary, we use the average vector of its sub-word embedding vectors in the vocabulary. The number of hidden units in GRU encoder are all set as 768. All decoders are multi-layer perceptions (MLP) with one 768 units hidden layer. The latent embedding size is 512. The model is trained for 100 epochs by SGD using Adam optimizer (Kingma and Ba, 2014). For the KL-divergence, we use an KL cost annealing scheme (Bowman et al., 2016), which serves the purpose of letting the VAE learn useful representations before they are smoothed out. We increase the weight β of the KL-divergence by a rate of 2/epochs per epoch until it reaches 1. We set learning rate as 1e-5, and implemented on Pytorch. Competitive Methods. We compare our proposed method cross variational autoencoder (CrossVAEs) with dual-encoder model and dual variational autoencoder (Dual-VAEs). For fair comparisons, we all use GRU as encoder and decoder, and keep all other hyperparameters the same. Evaluation Metrics. The models are evaluated on retrieving and ranking answers to questions using three metrics, mean reciprocal rank (MRR) and recall at K (R@K). R@K is the percentage of correct answers in topK out of all the relevant answers. MRR represents the average of the reciprocal ranks of results for a set of queries. Comparing performance with baselines. As shown in Table 2, two BERT based models do not perform well, which indicates fune tuning BERT may not be a good choice for answer retrieval task due to unrelated pre-training tasks (e.g, masked language model). In contrast, using BERT token embedding can perform better in our retrieval task. Our proposed method outperforms all baseline methods. Comparing with USE-QA, our method improves MRR and R@1 by +1.06% and +2.44% on SQuAD, respectively. In addition, Dual variational autoencoder (DualVAEs) does not make much improvement on question answering retrieval task because it can only preserve isolated semantics of themselves. Our 5639 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Questions Answer (a) USE-QA 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Questions Answer (b) CrossVAEs Question (1): What halftime performer previously headlined Super Bowl XLVIII? Mismatched Answer: Coincidentally, both teams were coached by John Fox in their last Super Bowl appearance prior to Super Bowl 50. Question (2): Which Super Bowl halftime show did Beyonće headline? Mismatched Answer: On December 3, the league confirmed that the show would be headlined by the British rock group Coldplay. Correct Answer of Question (1) and (2): The Super Bowl 50 halftime show was headlined by the British rock group Cold-play with special guest performers Beyonće and Bruno Mars, who headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows. (c) Two questions were incorrectly matched by USE-QA, but correctly matched by CrossVAEs. Figure 2: A case of 14 different questions aligned to the same answer. We use SVD to reduce embedding dimensions to 2, and then project them on the X-Y coordinate axis. The scale of X-Y axis is relative with no practical significance. We observe that our method makes questions that share the same answer to be closer with each other. proposed crossing variational autoencoder (CrossVAEs) could outperform dual-encoder model and dual variational autoencoder model, which improves MRR and R@1 by +1.23%/+0.81% and +0.90%/+0.59%, respectively. Analyzing performance on sub-dataset. We extract a subset of SQuAD, in which any answer has at least eight different questions. As shown in Table 3, our proposed cross variational autoencoder (Cross-VAEs) could outperform baseline methods on the subset. Our method improves MRR and R@1 by +1.46% and +3.65% over USEQA. Cross-VAEs significantly improve the performance when an answer has multiple aligned questions. Additionally, SSE of our method is smaller than that of USE-QA. Therefore, the questions of the same answer are closer in the latent space. 4.4 Case Study Figures 2(a) and 2(b) visualize embeddings of 14 questions of the same answer. We observe that crossing variational autoencoders (CrossVAE) can better capture the aligned semantics between questions and answers, making latent representations of questions and answers more prominent. Figure 2(c) demonstrates two of example questions and corresponding answers produced by USE-QA and CrossVAEs. We observe that CrossVAEs can better distinguish similar answers even though they all share several same words with the question. 5 Conclusion Given a candidate question, answer retrieval aims to find the most similar answer text between candidate answer texts. In this paper, We proposed to cross variational autoencoders by generating questions with aligned answers and generating answers with aligned questions. Experiments show that our method improves MRR and R@1 over the best baseline by 1.06% and 2.44% on SQuAD. Acknowledgements We thank Drs. Nicholas Fuller, Sinem Guven, and Ruchi Mahindru for their constructive comments and suggestions. This project was partially supported by National Science Foundation (NSF) IIS1849816 and Notre Dame Global Gateway Faculty Research Award. 5640 References Zahra Abbasiyantaeb and Saeedeh Momtazi. 2020. Text-based question answering from information retrieval and deep neural network perspectives: A survey. arXiv preprint arXiv:2002.06612. Amin Ahmad, Noah Constant, Yinfei Yang, and Daniel Cer. 2019. Reqa: An evaluation for end-to-end answer retrieval models. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering. Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Samuel Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Wei-Cheng Chang, Felix X Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In Proceedings of 8th International Conference for Learning Representation (ICLR). Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Arpita Das, Harish Yenala, Manoj Chinnakotla, and Manish Shrivastava. 2016. Together we stand: Siamese networks for similar question retrieval. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Michel Deudon. 2018. Learning semantic similarity in a continuous space. In Advances in neural information processing systems (NeurIPS), pages 986–997. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017a. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org. Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, and Eric P Xing. 2017b. On unifying deep generative models. In Proceedings of 5th International Conference for Learning Representation (ICLR). Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of 2nd International Conference for Learning Representation (ICLR). Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. In Proceedings of 1st International Conference for Learning Representation (ICLR). Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved variational inference with inverse autoregressive flow. In Advances in neural information processing systems (NeurIPS). Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of 5th International Conference for Learning Representation (ICLR). Ming-Yu Liu, Thomas Breuel, and Jan Kautz. 2017. Unsupervised image-to-image translation networks. In Advances in neural information processing systems (NeurIPS). Shanshan Liu, Xin Zhang, Sheng Zhang, Hui Wang, and Weiming Zhang. 2019. Neural machine reading comprehension: Methods and trends. Applied Sciences. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In International conference on machine learning. 5641 Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human-generated machine reading comprehension dataset. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Nils Reimers and Iryna Gurevych. 2019. Sentencebert: Sentence embeddings using siamese bertnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Edgar Schonfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, and Zeynep Akata. 2019. Generalized zero-and few-shot learning via aligned variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Minjoon Seo, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Phraseindexed question answering: A new challenge for scalable document comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, and Lawrence Carin. 2018. Deconvolutional latent-variable model for text sequence matching. In Thirty-Second AAAI Conference on Artificial Intelligence. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems (NeurIPS). Eleni Triantafillou, Richard Zemel, and Raquel Urtasun. 2017. Few-shot learning through an information retrieval lens. In Advances in neural information processing systems (NeurIPS). Lingfei Wu, Ian EH Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, and Michael J Witbrock. 2018. Word mover’s embedding: From word2vec to document embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). Zhongbin Xie and Shuai Ma. 2019. Dual-view variational autoencoders for semi-supervised text matching. In Proceedings of the 28th International Joint Conference on Artificial Intelligence. AAAI Press. Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In Thirty-First AAAI Conference on Artificial Intelligence. Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, et al. 2019. Multilingual universal sentence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307. Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, and Kyomin Jung. 2019. A compareaggregate model with latent clustering for answer selection. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management.
2020
498
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5642–5650 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5642 Logic-Guided Data Augmentation and Regularization for Consistent Question Answering Akari Asai† and Hannaneh Hajishirzi†‡ †University of Washington ‡Allen Institute for AI {akari, hannaneh}@cs.washington.edu Abstract Many natural language questions require qualitative, quantitative or logical comparisons between two entities or events. This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions by integrating logic rules and neural models. Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model. Improving the global consistency of predictions, our approach achieves large improvements over previous methods in a variety of question answering (QA) tasks including multiple-choice qualitative reasoning, cause-effect reasoning, and extractive machine reading comprehension. In particular, our method significantly improves the performance of RoBERTa-based models by 1-5% across datasets. We advance state of the art by around 5-8% on WIQA and QuaRel and reduce consistency violations by 58% on HotpotQA. We further demonstrate that our approach can learn effectively from limited data.1 1 Introduction Comparison-type questions (Tandon et al., 2019; Tafjord et al., 2019; Yang et al., 2018) ask about relationships between properties of entities or events such as cause-effect, qualitative or quantitative reasoning. To create comparison questions that require inferential knowledge and reasoning ability, annotators need to understand context presented in multiple paragraphs or carefully ground a question to the given situation. This makes it challenging to annotate a large number of comparison questions. Most current datasets on comparison questions are much smaller than standard machine reading comprehension (MRC) datasets (Rajpurkar 1Our code and data is available at https://github. com/AkariAsai/logic_guided_qa. Q: The ceramic vase was less flexible than the plastic ball so it was A: more breakable Q: The ceramic vase was more flexible than the plastic ball so it was A: less breakable Q: If it is silent, does the outer ear collect less sound waves? A: more [positive causal relationship] Q: If the outer ear collect less sound waves, is less sound being detected? A: more [positive causal relationship] Q: If it is silent, is less sound being detected? A: more [positive causal relationship] RoBERTa more breakable more breakable RoBERTa more more less Conflict Conflict Figure 1: Inconsistent predictions by RoBERTa. Top row shows an example of symmetric inconsistency and the second row shows an example of transitive inconsistency. The examples are partially modified. et al., 2016; Joshi et al., 2017). This poses new challenges to standard models, which are known to exploit statistical patterns or annotation artifacts in these datasets (Sugawara et al., 2018; Min et al., 2019a). Importantly, state-of-the-art models show inconsistent comparison predictions as shown in Figure 1. Improving the consistency of predictions has been previously studied in natural language inference (NLI) tasks (Minervini and Riedel, 2018; Li et al., 2019), but has not been addressed in QA. In this paper, we address the task of producing globally consistent and accurate predictions for comparison questions leveraging logical and symbolic knowledge for data augmentation and training regularization. Our data augmentation uses a set of logical and linguistic knowledge to develop additional consistent labeled training data. Subsequently, our method uses symbolic logic to incorporate consistency regularization for additional supervision signal beyond inductive bias given by data augmentation. Our method generalizes previous consistency-promoting methods for NLI tasks (Minervini and Riedel, 2018; Li et al., 2019) 5643 to adapt to substantially different question formats. Our experiments show significant improvement over the state of the art on a variety of QA tasks: a classification-based causal reasoning QA, a multiple choice QA for qualitative reasoning and an extractive MRC task with comparisons between entities. Notably, our data augmentation and consistency constrained training regularization improves performance of RoBERTa-based models (Liu et al., 2019) by 1.0%, 5.0% and 2.5% on WIQA, QuaRel and HotpotQA. Our approach advances the stateof-the-art results on WIQA and QuaRel with 4.7 and 8.4% absolute accuracy improvement, respectively, reducing inconsistent predictions. We further demonstrate that our approach can learn effectively from limited labeled data: given only 20% of the original labeled data, our method achieves performance on par with a competitive baseline learned with the full labeled data. 2 Related Work Data augmentation has been explored in a variety of tasks and domains (Krizhevsky et al., 2009; Cubuk et al., 2019; Park et al., 2019). In NLP, using backtranslation (Yu et al., 2018) or dictionary based word replacement (Zhang et al., 2015) has been studied. Most relevant to our work, Kang et al. (2018) study NLI-specific logic and knowledgebased data augmentation. Concurrent to our work, Gokhale et al. (2020) study visual QA models’ ability to answer logically composed questions, and show the effectiveness of logic-guided data augmentation. Our data augmentation does not rely on task-specific assumptions, and can be adapted to different formats of QA task. We further leverage consistency-promoting regularization, which gives improvements in accuracy and consistency. Improving prediction consistency via training regularization has been studied in NLI tasks. Minervini and Riedel (2018) present model-dependent first-order logic guided adversarial example generation and regularization. Li et al. (2019) introduce consistency-based regularization incorporating the first-order logic rules. Previous approach is modeldependent or relies on NLI-specific rules, while our method is model-agnostic and is more generally applicable by combining it with data augmentation. Regularizing loss to penalize violations of structural constraints in models’ output has been also studied in previous work on constraint satisfaction in structured learning (Lee et al., 2019; Ganchev et al., 2010). Our work regularizes models to produce globally consistent predictions among augmented data following logical constraints, while those studies incorporates structured prediction models following linguistics rules. 3 Method We present the components of our QA method: first-order logic guided data augmentation (Section 3.1 and Section 3.2), and consistency-based regularization (Section 3.3). 3.1 Consistent Question Answering For globally consistent predictions in QA, we require responses to follow two important general logical rules: symmetric consistency and transitive consistency, which are illustrated in Figure 1 and are formally described below. Let q, p, a be a question, a paragraph and an answer predicted by a model. A is a set of answer candidates. Each element of A can be a span in p, a class category, or an arbitrary answer choice. X = {q, p, a} represents a logic atom. Symmetric consistency In a comparison question, small surface variations such as replacing words with their antonyms can reverse the answer, while keeping the overall semantics of the question as before. We define symmetry of questions in the context of QA as follows: (q, p, a∗) ↔ (qsym, p, a∗ sym), where q and qsym are antonyms of each other, and a∗ sym is the opposite of the groundtruth answer a∗in A. For example, the two questions in the first row of Figure 1 are symmetric pairs. We define the symmetric consistency of predictions in QA as the following logic rule: (q, p, a) →(qsym, p, asym), (1) which indicates a system should predict asym given (qsym, p), if it predicts a for (q, p). Transitive consistency. Transitive inference between three predicates A, B, C is represented as: A →B ∧B →C then A →C (Gazes et al., 2012). In the context of QA, the transitive examples are mainly for causal reasoning questions that inquire about the effect e given the cause c. The second row of Figure 1 shows an example where transitive consistency is violated. For two questions q1 and q2 in which the effect of q1 (= e1) is equal to the cause of q2 (= c2), we define the transitive consistency of predictions as follows: (q1, p, a1)∧(q2, p, a2) →(qtrans, p, atrans). (2) 5644 WIQA QuaRel HotpotQA (Tandon et al., 2019) (Tafjord et al., 2019) (Yang et al., 2018) reasoning Causal Reasoning Qualitative Reasoning Qualitative Comparison of entities format classification multiple choice span extraction p The rain seeps into the wood surface. When rain evaporates it leaves the wood. It takes the finish of the wood with it. The wood begins to lose it’s luster. Supposed you were standing on the planet Earth and Mercury. When you look up in the sky and see the sun, Golf Magazine is a monthly golf magazine owned by Time Inc. El Nuevo Cojo Ilustrado is an American Spanish language magazine. q q1:If a tsunami happens, will wood be more moist?, q2: If wood is more moist, is more weathering occurring? Which planet would the sun appear larger? El Nuevo Cojo and Golf Magazine, which one is owned by Time Inc? A {more, less, no effects} {Mercury, Earth} {Golf Magazine, El Nuevo Cojo} a∗ a∗ 1 : more, a∗ 2 : more Mercury Golf Magazine qaug If a tsunami happens, is more weathering occurring? Which planet would the sun appear smaller? Which one is not owned by Time Inc, Golf Magazine El Nuevo Cojo? a∗ aug more Earth El Nuevo Cojo Table 1: An augmented transitive example for WIQA, and symmetric examples for QuaRel and HotpotQA. We partially modify paragraphs and questions. The bold characters denote a shared event connecting two questions. The parts written in red or blue denote antonyms, and highlighted text is negation added by our data augmentation. 3.2 Logic-guided Data Augmentation Given a set of training examples X in the form of (q, p, a∗), we automatically generate additional examples Xaug = {qaug, p, a∗ aug} using symmetry and transitivity logical rules. The goal is to augment the training data so that symmetric and transitive examples are observed during training. We provide some augmented examples in Table 1. Augmenting symmetric examples To create a symmetric question, we convert a question into an opposite one using the following operations: (a) replace words with their antonyms, (b) add, or (c) remove words. For (a), we select top frequent adjectives or verbs with polarity (e.g., smaller, increases) from training corpora, and expert annotators write antonyms for each of the frequent words (we denote this small dictionary as D). More details can be seen in Appendix A. For (b) and (c), we add negation words or remove negation words (e.g., not). For all of the questions in training data, if a question includes a word in D for the operation (a), or matches a template (e.g., which * is ↔which * is not) for operations (b) and (c), we apply the operation to generate qsym.2 We obtain a∗ sym by re-labeling the answer a∗to its opposite answer choice in A (see Appendix B). Augmenting transitive examples We first find a pair of two cause-effect questions X1 = (q1, p, a∗ 1) and X2 = (q2, p, a∗ 2), whose q1 and q2 consist of 2We observe that (b)(c) are less effective than (a) in WIQA or QuaRel, while especially (b) contributes to the performance improvements on HotpotQA as much as (a) does. (c1, e1) and (c2, e2), where e1 = c2 holds. When a∗ 1 is a positive causal relationship, we create a new example Xtrans = (q3, p, a∗ 2) for q3 = (c1, e2). Sampling augmented data Adding all consistent examples may change the data distribution from the original one, which may lead to a deterioration in performance (Xie et al., 2019). One can select the data based on a model’s prediction inconsistencies (Minervini and Riedel, 2018) or randomly sample at each epoch (Kang et al., 2018). In this work, we randomly sample augmented data at the beginning of training, and use the same examples for all epochs during training. Despite its simplicity, this yields competitive or even better performance than other sampling strategies.3 3.3 Logic-guided Consistency Regularization We regularize the learning objective (task loss, Ltask) with a regularization term that promotes consistency of predictions (consistency loss, Lcons). L = Ltask(X) + Lcons(X, Xaug). (3) The first term Ltask penalizes making incorrect predictions. The second term Lcons4 penalizes making predictions that violate symmetric and transitive logical rules as follows: Lcons = λsymLsym + λtransLtrans, (4) where λsym and λtrans are weighting scalars to balance the two consistency-promoting objectives. 3We do not add Xaug if the same pair has already exist. 4We mask the Lcons for the examples without symmetric or transitive consistent examples. 5645 Dataset WIQA QuaRel HotpotQA Dev Test v (%) Dev Test v (%) Dev v (%) x% data 20% 40% 100 % 100% 100% 20% 100% 100% 100% 20% 100 % 100 % (# of X) (6k) (12k) (30k) (30k) (30k) (0.4k) (2k) (2k) (2k) (18k) (90k) (90k) SOTA – – – 73.8 – – – 76.6 – – – – RoBERTa 61.1 74.1 74.9 77.5 12.0 56.4 81.1 80.0 19.2 71.0 75.5 65.2 DA 72.1 75.5 76.3 78.3 6.0 69.3 84.5 84.7 13.3 73.1 78.0 6.3 DA + Reg 73.9 76.1 77.0 78.5 5.8 70.9 85.1 85.0 10.3 71.9 76.9 7.2 Table 2: WIQA, QuaRel and HotpotQA results:we report test and development accuracy (%) for WIQA and QuaRel and development F1 for HotpotQA. DA and Reg denote data augmentation and consistency regularization. “SOTA” is Tandon et al. (2019) for WIQA and Mitra et al. (2019) for QuaRel. v presents violations of consistency. Previous studies focusing on NLI consistency (Li et al., 2019) calculate the prediction inconsistency between a pair of examples by swapping the premise and the hypothesis, which cannot be directly applied to QA tasks. Instead, our method leverages consistency with data augmentation to create paired examples based on general logic rules. This enables the application of consistency regularization to a variety of QA tasks. Inconsistency losses The loss computes the dissimilarity between the predicted probability for the original labeled answer and the one for the augmented data defined as follows: Lsym = |log p(a|q, p)−log p(aaug|qaug, p)|. (5) Likewise, for transitive loss, we use absolute loss with the product T-norm which projects a logical conjunction operation (q1, p, a1) ∧(q2, c, a2) to a product of probabilities of two operations, p(a1|q1, p)p(a2|q2, p), following Li et al. (2019). We calculate a transitive consistency loss as: Ltrans = |log p(a1|q1, p) + log p(a2|q2, p)− log p(atrans|qtrans, p)|. Annealing The model’s predictions may not be accurate enough at the beginning of training for consistency regularization to be effective. We perform annealing (Kirkpatrick et al., 1983; Li et al., 2019; Du et al., 2019). We first set λ{sym,trans} = 0 in Eq. (4) and train a model for τ epochs, and then train it with the full objective. 4 Experiments Datasets and experimental settings We experiment on three QA datasets: WIQA (Tandon et al., 2019), QuaRel (Tafjord et al., 2019) and HotpotQA (oracle, comparison questions5) (Yang et al., 2018). 5We train models on both bridge and comparison questions, and evaluate them on extractive comparison questions only. WIQA QuaRel metric acc v (%) acc v (%) DA (logic) + Reg 77.0 5.8 85.1 10.3 DA (logic) 76.3 6.0 84.5 13.5 DA (standard) 75.2 12.3 83.3 14.5 Reg 75.8 11.4 – – Baseline 74.9 12.0 81.1 19.2 Table 3: Ablation studies of data augmentation on WIQA and QuaRel development dataset. As shown in Table 1, these three datasets are substantially different from each other in terms of required reasoning ability and task format. In WIQA, there are 3,238 symmetric examples and 4,287 transitive examples, while 50,732 symmetric pairs and 1,609 transitive triples are missed from the original training data. HotpotQA and QuaRel do not have any training pairs requiring consistency. Our method randomly samples 50, 80, 90% of the augmented data for WIQA, QuaRel and HotpotQA, resulting in 24,715/836/3,538 newly created training examples for those datasets, respectively. We use standard F1 and EM scores for performance evaluation on HotpotQA and use accuracy for WIQA and QuaRel. We report a violation of consistency following Minervini and Riedel (2018) to evaluate the effectiveness of our approach for improving prediction consistencies. We compute the violation of consistency metric v as the percentage of examples that do not agree with symmetric and transitive logical rules. More model and experimental details are in Appendix. Main Results Table 2 demonstrates that our methods (DA and DA + Reg) constantly give 1 to 5 points improvements over the state-of-theart RoBERTa QA’s performance on all three of the datasets, advancing the state-of-the-art scores on WIQA and QuaRel by 4.7% and 8.4%, respectively. On all three datasets, our method signifi5646 WIQA Input RoBERTa DA DA+Reg p Sound enters the ears of a person. The sound hits a drum that is inside the ears. q If the person has his ears more protected, will less sound be detected? [a∗: More] More (0.79) More (0.93) More (0.93) qsym If the person has his ears less protected, will less sound be detected? [asym∗: Less] More (0.87) More (0.72) Less (0.89) p Squirrels try to eat as much as possible. Squirrel gains weight. q1 If the weather has a lot of snow, cannot squirrels eat as much as possible? [a∗ 1: More] Less (0.75) More (0.48) More (0.94) q2 If squirrels cannot eat as much as possible, will not the squirrels gain weight? [a∗ 2: More] More (0.86) More (0.94) More (0.93) qtrans If the weather has a lot of snow, will not the squirrels gain weight? [a∗ trans: More] Less (0.75) More (0.43) More (0.87) HotpotQA (comparison) Input RoBERTa DA p B. Reeves Eason is a film director, actor and screenwriter. Albert S. Rogell a film director. q Who has more scope of profession, B. Reeves Eason or Albert S. Rogell? [a∗: B. Reeves Eason] B. Reeves Eason B. Reeves Eason qsym Who has less scope of profession, B. Reeves or Albert S. Rogell? [a∗ sym: Albert S. Rogell] B. Reeves Eason Albert S. Rogell Table 4: Qualitative comparison of RoBERTa, + DA, + DA + Reg. The examples are partially modified. cantly reduces the inconsistencies in predictions, demonstrating the effects of both data augmentation and regularization components. Notably on WIQA, RoBERTa shows violation of consistency in 13.9% of the symmetric examples and 10.0% of the transitive examples. Our approach reduces the violations of symmetric and transitive consistencies to 8.3% and 2.5%, respectively. Results with limited training data Table 2 also shows that our approach is especially effective under the scarce training data setting: when only 20% of labeled data is available, our DA and Reg together gives more than 12% and 14% absolute accuracy improvements over the RoBERTa baselines on WIQA and QuaRel, respectively. Ablation study We analyze the effectiveness of each component on Table 3. DA and Reg each improves the baselines, and the combination performs the best on WIQA and QuaRel. DA (standard) follows a previous standard data augmentation technique that paraphrases words (verbs and adjectives) using linguistic knowledge, namely WordNet (Miller, 1995), and does not incorporate logical rules. Importantly, DA (standard) does not give notable improvement over the baseline model both in accuracy and consistency, which suggests that logic-guided augmentation gives additional inductive bias for consistent QA beyond amplifying the number of train data. As WIQA consists of some transitive or symmetric examples, we also report the performance with Reg only on WIQA. The performance improvements is smaller, demonstrating the importance of combining with DA. Qualitative Analysis Table 4 shows qualitative examples, comparing our method with RoBERTa baseline. Our qualitative analysis shows that DA+Reg reduces the confusion between opposite choices, and assigns larger probabilities to the ground-truth labels for the questions where DA shows relatively small probability differences. On HotpotQA, the baseline model shows large consistency violations as shown in Table 2. The HotpotQA example in Table 4 shows that RoBERTa selects the same answer to both q and qsym, while DA answers correctly to both questions, demonstrating its robustness to surface variations. We hypothesize that the baseline model exploits statistical pattern, or dataset bias presented in questions and that our method reduces the model’s tendency to exploit those spurious statistical patterns (He et al., 2019; Elkahky et al., 2018), which leads to large improvements in consistency. 5 Conclusion We introduce a logic guided data augmentation and consistency-based regularization framework for accurate and globally consistent QA, especially under limited training data setting. Our approach significantly improves the state-of-the-art models across three substantially different QA datasets. Notably, our approach advances the state-of-the-art on QuaRel and WIQA, two standard benchmarks requiring rich logical and language understanding. We further show that our approach can effectively learn from extremely limited training data. Acknowledgments This research was supported by ONR N0001418-1-2826, DARPA N66001-19-2-403, NSF (IIS1616112, IIS1252835), Allen Distinguished Investigator Award, Sloan Fellowship, and The Nakajima Foundation Fellowship. We thank Antoine Bosselut, Tim Dettmers, Rik Koncel-Kedziorski, Sewon Min, Keisuke Sakaguchi, David Wadden, Yizhong Wang, the members of UW NLP group and AI2, and the anonymous reviewers for their insightful feedback. 5647 References Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. 2019. AutoAugment: Learning augmentation strategies from data. In CVPR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Ming Ding, Chang Zhou, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In ACL. Xinya Du, Bhavana Dalvi, Niket Tandon, Antoine Bosselut, Wen-tau Yih, Peter Clark, and Claire Cardie. 2019. Be consistent! improving procedural text comprehension using label consistency. In NAACL. Ali Elkahky, Kellie Webster, Daniel Andor, and Emily Pitler. 2018. A challenge set and methods for nounverb ambiguity. In EMNLP. Kuzman Ganchev, Jennifer Gillenwater, Ben Taskar, et al. 2010. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11(Jul):2001–2049. Regina Paxton Gazes, Nicholas W Chee, and Robert R Hampton. 2012. Cognitive mechanisms for transitive inference performance in rhesus monkeys: Measuring the influence of associative strength and inferred order. Journal of Experimental Psychology: Animal Behavior Processes, 38(4):331. Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020. VQA-LOL: Visual question answering under the lens of logic. arXiv:2002.08325. He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias for natural language inference by fitting the residual. In EMNLP Workshop on DeepLo. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In ACL. Dongyeop Kang, Tushar Khot, Ashish Sabharwal, and Eduard Hovy. 2018. AdvEntuRe: Adversarial training for textual entailment with knowledge-guided examples. In ACL. Scott Kirkpatrick, C Daniel Gelatt, and Mario P Vecchi. 1983. Optimization by simulated annealing. science. Alex Krizhevsky, Geoffrey Hinton, et al. 2009. Learning multiple layers of features from tiny images. Technical report, Citeseer. Jay Yoon Lee, Sanket Vaibhav Mehta, Michael Wick, Jean-Baptiste Tristan, and Jaime Carbonell. 2019. Gradient-based inference for networks with output constraints. In AAAI. Tao Li, Vivek Cupta, Maitrey Mehta, and Vivek Srikumar. 2019. A logic-driven framework for consistency of neural models. In EMNLP. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM. Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019a. Compositional questions do not necessitate multi-hop reasoning. In ACL. Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2019b. Multi-hop reading comprehension through question decomposition and rescoring. In ACL. Pasquale Minervini and Sebastian Riedel. 2018. Adversarially regularising neural NLI models to integrate logical background knowledge. In ACL. Arindam Mitra, Chitta Baral, Aurgho Bhattacharjee, and Ishan Shrivastava. 2019. A generate-validate approach to answering questions about qualitative relationships. arXiv:1908.03645. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. arXiv:1904.01038. Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. 2019. SpecAugment: A simple data augmentation method for automatic speech recognition. arXiv:1904.08779. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP. Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. 2018. What makes reading comprehension questions easier? In EMNLP. Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal. 2019. QuaRel: A dataset and models for answering questions about qualitative relationships. In AAAI. Niket Tandon, Bhavana Dalvi Mishra, Keisuke Sakaguchi, Antoine Bosselut, and Peter Clark. 2019. WIQA: A dataset for ”what if...” reasoning over procedural text. In EMNLP. 5648 Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, et al. 2019. Transformers: State-of-the-art natural language processing. arXiv:1910.03771. Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan Yuille, and Quoc V Le. 2019. Adversarial examples improve image recognition. arXiv:1911.09665. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP. Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. In ICLR. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NeurIPS. A Details of Human Annotations In this section, we present the details of human annotations used for symmetric example creation (the (a) operation). We first sample the most frequent 500 verbs, 50 verb phrases and 500 adjectives from from the WIQA and QuaRel training data. Then, human annotators select words with some polarity (e.g., increase, earlier). Subsequently, they annotate the antonyms for each of the selected verbs and adjectives. Consequently, we create 64 antonym pairs mined from a comparison QA dataset. We reuse the same dictionary for all three datasets. Examples of annotated antonym pairs are shown in Table 5. adjectives verbs & verb phrases more ↔less increase ↔decrease slowly ↔quickly heat up ↔cool down stronger ↔weaker lose weight ↔gain weight later ↔earlier raise ↔drop younger ↔older remove ↔add Table 5: Ten examples of annotated antonyms for comparison type questions. B Details of answer re-labeling on WIQA and HotpotQA We present the details of answer re-labeling operations in WIQA and HotpotQA, where the number of the answer candidates is more than two. Answer re-labeling in WIQA (symmetric) In WIQA, each labeled answer a∗takes one of the following values: {more, less, no effects}. Although more and less are opposite, no effects is a neutral choice. In addition, in WIQA, a question q consists of a cause c and an effect e, and we can operate the three operations (a) replacement, (b) addition and (c) removal of words. When we add the operations to both of c and e, it would convert the question to opposite twice, and thus the original answer remains same. When we add one of the operation to either of c or e, it would convert the question once, and thus, the answer should be the opposite one. Given these two assumption, we re-label answer as: (i) if we apply only one operation to either e or c and a∗is more or less, the a∗ sym will be the opposite of a∗, (ii) if we apply only one operation to either e or c and a∗is no effect, the a∗ sym will remain no effect, and (iii) if we apply one operation to each of e and c, the asym remains the same. 5649 Answer re-labeling in WIQA (transitive) For transitive examples, we re-label answers based on two assumptions on causal relationship. A transitive questions are created from two questions, X1 = (q1, p, a∗ 1) and X2 = (q2, p, a∗ 2), where q1 and q2 consist of (c1, e1) and (c2, e2) and e1 = c2 holds. If a1 for X1 is “more”, it means that the c1 causes e1. e1 is equivalent to the cause for the second question (c2), and a∗ 2 represents the causal relationship between c2 and e2. Therefore, if a∗ 1 is a positive causal relationship, c1 and e2 have the relationship defined as a∗ 2. We assume that if the a∗ 1 is “more”, a∗ 3(= a∗ trans) will be same as a2, and re-label answer following this assumption. Answer re-labeling in HotpotQA In HotpotQA, answer candidates A are not given. Therefore, we extract possible answers from q. We extract two entities included in q by string matching with the titles of the paragraphs given by the dataset. If we find two entities to be compared and both of them are included in the gold paragraphs, we assume the two entities are possible answer candidates. The new answer a∗ sym will be determined as the one which is not the original answer a∗. C Details of Baseline Models We use RoBERTa (Li et al., 2019) as our baseline. Here, we present model details for each of the three different QA datasets. Classification-based model for WIQA As the answer candidates for WIQA questions are set to {more, less, no effects}, we use a classification based models as studied for NLI tasks. The input for this model is [CLS] p [SEP] q [SEP]. We use the final hidden vector corresponding to the first input token ([CLS]) as the aggregate representation. We then predict the probabilities of an answer being a class C in the same manner as in (Devlin et al., 2019; Liu et al., 2019). Multiple-choice QA model for QuaRel For QuaRel, two answer choices are given, and thus we formulate the task as multiple-choice QA. In the original dataset, all of the p, q and A are combined together (e.g., The fastest land animal on earth, a cheetah was having a 100m race against a rabbit. Which one won the race? (A) the cheetah (B) the rabbit), and thus we process the given combined questions into p, q and A (e.g., the question written above will be p =The fastest land animal on earth, a cheetah was having a 100m race against a rabbit. , q =Which one won the race? and A ={the cheetah, rabbit}). Then the input will be [CLS] p [SEP] “Q: ” q “A: ” ai [SEP], and we will use the final hidden vector corresponding to the first input token ([CLS]) as the aggregate representation. We then predict the probabilities of an answer being an answer choice ai in the same manner as in (Liu et al., 2019). Span QA model for HotpotQA We use the RoBERTa span QA model studied for SQuAD (Devlin et al., 2019; Liu et al., 2019) for HotpotQA. As we only consider the questions whose answers can be extracted from p, we do not add any modifications to the model unlike some previous studies in HotpotQA (Min et al., 2019b; Ding et al., 2019). D Details of Implementations and Experiments Implementations Our implementations are all based on PyTorch. In particular, to implement our classification based and span-based model, we use pytorch-transformers (Wolf et al., 2019)6. To implement our multiple choice model, we use fairseq (Ott et al., 2019)7. Hyper-parameters For HotpotQA, we train a model for six epochs in total. For the model without data augmentation or regularization, we train on the original dataset for six epochs. For the models with data augmentation, we first train them on the original HotpotQA train data (including both bridge and comparison questions) for three epochs, and then train our model with augmented data and regularization for three epochs. For HotpotQA, we train our model with both bridge and comparison questions, and evaluate on comparison questions whose answers can be extracted from the context. Due to the high variance of the performance in the early stages of the training for small datasets such as QuaRel or WIQA, for these two datasets, we set the maximum number of training epochs to 150 and 15, respectively. We terminate the training when we do not observe any performance improvements on the development set for 5 epochs for WIQA and 10 epochs for QuaRel, respectively. We use Adam as an optimizer (ϵ = 1E −8) for 6https://github.com/huggingface/ transformers 7https://github.com/pytorch/fairseq 5650 all of the datasets. Other hyper-parameters can be seen from Table 6 hyper-parameters WIQA QuaRel HotpotQA train batch size 4 16 12 gradient accumulation 16 1 1 max token length 256 512 384 doc stride – – 128 learning rate 2E-5 1E-5 5E-5 weight decay 0.01 0.01 0.0 dropout 0.1 0.1 0.1 warm up steps 0 150 0 τ for annealing 3 25 3 λsym 0.5 0.1 0.25 λtrans 0.05 – – Table 6: Ten examples of annotated antonyms for comparison type questions. E Qualitative Examples on HotpotQA As shown in Table 2, the state-of-the-art RoBERTa model produces a lot of consistency violations. Here, we present several examples where our competitive baseline model cannot answer correctly, while our RoBERTa+DA model answers correctly. A question requiring world knowledge One comparison question asks “Who has more scope of profession, B. Reeves Eason or Albert S. Rogell”, given context that B. Reeves is an American film director, actor and screenwriter and Albert S. Rogell is an American film director. The model correctly predicts “B. Reeves Eason” but fails to answer correctly to “Who has less scope of profession, B. Reeves Eason or Albert S. Rogell”, although the two questions are semantically equivalent. A question with negation We found that due to this reasoning pattern our model struggles on questions involving negation. Here we show one example. We create a question by adding a negation word, qsym,“Which species is not native to asia, corokia or rhodotypos?”, where we add negation word not and the paragraph corresponding to the question is p =“Corokia is a genus in the Argophyllaceae family comprising about ten species native to New Zealand and one native to Australia. Rhodotypos scandens is a deciduous shrub in the family Rosaceae and is native to China, possibly also Japan.”. The model predicts Rhodotypos scandens, while the model predicts the same answer to the original question q, ‘which species is native to asia, corokia or rhodotypos?”. This example shows that the model strongly relies on surface matching (i.e., “native to”) to answer the question, without understanding the rich linguistic phenomena or having world knowledge.
2020
499
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 34–40 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 34 Dialogue State Tracking with Explicit Slot Connection Modeling Yawen Ouyang1 ∗, Moxin Chen1 ∗, Xinyu Dai1 †, Yinggong Zhao2 Shujiang Huang1 and Jiajun Chen1 1National Key Laboratory for Novel Software Technology, Nanjing University, China 2Leyan Technologies, China {ouyangyw,chenmx}@smail.nju.edu.cn {daixinyu,huangsj,chenjj}@nju.edu.cn [email protected] Abstract Recent proposed approaches have made promising progress in dialogue state tracking (DST). However, in multi-domain scenarios, ellipsis and reference are frequently adopted by users to express values that have been mentioned by slots from other domains. To handle these phenomena, we propose a Dialogue State Tracking with Slot Connections (DSTSC) model to explicitly consider slot correlations across different domains. Given a target slot, the slot connecting mechanism in DSTSC can infer its source slot and copy the source slot value directly, thus significantly reducing the difficulty of learning and reasoning. Experimental results verify the benefits of explicit slot connection modeling, and our model achieves state-of-the-art performance on MultiWOZ 2.0 and MultiWOZ 2.1 datasets. 1 Introduction Task-oriented dialogue systems assist users to achieve their certain goals, such as making a restaurant reservation or booking a taxi. To fulfill users’ goals, dialogue state tracking (DST) is employed to estimate dialogue states at each turn. Dialogue states consist of constraints and requests conveyed by user utterances, typically are represented by a set of predefined slots and their corresponding values. For instance, the user utterance “I am looking for a Korean restaurant in the centre” mentions two slots, food and area, whose values are Korean and centre respectively. Numerous methods are proposed to tackle the challenge of DST recently, and these methods can be mainly categorized into two types: fixed vocabulary and open vocabulary (Eric et al., 2019). Fixed vocabulary models are designed in the paradigm of multi-class classification, relying on a predefined ∗Equal contributions. † Corresponding author. Turns Target Slot Source Slot U0: I am looking for a Korean restaurant in the centre. S0: I have 1 restaurant name Little Seoul in the expensive price range. restaurant-food restaurant-area ... – – ... U2: Are there any places to go in the same area as the restaurant? S2: There are dozens of places to go in city centre. What type of attraction are you interested in today? attraction-area restaurant-food restaurant-area ... restaurant-area – – ... U5: I also need a taxi to commute and need it to arrive at the restaurant. S5: I have booked a cab to take you to the restaurant when you leave All Saint’s church. The booked car type is a yellow volkswagen. taxi-departure taxi-destination attraction-area restaurant-food restaurant-area ... attraction-name restaurant-name restaurant-area – – ... Table 1: An example of multi-domain dialogue with slot connections expressed by ellipsis and reference. (We omit some turns and slots for simplicity.) ontology(Henderson et al., 2014a; Mrkˇsi´c et al., 2017; Zhong et al., 2018). Open vocabulary approaches (Xu and Hu, 2018; Wu et al., 2019; Gao et al., 2019; Ren et al., 2019) break the assumption of predefined ontologies, turning to generate values only given target slots. Wu et al. (2019) propose a copy-augmented encoder-decoder model to track dialogue states, which outperforms fixed vocabulary models and achieves the state-of-the-art result in multi-domain DST. Despite significant improvements achieved by those open vocabulary models, they always suffer from understanding enormous ellipsis and reference expressions in multi-domain scenarios. As shown in Table 1, there are several slot connections across multiple domains and turns. For example, at the second turn, the value of the target slot attraction-area is informed by a referring expression “in the same area as the restaurant”. Thus, the system needs to retrieve the value of its source slot restaurant-area. The last turn shows an obscurer utterance with multiple slot connections, in which target slots taxi-departure and taxi-destination are implicitly connected to their source slots attractionname and restaurant-name respectively. For those 35 slots that need connections, existing methods attempt to find their values out from the lengthy dialogue history, which usually fail because of high learning complexity. In this paper, we formally consider the above challenge as related-slot problem and propose a novel model DST-SC (Dialogue State Tracking with Slot Connections) to address it. We follow previous work to build a copy-augmented encoderdecoder model. Specially, DST-SC is designed with a slot connecting mechanism to establish the connection between the target slot and its source slot explicitly. Thus it can take advantage of the source slot value directly instead of reasoning from preceding turns. The contributions of this work are two-fold: • To the best of our knowledge, this work is the first one to discuss the related-slot problem in multi-domain DST and address it by explicitly modeling slot connections across domains. • We demonstrate that DST-SC is more effective for handling the related-slot problem and outperforms state-of-the-art baselines. 2 Model In this section, we will describe DST-SC model in detail. DST-SC is an open vocabulary model based on the encoder-decoder architecture. As shown in Figure 1, there are three components that contribute to obtain the target slot value: (1) word generation from the vocabulary; (2) word copying from the dialogue history; (3) value copying from the source slot. To reduce the burden on the decoder, DST-SC also equips with a slot gate (Wu et al., 2019) to predict for slot values of none and dontcare. 2.1 Encoder Our model uses a bi-directional GRU (Cho et al., 2014) to encode the dialogue history x = {w1, w2, · · · , wm}, where m is the number of tokens in the dialogue history. Each input token is first embedded using a word embedding function φemb and then encoded into a fix-length vector hi. hi = GRU(φemb(wi)). (1) 2.2 Word Generation We employ another GRU to decode slot values. Each slot is comprised of a domain name and a slot name, e.g., hotel-area. While decoding the j-th slot sj, its summed embedding is fed as the first input. The last hidden state of the encoder initializes the decoder hidden state. At decoding step t, the hidden state is represented as ehj t. (The superscript j will be omitted for simplicity.) Following the vanilla attention-based decoder architecture (Bahdanau et al., 2014), eht is used to apply attention over encoder outputs and aggregate them to get the context vector ct. at i = softmax(fmlp([eht, hi])), (2) ct = m X i=1 at i hi. (3) The distribution of generating token yt is given by: Pgen(yt) = softmax(Wgen [eht, ct]). (4) 2.3 Word Copying The copy mechanism is shown to be effective in DST (Lei et al., 2018; Xu and Hu, 2018; Wu et al., 2019). Here, we follow Wu et al. (2019) to augment the vanilla attention-based decoder with pointergenerator copying, enabling it to capture slot values that explicitly occur in the dialogue history. Pwc(yt = w) = X i:wi=w at i. (5) A soft gate g1 is used to combine word copying distribution and generative distribution. g1 = sigmoid(Wg1[eht, ct, φemb(yt−1)]), (6) Porig(yt) = g1 Pgen(yt) + (1 −g1) Pwc(yt). (7) 2.4 Slot Connecting Mechanism As claimed in Section 1, connecting the target slot with its source slot helps to decrease the reasoning difficulty. Therefore, we enhance the copyaugmented encoder-decoder model with a slot connecting mechanism to model slot correlations directly. When decoding the target slot sj, DST-SC infers its source slot from last dialogue states, then copies its value for the final distribution. Last dialogue states are represented by (slot, value) tuples: {(s1, v1), (s2, v2), · · · , (sn, vn)}. We use eh0 as the query to attend the potential source slot. ak = softmax(fmlp([eh0, sk])), (8) where sk is the summed slot embedding, k ∈ {1, 2, · · · , n} \ {j}. Attention score ak measures 36 Slot Gate {none, dontcare, span} Dialogue History Bi-GRU GRU Target Slot sj Attention Attention g1 g2 Slot Connecting Mechanism Last Dialogue States w1 wm ··· ··· s1 v1 s2 v2 sn vn ··· context Pwc Pgen Pvc P Figure 1: DST-SC model architecture (best viewed in color). Three processing flows leading to Pgen, Pwc, Pvc are respectively generation (brown), copying from dialogue history (green), copying from last dialogue states (purple). how related sk is to the target slot sj. It is computed only once at the first decoding step and maintained consistency to subsequent tokens in the value vk. At the t-th decoding step, the t-th token vkt contributes to form value copying distribution Pvc(yt). Pvc(yt = w) = X k: vkt=w ak. (9) Similar to the copy-augmented decoder, we combine value copying distribution and original distributions using a soft gate g2 to get final output distribution. g2 = sigmoid(Wg2 c0), (10) P(yt) = g2 Pvc(yt) + (1 −g2) Porig(yt). (11) 3 Experimental Setup 3.1 Datasets To evaluate the effectiveness of DST-SC, we conducted experiments on MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 datasets (Eric et al., 2019). MultiWOZ 2.0 is a multi-domain dialogues corpus, and some annotation errors are corrected in MultiWOZ 2.1. 3.2 Baselines We compare DST-SC with several baseline methods. FJST and HJST (Eric et al., 2019) apply a separate feed-forward network to classify for every single state slot. HyST (Goel et al., 2019) is a hybrid approach, which combines the joint tracking fixed vocabulary approach and open vocabulary approach. COMER (Ren et al., 2019) adopts three hierarchically stacked decoders to generate dialogue states. TRADE (Wu et al., 2019) generates dialogue states from the dialogue history using a copy-augmented decoder. 3.3 Implementation Details In our experiments, we used Glove (Pennington et al., 2014) and character embeddings (Hashimoto et al., 2017) to initialize word embeddings, each word is represented by a 400-dimensional vector. The hidden sizes of all GRU layers are set to 400. In the training phase, we used ground truth prior-turn dialogue states in the slot connecting mechanism. Adam optimizer (Kingma and Ba, 2015) is applied with 0.001 learning rate initially. The learning rate then reduced by a factor of 0.2, and the training stopped early when the performance in validation set was not improved for 6 consecutive epochs. We used a batch size of 32 and dropout rate of 0.2. Greedy search strategy is used for decoding, with maximum 10 decoded tokens and 50% probability of teacher forcing. Also, we followed previous work to utilize our model with the word dropout (Wu et al., 2019) by masking input tokens with a 20% probability. All experiments are averaged across 3 seeds. 4 Results and Analysis 4.1 Experimental Results We follow previous work to compare the performance of joint goal accuracy. We get the joint goal correct if the predicted state exactly matches the ground truth state for every slot. As shown in Table 2, open vocabulary approaches achieve higher accuracy than fixed vocabulary approaches. DST-SC achieves state-of-the-art performance on 37 Model MultiWOZ 2.0 MultiWOZ 2.1 FJST† 40.20% 38.00% HJST† 38.40% 35.55% HyST† 42.33% 38.10% COMER† 48.79% – TRADE1 50.83% 48.29% DST-SC 52.24% 49.58% Table 2: Joint goal accuracy on MultiWOZ 2.0 and MultiWOZ 2.1. Results marked with † are from original papers. MultiWOZ 2.0 and MultiWOZ 2.1, with the joint goal accuracy of 52.24% and 49.58%. 4.2 Related-slot Tests We conducted further related-slot tests to verify the effectiveness of DST-SC in solving the relatedslot problem. The dataset for related-slot tests is constructed by manually extracting dialogues with the related-slot problem from MultiWOZ 2.1 test set. We made an observation that slot connections are common at target slots such as attraction-area, hotel-area, hotel-book day and so on. We only need to focus on target slot accuracy of turns with slot connections. However, some target slots occur infrequently in the extracted dataset. Considering that target slots from different domains with the same slot type always correspond to similar slot connection expressions, we can neglect their domains and calculate the accuracy of each slot type instead. For example, we can calculate the accuracy of slot type price instead of calculating the accuracy of hotel-price range and restaurant-price range separately. Table 3 lists slot types and their corresponding target slots. To make more convincing tests, we performed data augmentations to get more samples for each slot type. We used two heuristic rules to augment the extracted data and obtained 100 dialogues for each slot type. (1) Paraphrasing: we rewrote some utterances to get multiple phrases with the same intent. For example, the phrase “in the same area as the restaurant” can be rewritten as “close to the restaurant”. (2) Replacing values: we replaced some slot values to exclude the influence of overfitting. For example, the phrase “stay in the east” can be replaced as “stay in the west”. 1We re-implemented TRADE as described in section 2.2 and section 2.3 and got a stronger baseline. Slot Type Target Slots area attraction-area, hotel-area, restaurant-area day hotel-book day, train-day, restaurant-book day people hotel-book people, restaurant-book people, train-book people departure taxi-departure destination taxi-destination price hotel-price range, restaurant-price range time restaurant-book time, taxi-arrive by, taxi-leave at, train-arrive by, train-leave at Table 3: Slot types and corresponding target slots involved in related-slot tests. As shown in Table 4, DST-SC outperforms TRADE by a large margin at most slot types. Case 1 in Table 5 illustrates the advantage of DST-SC explicitly. We find that both generation and word copying miss the correct token. However, the slot connecting mechanism in DST-SC helps to find out the correct source slot and merges its value into P under the control of gate g2. Note that there are no obvious improvements on slot types departure and destination. We suspect that this is caused by lots of missing annotations for attraction-name, hotel-name and restaurant-name, which usually act as source slots for departure and destination. The absence of these critical information makes DST-SC pay less attention to values from source slots. As shown in case 2 in Table 5, even if the slot connection mechanism has inferred the correct source slot, the unconfidence of g2 leads to the final incorrect output. 5 Related Work Traditional approaches for dialogue state tracking (Henderson et al., 2014b; Sun et al., 2014; Zilka and Jurc´ıcek, 2015; Mrkˇsi´c et al., 2015) rely on manually constructed semantic dictionaries to extract features from input text, known as delexicalisation. These methods are vulnerable to linguistic variations and difficult to scale. To overcome these problems, Mrkˇsi´c et al. (2017) propose the first data-driven model for DST, the employed deep learning approaches provide stronger representa38 Model area day departure destination people price time TRADE 49.33% 16.00% 49.66% 48.33% 12.00% 26.33% 86.66% DST-SC 86.33% 92.00% 46.66% 48.66% 87.00% 53.33% 87.33% Table 4: Slot type accuracy of related-slot tests. Case 1: dialogue idx=PMUL0129 (success) Case 2: dialogue idx=MUL1228 (failure) U1: I want to book a table for 4 people ... S3: I have 1 hotel in the moderate range, cityroomz. Would you like ... · · · U4: Yes, please. Can you book a room for Friday for 1 person, 3 nights? S3: The Bridge guest house is available. Would you like ... · · · U4: Yes, please. For the same number of people, 2 nights ... U6: ... I need the taxi to take me to the hotel. Target slot: hotel-book people=4 Target slot: taxi-destination=cityroomz Source slot: restaurant-book people=4 Source slot: hotel-name=cityroomz Model Pgen Pwc Pvc g1 g2 P Model Pgen Pwc Pvc g1 g2 P TRADE “3” “people” – 0.999 – “3” TRADE “none” “peking” – 0.148 – “peking” DST-SC “1” “the” “4” 0.999 0.991 “4” DST-SC “lensfield” “hotel” “cityroomz” 0.942 0.078 “lensfield” Table 5: Case study. We only list tokens with the highest output probability in Pgen, Pwc, Pvc and P. tion learning ability. By sharing parameters among slots (Ren et al., 2018; Zhong et al., 2018; Nouri and Hosseini-Asl, 2018), the model is further improved to track rare slot values. These approaches are all designed in the paradigm of multi-class classification over predefined slot value candidates and usually referred to as fixed vocabulary approaches. Fixed vocabulary approaches always require a predefined ontology, which is usually impractical. Their applications are usually limited in a single domain. Therefore, several open vocabulary approaches in generative fashion (Xu and Hu, 2018; Wu et al., 2019; Gao et al., 2019; Ren et al., 2019) are proposed to handle unlimited slot values in more complicated dialogues. Open vocabulary models show the promising performance in multidomain DST. However, ellipsis and reference phenomena among multi-domain slots are still less explored in existing literature. 6 Conclusion In this paper, we highlight a regularly appeared yet rarely discussed problem in multi-domain DST, namely the related-slot problem. We propose a novel dialogue state tracking model DST-SC, which equips with the slot connecting mechanism to build slot connections across domains. Our model achieves significant improvements on two public datasets and shows effectiveness on relatedslot problem tests. Annotations complement for MultiWOZ dataset in the future might enable DSTSC to handle the related-slot problem more effectively and further improve the joint accuracy. Acknowledgments We would like to thank the anonymous reviewers for their constructive comments. This work is supported by the National Natural Science Foundation of China (Nos. 61976114 and 61936012), the National Key R&D Program of China (No. 2018YFB1005102). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gaˇsi´c. 2018. MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar. Association for Computational Linguistics. Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyag Gao, and Dilek HakkaniTur. 2019. Multiwoz 2.1: Multi-domain dialogue state corrections and state tracking baselines. In arXiv preprint arXiv:1907.01669. 39 Shuyang Gao, Abhishek Sethi, Sanchit Agarwal, Tagyoung Chung, and Dilek Hakkani-Tur. 2019. Dialog state tracking: A neural reading comprehension approach. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 264–273, Stockholm, Sweden. Association for Computational Linguistics. Rahul Goel, Shachi Paul, and Dilek Hakkani-T¨ur. 2019. Hyst: A hybrid approach for flexible and accurate dialogue state tracking. arXiv preprint arXiv:1907.00883. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923–1933, Copenhagen, Denmark. Association for Computational Linguistics. Matthew Henderson, Blaise Thomson, and Steve Young. 2014a. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292–299. Matthew Henderson, Blaise Thomson, and Steve Young. 2014b. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292–299, Philadelphia, PA, U.S.A. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1437–1447, Melbourne, Australia. Association for Computational Linguistics. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multidomain dialog state tracking using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 794–799, Beijing, China. Association for Computational Linguistics. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788, Vancouver, Canada. Association for Computational Linguistics. Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking model. CoRR, abs/1812.00899. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Liliang Ren, Jianmo Ni, and Julian McAuley. 2019. Scalable and accurate dialogue state tracking via hierarchical sequence generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1876–1885, Hong Kong, China. Association for Computational Linguistics. Liliang Ren, Kaige Xie, Lu Chen, and Kai Yu. 2018. Towards universal dialogue state tracking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2780– 2786, Brussels, Belgium. Association for Computational Linguistics. Kai Sun, Lu Chen, Su Zhu, and Kai Yu. 2014. The SJTU system for dialog state tracking challenge 2. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 318–326, Philadelphia, PA, U.S.A. Association for Computational Linguistics. Chien-Sheng Wu, Andrea Madotto, Ehsan HosseiniAsl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808–819, Florence, Italy. Association for Computational Linguistics. Puyang Xu and Qi Hu. 2018. An end-to-end approach for handling unknown slot values in dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1448–1457, Melbourne, Australia. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1458– 1467, Melbourne, Australia. Association for Computational Linguistics. Luk´as Zilka and Filip Jurc´ıcek. 2015. Incremental lstmbased dialog state tracker. 2015 IEEE Workshop on 40 Automatic Speech Recognition and Understanding (ASRU), pages 757–762.
2020
5
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 527–537 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 527 Predicting the Topical Stance and Political Leaning of Media using Tweets Peter Stefanov1, Kareem Darwish2, Atanas Atanasov3, Preslav Nakov2 1SiteGround Hosting EOOD, Bulgaria 2Qatar Computing Research Institute, HBKU, Doha, Qatar 3Sofia University “St. Kliment Ohridski”, Sofia, Bulgaria {stefanov.peter.ps,atanas.atanasov.sf}@gmail.com, {kdarwish,pnakov}@hbku.edu.qa Abstract Discovering the stances of media outlets and influential people on current, debatable topics is important for social statisticians and policy makers. Many supervised solutions exist for determining viewpoints, but manually annotating training data is costly. In this paper, we propose a cascaded method that uses unsupervised learning to ascertain the stance of Twitter users with respect to a polarizing topic by leveraging their retweet behavior; then, it uses supervised learning based on user labels to characterize both the general political leaning of online media and of popular Twitter users, as well as their stance with respect to the target polarizing topic. We evaluate the model by comparing its predictions to gold labels from the Media Bias/Fact Check website, achieving 82.6% accuracy. 1 Introduction Online media and popular Twitter users, which we will collectively refer to as influencers, often express overt political leanings, which can be gleaned from their positions on a variety of political and cultural issues. Determining their leaning can be done through the analysis of their writing, which includes the identification of terms that are indicative of stance (Groseclose and Milyo, 2005; Gentzkow and Shapiro, 2011). Performing such analysis automatically can be done using supervised classification, which in turn would require manually labeled data (Groseclose and Milyo, 2005; Gentzkow and Shapiro, 2011; Mohammad et al., 2016). Alternatively, leanings can be inferred based on which people share the content (blogs, tweets, posts, etc.) on social media, as social media users are more likely to share content that originates from sources that generally agree with their positions (An et al., 2012; Morgan et al., 2013; Ribeiro et al., 2018; Wong et al., 2013). Here, we make use of this observation to characterize influencers, based on the stances of the Twitter users that share their content. Ascertaining the stances of users, also known as stance detection, involves identifying the position of a user with respect to a topic, an entity, or a claim (Mohammad et al., 2016). For example, on the topic of abortion in USA, the stances of left- vs. right-leaning users would typically be “pro-choice” vs. “pro-life”, respectively. In this paper, we propose to apply unsupervised stance detection to automatically tag a large number of Twitter users with their positions on specific topics (Darwish et al., 2020). The tagging identifies clusters of vocal users based on the accounts that they retweet. Although the method we use may yield more than two clusters, we retain the two largest ones, which typically include the overwhelming majority of users, and we ignore the rest. Then, we train a classifier that predicts which cluster a user belongs to, in order to expand our clusters. Once we have increased the number of users in our sets, we determine which sources are most strongly associated with each group based on sharing by each group. We apply this methodology to determine the positions of influencers and of media on eight polarizing topics along with their overall leaning: left, center or right. In doing so, we can also observe the sharing behavior of right- and leftleaning users, and we can correlate their behavior with the credibility of the sources. Further, given the user stances for these eight topics, we train a supervised classifier to predict the overall bias of sources using a variety of features, including the so-called valence (Conover et al., 2011a), graph embeddings, and contextual embeddings. Using a combination of these features, our classifier is able to predict the bias of sources with 82.6% accuracy, with valence being the most effective feature. Figure 1 outlines our overall methodology. 528 Figure 1: General outline of our methodology. Our contributions are as follows: • We use unsupervised stance detection to automatically determine the stance of Twitter users with respect to several polarizing topics. • We then use distant supervision based on these discovered user stances to accurately characterize the political leaning of media outlets and of popular Twitter accounts. For classification, we use a combination of source valence, graph embeddings, and contextualized text embeddings. • We evaluate our approach by comparing its bias predictions for a number of news outlets against gold labels from Media Bias/Fact Check. We further evaluate its predictions for popular Twitter users against manual judgments. The experimental results show sizable improvements over using graph embeddings or contextualized text embeddings. The remainder of this paper is organized as follows: Section 2 discusses related work. Section 3 describes the process of data collection. Section 4 presents our method for user stance detection. Section 5 describes how we characterize the influencers. Section 6 discusses our experiments in media bias prediction. Finally, Section 7 concludes and points to possible directions for future work. 2 Related Work Recent work that attempted to characterize the stance and the ideological leaning of media and Twitter users relied on the observation that users tend to retweet content that is consistent with their world view. This stems from selective exposure, which is a cognitive bias that leads people to avoid the cognitive overload from exposure to opposing views as well as the cognitive dissonance in which people are forced to reconcile between their views and opposing views (Morgan et al., 2013). Concerning media, Ribeiro et al. (2018) used the Facebook advertising services to infer the ideological leaning of online media based on the political leaning of Facebook users who consumed them. An et al. (2012) relied on follow relationships to online media on Twitter to ascertain ideological leaning of media and users based on the similarity between them. Wong et al. (2013) studied retweet behavior to infer the ideological leanings of online media sources and popular Twitter accounts. Barber´a and Sood (2015) proposed a statistical model based on the follower relationships to media sources and Twitter personalities in order to estimate their ideological leaning. As for individual users, much recent work focused on stance detection to determine a person’s position on a topic including the deduction of political preferences (Barber´a, 2015; Barber and Rivero, 2015; Borge-Holthoefer et al., 2015; Cohen and Ruths, 2013; Colleoni et al., 2014; Conover et al., 2011b; Fowler et al., 2011; Hasan and Ng, 2014; Himelboim et al., 2013; Magdy et al., 2016a,b; Makazhanov et al., 2014; Trabelsi and Za¨ıane, 2018; Weber et al., 2013). User stance classification is aided by the tendency of users to form so-called “echo chambers”, where they engage with like-minded users (Himelboim et al., 2013; Magdy et al., 2016a), and the tendency of users’ beliefs to be persistent over time (Borge-Holthoefer et al., 2015; Magdy et al., 2016a; Pennacchiotti and Popescu, 2011b). Studies have examined the effectiveness of different features for stance detection, including textual features such as word n-grams and hashtags, network interactions such as retweeted accounts and mentions, and profile information such as user location (Borge-Holthoefer et al., 2015; Hasan and Ng, 2013; Magdy et al., 2016a,b; Weber et al., 2013). Network interaction features were shown to yield better results compared to using textual features (Magdy et al., 2016a; Wong et al., 2013). Sridhar et al. (2015) leveraged both user interactions and textual information when modeling stance and disagreement, using a probabilistic programming system that allows models to be specified using a declarative language. Trabelsi and Za¨ıane (2018) described an unsupervised stance detection method that determines the viewpoints of comments and of their authors. It analyzes online forum discussion threads, and therefore assumes a certain structure of the posts. 529 It also assumes that users tend to reply to each others’ comments when they are in disagreement, whereas we assume the opposite in this paper. Their model leverages the posts’ contents, whereas we only use the retweet behavior of users. Many methods involving supervised learning were proposed for stance detection. Such methods require the availability of an initial set of labeled users, and they use some of the aforementioned features for classification (Darwish et al., 2018; Magdy et al., 2016b; Pennacchiotti and Popescu, 2011a). Such classification can label users with precision typically ranging between 70% and 90% (Rao et al., 2010; Pennacchiotti and Popescu, 2011a). Label propagation is a semisupervised method that starts with a seed list of labeled users and propagates the labels to other users who are similar based on the accounts they follow or retweet (Barber´a and Sood, 2015; BorgeHolthoefer et al., 2015; Weber et al., 2013). While label propagation may label users with high precision (often above 95%), it is biased towards users with more extreme views; moreover, careful choice of thresholds is often required, and post-checks are needed to ensure quality. Abu-Jbara et al. (2013) and more recently Darwish et al. (2020) used unsupervised stance detection, where users are mapped into a lower dimensional space based on user-user similarity, and then clustered to find core sets of users representing different stances. This was shown to be highly effective with nearly perfect clustering accuracy for polarizing topics, and it requires no manual labeling of users. Here, we use the same idea, but we combine it with supervised classification based on retweets in order to increase the number of labeled users (Darwish, 2018). Other methods for user stance detection include collective classification (Duan et al., 2012), where users in a network are jointly labeled and classification in a low-dimensional user-space (Darwish et al., 2017). As for predicting political leaning or sentiment, this problem was studied previously as a supervised learning problem, where a classifier learns from a set of manually labeled tweets (Pla and Hurtado, 2014; Bakliwal et al., 2013; Bermingham and Smeaton, 2011). Similarly, Volkova et al. (2014) predicted Twitter users’ political affiliation (being Republican or Democratic), using their network connections and textual information, relying on user-level annotations. 3 Data Collection We obtained data on eight topics that are considered polarizing in the USA (Darwish et al., 2020), shown in Table 1. They include a mix of long-standing issues such as racism and gun control, temporal issues such as the nomination of Judge Brett Kavanaugh to the US Supreme Court and Representative Ilhan Omar’s polarizing remarks, as well as non-political issues such as the potential dangers of vaccines. Further, though long-standing issues typically show right– left polarization, stances towards Omar’s remarks are not as clear, with divisions on the left as well. Since we are interested in US users, we filtered some tweets to retain such by users who have stated that their location was USA. We used a gazetteer that included words that indicate USA as a country (e.g., America, US), as well as state names and their abbreviations (e.g., Maryland, MD). Other data that we used in our experiments is a collection of articles that were cited by users from the tweets collection and that originate from media, whose bias is known, i.e., is discussed on the Media Bias/Fact Check website. 4 User Stance Detection In order to analyze the stance of influencers on a given topic, we first find the stances of Twitter users, and then we project them to the influencers that the users cite. A central (initial) assumption here is that if a user includes a link to some article in their tweet, they are more likely to agree or endorse the article’s message. Similarly, when a user retweets a tweet verbatim without adding any comments, they are more likely to agree with that tweet. We label a large number of users with their stance for each topic using a two-step approach, namely projection and clustering and supervised classification. For the projection and clustering step, we identify clusters of core vocal users using the unsupervised method described in (Darwish et al., 2020). In this step, users are mapped to a lower dimensional space based on their similarity, and then they are clustered. After performing this unsupervised learning step, we train a supervised classifier using the two largest identified clusters in order to tag many more users. For that, we use FastText, a deep neural network text classifier, that has been shown to be effective for various text classification tasks (Joulin et al., 2017). 530 Topic Keywords Date Range No. of Tweets Climate change #greendeal, #environment, #climate, #climatechange, #carbonfootprint, #climatehoax, #climategate, #globalwarming, #agw, #renewables Feb 25–Mar 4, 2019 1,284,902 Gun control/rights #gun, #guns, #weapon, #2a, #gunviolence, #secondamendment, #shooting, #massshooting, #gunrights, #GunReformNow, #GunControl, #NRA Feb 25–Mar 3, 2019 1,782,384 Ilhan Omar remarks on Israel lobby IlhanOmarIsATrojanHorse, #IStandWithIlhan, #ilhan, #Antisemitism, #IlhanOmar, #IlhanMN, #RemoveIlhanOmar, #ByeIlhan, #RashidaTlaib, #AIPAC, #EverydayIslamophobia, #Islamophobia, #ilhan Mar 1–9, 2019 2,556,871 Illegal immigration #border, #immigration, #immigrant, #borderwall, #migrant, #migrants, #illegal, #aliens Feb 25–Mar 4, 2019 2,341,316 Midterm midterm, election, elections Oct 25–27, 2018 520,614 Racism & police brutality #blacklivesmatter, #bluelivesmatter, #KKK, #racism, #racist, #policebrutality, #excessiveforce, #StandYourGround, #ThinBlueLine Feb 25–Mar 3, 2019 2,564,784 Kavanaugh Nomination Kavanaugh, Ford, Supreme, judiciary, Blasey, Grassley, Hatch, Graham, Cornyn, Lee, Cruz, Sasse, Flake, Crapo, Tillis, Kennedy, Feinstein, Leahy, Durbin, Whitehouse, Klobuchar, Coons, Blumenthal, Hirono, Booker, Harris Sept. 28-30, 2018 & Oct. 6-9, 2018 2,322,141 Vaccination benefits & dangers #antivax, #vaxxing, #BigPharma, #antivaxxers, #measlesoutbreak, #Antivacine, #VaccinesWork, #vaccine, #vaccines, #Antivaccine, #vaccinestudy, #antivaxx, #provaxx, #VaccinesSaveLives, #ProVaccine, #VaxxWoke, #mykidmychoice Mar 1–9, 2019 301,209 Table 1: Polarizing topics used in study. Once we have expanded our sets of labeled users, we identify influencers that are most closely associated with each group using a modified version of the so-called valence score, which varies in value between −1 and 1. If an influencer is being cited evenly between the groups, then it would be assigned a valence score close to zero. Conversely, if one group disproportionately cites an influencer compared to another group, then it would be assigned a score closer to −1 or 1. We perform these steps for each of the given topics, and finally we summarize the stances across all topics. Below, we explain each of these steps in more detail. 4.1 Projection and Clustering Given the tweets for each topic, we compute the similarity between the top 1,000 most active users. To compute similarity, we construct a vector for each user containing the number of all the accounts that a user has retweeted, and then we compute the pairwise cosine similarity between them. For example, if user A has only retweeted user B 3 times, user C 5 times and user E 8 times, then user A’s vector would be (0, 3, 5, 0, 8, 0, 0, ... 0). Solely using the retweeted accounts as features has been shown to be effective for stance classification (Darwish et al., 2020; Magdy et al., 2016a). Finally, we perform dimensionality reduction and we project the users using Uniform Manifold Approximation and Projection (UMAP). When performing dimensionality reduction, UMAP places users on a two-dimensional plane such that similar users are placed closer together and dissimilar users are pushed further apart. Figure 2 shows the top users for the “midterm” topic projected with UMAP onto the 2D plane. After the projection, we use Mean Shift to cluster the users as shown in Figure 2. This is the best setup described in (Darwish et al., 2020). 1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Cluster Cluster 0 Cluster 1 Not clustered Figure 2: Top active users on the midterm topic clustered using UMAP + Mean Shift. Clustering high-dimensional data often yields suboptimal results, but can be improved by projecting to a low-dimensional space (Darwish et al., 2020). 4.2 Supervised Classification Since unsupervised stance detection is only able to classify the most vocal users, which only constitute a minority of the users, we wanted to assign stance labels to as many additional users as we can. Given the clusters of users that we obtain for each topic, we retain the two largest clusters for each topic, and we assign cluster labels to the users contained therein. Next, we use all the automatically labeled users for each topic to train a supervised classifier using the accounts that each user retweeted as features (same as the features we used to compute user similarity earlier). For classification, we train a FastText model using the default parameters, and then we classify all other users with five or more retweeted accounts, only accepting the classification if FastText was more than 80% confident (70–90% yielded nearly identical results). 531 Topic No. of Users Clustered Classified Users Users climate change 724,470 860 5,851 gun control 973,206 813 11,281 Ilhan Omar 563,706 723 25,484 immigration 940,840 901 22,456 midterm elections 312,954 860 12,765 police brutality & racism 1,175,081 891 18,978 Kavanaugh 809,835 891 10,100 vaccine 194,245 545 556 Table 2: Users per topic: total number of users, umber of clustered users, and number of automatically labeled users. In order to obtain a rough estimate of the accuracy of the model, we trained FastText using a random 80% subset of the clustered users for each topic and we tested on the remaining 20%. The accuracy was consistently above 95% for all topics. This does not mean that this model can predict the stance for all users that accurately — the clustered users were selected to be the most active ones. Rather, it shows that the classifier can successfully capture what the previous, unsupervised step has already learned. Table 2 lists the total number of users who authored the tweets for each topic, the number of users who were automatically clustered using the aforementioned unsupervised clustering technique, and the number of users who were automatically labeled afterwards using supervised classification. Given that we applied unsupervised stance detection to the most active 1,000 users, the majority of the users appeared in the largest two clusters (shown in Table 2). 4.3 Calculating Valence Scores Given all the labeled users for each topic, we computed a valence score for each influencer. As mentioned earlier, the valence score ranges between [−1,1], where a value close to 1 implies it is strongly associated with one group of users, −1 shows it is strongly associated with the other group of users, and 0 means that it is being shared or cited by both groups. The original valence score described by Conover et al. (2011a) is calculated as follows: V(u) = 2 t f(u,C0) total(C0) t f(u,C0) total(C0) + t f(u,C1) total(C1) −1 (1) where t f(u,C0) is the number of times (term frequency) item u is cited by group C0, and total(C0) is the sum of the term frequencies of all items cited by C0. t f(u,C1) and total(C1) are defined in a similar fashion. We use the above equation to compute valence scores for the retweeted accounts, but we using a modified version for calculating the score for influencers (I): V(I) = 2 t f(I,C0) total(C0) t f(I,C0) total(C0) + t f(I,C1) total(C1) −1 (2) where t f(I,Ci) = ∑a∈I TCi[ln(Cnt(a,Ci))+1] total(Ci) = ∑I t f(I,Ci) In the latter equation, Cnt(a,Ci) is the number of times article a was cited by users from cluster Ci. In essence, we are replacing term frequencies with the natural log of the term frequencies. We opted to modify the equation in order to tackle the following issue: if users from one of the clusters, say C1, cite only one single article from some media source a large number of times (e.g., 2,000 times), while users from the other cluster (C0) cite 10 other articles from the same media 50 times each, then using equation 1 would result in a valence score of −0.6. We would then regard the given media as having an opposing stance to the stance of users in C0. Alternatively, using the natural log would lead to a valence score close to 0.88. Thus, dampening term frequencies using the natural log has the desired effect of balancing between the number of articles being cited by each group and the total number of citations. We bin the valence scores between −1 and 1 into five equal size bands as follows: Cat(V) =                −−, if s ∈[−1,−0.6) −, if s ∈[−0.6,−0.2) 0, if s ∈[−0.2,0.2) +, if s ∈[0.2,0.6) ++, if s ∈[0.6,1] (3) 5 Characterizing the Influencers We use valence to characterize the leaning of all cited influencers for each of the topics. Table 3 shows the valence categories for the top-cited media sources across all topics. It also shows each media’s factuality of reporting, i.e., trustworthiness, and bias (ranging from far-left to far-right) as determined by mediaBiasFactCheck.com. Since the choice of which cluster should be C0 and which would be C1 is arbitrary, we can multiply by −1 the valence scores for any topic and the meaning of the results would stay the same. 532 EXTREME-LEFT LEFT LEFT-CENTER CENTER RIGHT-CENTER RIGHT EXTREME-RIGHT Bias -0 + ++ Valence Category 0 3 24 18 31 110 58 0 2 8 3 8 9 3 0 4 25 13 20 4 0 0 14 45 21 14 2 0 3 101 148 70 36 6 3 0 20 40 60 80 100 120 140 Figure 3: Valence category vs. bias: number of media. We resorted to doing so for some topics in order to align the extreme valence bands across all topics. Given tweet samples from users in a given cluster for a given topic, labeling that cluster manually was straightforward with almost no ambiguity. Table 4 shows the most frequently cited media source for each topic and for each valence band. Of the 5,406 unique media sources that have been cited in tweets across all topics, 806 have known political bias from mediaBiasFactCheck. com. Figure 3 shows the confusion matrix between our valence categories and the goold labels from mediaBiasFactCheck.com. We notice that many of the media that have a negative valence score (categories −and −−) are classified on the right side of the political spectrum by mediaBiasFactCheck.com, while most media with positive scores (categories + and ++) are classified as slightly left-leaning. Although there are almost no extreme-left cases, there is a correlation between bias and our valence score. mediaBiasFactCheck.com seems to rarely categorize media sources as “extreme-left”. This could be a reflection of reality or it might imply that mediaBiasFactCheck.com has an inherent bias. We also computed the valence scores for the top-200 retweeted accounts, and we assigned each account a valence category based on the score. Independently, we asked a person who is well-versed with US politics to label all the accounts as left, center, or right. When labeling accounts, right-leaning include those expressing support for Trump, the Republican party, and gun rights, opposition to abortion, and disdain for Democrats. 1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Figure 4: The top-200 retweeted accounts, projected on a number line according to their average valence. As for left-leaning accounts, they include those attacking Trump and the Republicans, and expressing support for the Democratic party and for Liberal social positions. If the retweeted account happens to be a media source, we used mediaBiasFactCheck.com. Table 5 compares the per-topic valence for each retweeted account along with the average category and the true label. It is noteworthy that all top-200 retweeted accounts have extreme valence categories on average across all topics. Their average valence scores, with one exception, appear between −0.6 and −1.00 for right, and between 0.6 and 1 for left (see Figure 4). Of those manually and independently tagged accounts, all that were tagged as left-leaning have a strong positive valence score and all that were tagged as right-leaning have a strong negative valence score. Only two accounts were manually labeled as center, namely Reuters and CSPAN, which is a US channel that broadcasts Federal Government proceedings, and they had valence scores of 0.55 and 0.28, respectively. Though their absolute values are lower than those of all other sources, they are mapped to the + valence category. Table 3 summarizes the valence scores for the media across all topics. Table 4 lists the most cited media sources for each topic and for each of the five valence bands. The order of the bands from top to bottom is: ++, +, 0, −and −−. The table also includes the credibility and the political leaning tags from mediaBiasFactCheck.com. The key observations from the table as follows: 1. Most right-leaning media appear overwhelmingly in the −and −−valence categories. Conversely, left-leaning media appear in all valence categories, except for the −−category. This implies that left-leaning users cite right-leaning media sparingly. We looked at some instances where right-leaning users cited left-leaning media, and we found that in many cases the cited articles reinforced a right-leaning viewpoint. For example, right-leaning users shared a video from thehill.com, a left-center site, 2,398 times for the police racism topic. The video defended Trump against charges of racism by Lynne Patton, a longtime African-American associate of Trump. 533 Medium factuality bias Average climate change gun control ilhan immigration midterm police & racism Kavanaugh vaccine thehill.com H L-C +++ 0 ++ + + + + ++ ++ theguardian.com H L-C ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ washingtonpost.com H L-C ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ breitbart.com VL Far R −− −− −− −− −− −− −− −− −− −− −− foxnews.com M R −− −− −− −− −− −− −− −− −− −− nytimes.com H L-C ++ ++ ++ + ++ + + + ++ ++ ++ cnn.com M L +++ + ++ + ++ + + ++ + apple.news +++ 0 0 + 0 0 + + ++ dailycaller.com M R −− −− −− −− −− −− −− −− −− −− rawstory.com M L ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ huffingtonpost.com H L ++ ++ ++ ++ ++ ++ ++ + ++ ++ ++ truepundit.com L −− −− −− −− −− −− −− −− −− −− −− nbcnews.com H L-C +++ −− ++ + ++ + + ++ ++ westernjournal.com M R −− −− −− −− −− −− −− −− −− −− reuters.com VH C +++ + ++ ++ + + + + ++ washingtonexaminer.com H R −− −− −− −− −− −− −− 0 −− −− thegatewaypundit.com VL Far R −− −− −− −− −− −− −− −− −− −− politico.com H L-C +++ + + + + ++ + + ++ npr.org VH L-C +++ 0 ++ ++ ++ 0 ++ ++ ++ townhall.com M R −− −− −− −− −− −− −− −− −− −− −− msn.com H L-C +++ + + + 0 ++ 0 ++ 0 nypost.com M R-C −−− −− 0 − − + −− − vox.com H L ++ ++ ++ ++ ++ ++ ++ ++ + ++ ++ thedailybeast.com H L ++ ++ ++ ++ ++ + ++ ++ + ++ ++ bbc.com H L-C +++ + + ++ ++ 0 + + ++ independent.co.uk H L-C ++ ++ ++ ++ + ++ ++ ++ + ++ ++ ilovemyfreedom.org VL Far R −− −− −− −− −− −− −− −− −− −− thinkprogress.org M L ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ dailywire.com M R −− −− −− −− −− −− −− −− −− −− ++ pscp.tv −−− −− −− −− 0 −− 0 − dailymail.co.uk VL R −−− − 0 − − − − −− −− msnbc.com M L ++ ++ ++ ++ ++ ++ ++ + ++ ++ dailykos.com M L ++ ++ ++ ++ ++ ++ ++ + ++ ++ bloomberg.com H L-C +++ + ++ 0 ++ + 0 + ++ usatoday.com H L-C +++ + + 0 + ++ + 0 + Table 3: Media valence categories for each topic with included average column. Plus (+) and minus (−) signify left or right leaning, respectively. Factuality: Very High (VH), High (H), Mixed (M), Low (L), Very Low (VL). Bias: Left (L), Left-Center (L-C), Center (C), Right-Center (R-C), Right (R), Far Right (Far R). Blank cells mean that we did not have information. 2. Most right-leaning sources in the −−category have mixed, low, or very low factuality. Conversely, most left-leaning sites appearing in the − valence category have high or very high factuality. Similarly for the vaccine topic, where high credibility sources, such as fda.gov and nih.gov, are frequently cited by anti-vaccine users, mostly to support their beliefs. 3. The placements of sources in different categories are relatively stable across topics. For example, washingtonPost.com and theguardian.com exclusively appear in the ++ category, while breitbart.com and foxnews.com consistently appear in the −−category. 6 Predicting Media Bias Given the stances of users on the aforementioned eight topics, we leverage this information to predict media bias. Specifically, we describe in this section how we make use of the valence scores, as well as other features, namely graph and contextualized text embeddings, to train supervised classifiers for this purpose. Valence Scores. We use valence scores in two ways. First, we average the corresponding valence across the different polarizing topics to obtain an average valence score for a given target news medium. This is an unsupervised method for computing polarity. Second, we train a Logistic Regression classifier that uses the calculated valence scores as features and annotations from mediaBiasFactCheck.com as gold target labels in order to predict the general political leaning of a target news medium. We merged “left” and “extreme left”, and similarly we merged “right” and “extreme right”. We discarded media labeled as being “leftcenter” and “right-center”. Each news medium was represented by an 8-dimensional vector containing the valence scores for the above topics. In the experiments, we used the lbfgs solver and C = 0.1. We used two measures to evaluate its performance, namely accuracy and mean absolute error (MAE). The latter is calculated by considering the different classes as ordered and equally distant from each other, i.e., if the model predicts right and the true label is left, this amounts to an error equal to 2. 534 climate change gun control Ilhan Omar immigration theguardian.com H L-C thehill.com H L-C washingtonpost.com H L-C theguardian.com H L-C washingtonpost.com H L-C cnn.com M L theguardian.com H L-C washingtonpost.com H L-C independent.co.uk H L-C nytimes.com H L-C mondoweiss.net H L cnn.com M L wef.ch npr.org VH L-C thinkprogress.org M L huffingtonpost.com H L vox.com H L washingtonpost.com H L-C haaretz.com H L-C npr.org VH L-C nytimes.com H L-C politico.com H L-C nytimes.com H L-C thehill.com H L-C bbc.com H L-C usatoday.com H L-C thehill.com H L-C nytimes.com H L-C cnn.com M L msn.com H L-C politico.com H L-C reuters.com VH C reuters.com VH C bbc.com H L-C cnn.com M L politico.com H L-C bloomberg.com H L-C cnbc.com H L-C apple.news usatoday.com H L-C thehill.com H L-C apple.news mediaite.com H L apple.news apple.news sun-sentinel.com H R-C usatoday.com H L-C msn.com H L-C npr.org VH L-C nypost.com M R-C yahoo.com M L-C pscp.tv seattletimes.com H L-C dailymail.co.uk VL R timesofisrael.com H L-C whitehouse.gov M R newsweek.com M L mailchi.mp theatlantic.com H L-C texastribune.org H C change.org H L washingtontimes.com H R-C nypost.com M R-C dailymail.co.uk VL R latimes.com H L-C breaking911.com VL jpost.com H C nypost.com M R-C dailymail.co.uk VL R chicagotribune.com H R-C dailymail.co.uk VL R zerohedge.com M climatechangedispatch.com rt.com M R-C algemeiner.com H R-C ir.shareaholic.com cnbc.com H L-C forbes.com M R-C startribune.com H L-C breaking911.com VL forbes.com M R-C breitbart.com VL Far R foxnews.com M R breitbart.com VL Far R breitbart.com VL Far R foxnews.com M R breitbart.com VL Far R illegalaliencrimereport.com dailycaller.com M R ammoland.com H R townhall.com M R washingtonexaminer.com H R tambonthongchai.com dailycaller.com M R change.org H L foxnews.com M R wattsupwiththat.com L bearingarms.com M R hannity.com westernjournal.com M R midterm police & racism Kavanaugh vaccine washingtonpost.com H L-C washingtonpost.com H L-C thehill.com H L-C thehill.com H L-C theguardian.com H L-C rawstory.com M L washingtonpost.com H L-C theguardian.com H L-C rawstory.com M L huffingtonpost.com H L cnn.com M L washingtonpost.com H L-C tacticalinvestor.com theguardian.com H L-C nytimes.com H L-C vaxopedia.org vox.com H L nytimes.com H L-C huffingtonpost.com H L nytimes.com H L-C thehill.com H L-C thehill.com H L-C politico.com H L-C cnn.com M L reuters.com VH C apple.news apple.news statnews.com H C nytimes.com H L-C cnn.com M L yahoo.com M L-C latimes.com H L-C cnn.com M L nbcnews.com H L-C apnews.com VH C cbc.ca H L-C dailykos.com M L thedailybeast.com H L latimes.com H L-C usatoday.com H L-C apple.news msn.com H L-C usatoday.com H L-C cdc.gov VH sagagist.com.ng pscp.tv mediaite.com H L medium.com M L-C bbc.com H L-C bloomberg.com H L-C theweek.com H L-C newsroom.fb.com alzwaaj.com politics.theonion.com lawandcrime.com help.senate.gov washingtonexaminer.com H R rollcall.com VH C cnbc.com H L-C msn.com H L-C dailymail.co.uk VL R mediaite.com H L pscp.tv change.org H L pbs.org H L-C dailymail.co.uk VL R nypost.com M R-C fda.gov zerohedge.com M news.sky.com H L-C ir.shareaholic.com variety.com ajc.com H L-C newsone.com H L-C rollcall.com VH C veritablenouvelordre.forumcanada.org aol.com H L-C c-span.org VH C breitbart.com VL Far R breitbart.com VL Far R foxnews.com M R ncbi.nlm.nih.gov VH foxnews.com M R defensemaven.io truepundit.com L vaccineimpact.com dailycaller.com M R foxnews.com M R dailycaller.com M R naturalnews.com M ilovemyfreedom.org VL Far R thegatewaypundit.com VL Far R breitbart.com VL Far R vaccines.me westernjournal.com M R nypost.com M R-C thegatewaypundit.com VL Far R thevaccinereaction.org Table 4: Top 5 websites per valence category for each topic. Account Truth Average climate change gun control ilhan immigration midterm police & racism Kavanaugh vaccine realdonaldtrump R −− −− −− 0 −− −− −− −− −− −− charliekirk11 R −− −− −− −− −− −− −− −− −− kylegriffin1 L ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ dbongino R −− −− −− −− −− −− −− −− −− −− kamalaharris L ++ ++ ++ ++ ++ ++ ++ ++ ++ mitchellvii R −− −− −− −− −− −− −− −− −− −− realsaavedra R −− −− −− −− −− −− −− −− −− krassenstein L ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ realjack R −− −− −− −− −− −− −− −− −− −− −− nbcnews L ++ ++ ++ ++ ++ + ++ ++ ++ ++ ++ education4libs R −− −− −− −− −− −− −− −− −− −− nra R −− −− −− −− −− −− −− donaldjtrumpjr R −− −− −− −− −− −− −− −− shannonrwatts L ++ ++ ++ ++ ++ ++ ++ ++ thehill L ++ ++ ++ ++ ++ + ++ + + ++ ++ realjameswoods R −− −− −− −− −− −− −− −− −− gopchairwoman R −− −− −− −− −− −− −− jackposobiec R −− −− −− −− −− −− −− −− −− −− funder L ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ cnn L ++ ++ ++ ++ ++ ++ ++ 0 ++ ++ ++ ajplus L ++ ++ ++ ++ ++ ++ ++ ++ ++ 0 ++ rashidatlaib L ++ ++ ++ ++ ++ ++ ++ + stevescalise R −− −− −− −− −− −− jordan sather ? −− −− −− −− −− −− −− −− aoc L ++ ++ ++ ++ ++ ++ ++ Table 5: User valence categories for each topic, preceded by an average column, and a ground truth label. When a cell is blank, there is insufficient data for that particular topic. 535 No Valence With Valence Acc MAE Acc MAE Baseline 1 (majority class) 43.3 .856 43.3 .856 Baseline 2 (average valence) – – 68.0 .330 Valence scores – – 75.2 .278 BERT (article title) 60.6 .539 78.3 .264 BERT (article content) 61.1 .526 79.2 .255 BERT (title+content) 62.2 .510 80.8 .228 BERT(Tweet) 64.0 .485 73.6 .302 GraphEmbM 63.5 .468 69.1 .380 GraphEmbH 66.9 .425 71.8 .347 GraphEmbM+H 68.0 .400 79.0 .251 GraphEmbM+H+BERT (tweet) 72.5 .358 80.5 .230 GraphEmbM+H+BERT (tweet, content) 76.1 .311 81.2 .221 GraphM+H+BERT (tweet, title, content) 78.1 .284 82.6 .206 Table 6: Predicting media bias. The results are shown in Table 6, where we can see that using the average valence score yields 68.0% accuracy (0.330 MAE) compared to 75.2% accuracy (0.278 MAE) when using the eight individual valence scores as features. Graph embeddings. We further use graph embeddings, generated by building a User-to-Hashtag graph (U2H) and a User-to-Mention (U2M) graph and then running node2vec on both (Atanasov et al., 2019), producing two types of graph embeddings. When using graph embeddings, we got worse results compared to our previous setup with valence scores (see Table 6). However, when we combine them with the valence scores, we observe a sizable boost in performance, up to 11% absolute. Tweets. We also experimented with BERT-base. We used the text of the tweets that cite the media we are classifying. For classification, we fed BERT representations of tweets to a dense layer with softmax output to fine-tune it with the textual contents of the tweets. We trained at the tweet level, and we averaged the scores (from softmax) for all tweets from the same news medium to obtain an overall label for that news medium. The accuracy is much lower than for the valence scores: 64.0% accuracy vs. 75.2% for supervised and 68.0% for unsupervised. Article titles and text. Using the BERT setup for Tweets, we used the titles and the full text of up to 100 articles from each of the target media. When using the full text of articles, we balanced the number of articles per news medium. We trained two separate BERT models, one on the titles and another one on the full text (content). Both models did worse than using valence alone, but the combination improved over valence only. System Combination. We combined different setups including using all the aforementioned models in combination. Using graph embeddings (GraphH + GraphM) with BERT embeddings (Tweet+Title+Content) and valence yielded the best results with accuracy of 82.6% and MAE of .206. If we remove valence from the combination, the accuracy drops by 4.5% while MAE jumps by .078, absolute. This suggests that valence is a very effective feature that captures important information, complementary to what can be modeled using graph and contextualized text embeddings. 7 Conclusion and Future Work We have presented a method for predicting the general political leaning of media sources and popular Twitter users, as well as their stances on specific polarizing topics. Our method uses retweeted accounts, and a combination of dimensionality reduction and clustering algorithms, namely UMAP and Mean Shift, in order to produce sets of users that have opposing opinions on specific topics. Next, we expand the discovered sets using supervised learning that is trained on the automatically discovered user clusters. We are able to automatically tag large sets of users according to their stance of preset topics. Users’ stances are then projected to the influencers that are being cited in the tweets for each of the topics using the so-called valence score. The projection allows us to tag a large number of influencers with their stances on specific issues and with their political leaning in general (i.e., left vs. right) with high accuracy and with minimal human effort. The main advantage of our method is that it does not require manual labeling of entity stances, which requires both topical expertise and time. We also investigated the quality of the valence features, and we found that valence scores help to predict media bias with high accuracy. In future work, we plan to increase the number of topics that we use to characterize media. Ideally, we would like to automatically identify such polarizing topics. Doing so would enable us to easily retarget this work to new countries and languages. Acknowledgments This research is part of the Tanbih project1, which aims to limit the effect of “fake news,” propaganda and media bias by making users aware of what they are reading. 1http://tanbih.qcri.org/ 536 References Amjad Abu-Jbara, Ben King, Mona Diab, and Dragomir Radev. 2013. Identifying opinion subgroups in Arabic online discussions. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL ’13, pages 829– 835, Sofia, Bulgaria. Jisun An, Meeyoung Cha, Krishna Gummadi, Jon Crowcroft, and Daniele Quercia. 2012. Visualizing media bias through Twitter. In Proceedings of the International AAAI Conference on Web and Social Media, Dublin, Ireland, pages 2–5. Atanas Atanasov, Gianmarco De Francisci Morales, and Preslav Nakov. 2019. Predicting the role of political trolls in social media. In Proceedings of the 2019 SIGNLL Conference on Computational Natural Language Learning, CoNLL ’19, pages 1023– 1034, Hong Kong, China. Akshat Bakliwal, Jennifer Foster, Jennifer van der Puil, Ron O’Brien, Lamia Tounsi, and Mark Hughes. 2013. Sentiment analysis of political tweets: Towards an accurate classifier. In Proceedings of the Workshop on Language Analysis in Social Media, pages 49–58, Atlanta, GA, USA. Pablo Barber´a. 2015. Birds of the same feather tweet together: Bayesian ideal point estimation using Twitter data. Political Analysis, 23(1):76–91. Pablo Barber´a and Gaurav Sood. 2015. Follow your ideology: Measuring media ideology on social networks. In Proceedings of the Annual Meeting of the European Political Science Association, Vienna, Austria. Pablo Barber and Gonzalo Rivero. 2015. Understanding the political representativeness of Twitter users. Social Science Computer Review, 33(6):712–729. Adam Bermingham and Alan Smeaton. 2011. On using Twitter to monitor political sentiment and predict election results. In Proceedings of the Workshop on Sentiment Analysis where AI meets Psychology, SAAIP ’11, pages 2–10, Chiang Mai, Thailand. Javier Borge-Holthoefer, Walid Magdy, Kareem Darwish, and Ingmar Weber. 2015. Content and network dynamics behind Egyptian political polarization on Twitter. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW ’15, pages 700– 711, Vancouver, BC, Canada. Raviv Cohen and Derek Ruths. 2013. Classifying political orientation on Twitter: It’s not easy! In Proceedings of the 7th International AAAI Conference on Weblogs and Social Media, ICWSM ’13, pages 91–99, Cambridge, MA, USA. Elanor Colleoni, Alessandro Rozza, and Adam Arvidsson. 2014. Echo chamber or public sphere? Predicting political orientation and measuring political homophily in Twitter using big data. Journal of Communication, 64(2):317–332. Michael Conover, Jacob Ratkiewicz, Matthew R Francisco, Bruno Gonc¸alves, Filippo Menczer, and Alessandro Flammini. 2011a. Political polarization on Twitter. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, ICWSM ’11, pages 89–96, Barcelona, Spain. Michael D Conover, Bruno Gonc¸alves, Jacob Ratkiewicz, Alessandro Flammini, and Filippo Menczer. 2011b. Predicting the political alignment of Twitter users. In Proceedings of the 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third Inernational Conference on Social Computing (SocialCom), pages 192–199, Boston, MA, USA. Kareem Darwish. 2018. To Kavanaugh or not to Kavanaugh: That is the polarizing question. arXiv preprint arXiv:1810.06687. Kareem Darwish, Michael Aupetit, Peter Stefanov, and Preslav Nakov. 2020. Unsupervised user stance detection on Twitter. In Proceedings of the International AAAI Conference on Web and Social Media, ICWSM ’20, Atlanta, GA, USA. Kareem Darwish, Walid Magdy, Afshin Rahimi, Timothy Baldwin, and Norah Abokhodair. 2018. Predicting online islamophobic behavior after #ParisAttacks. The Journal of Web Science, 4(3):34–52. Kareem Darwish, Walid Magdy, and Tahar Zanouda. 2017. Improved stance prediction in a user similarity feature space. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM ’17, pages 145–148, Sydney, Australia. Yajuan Duan, Furu Wei, Ming Zhou, and Heung-Yeung Shum. 2012. Graph-based collective classification for tweets. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM ’12, pages 2323–2326, Maui, HI, USA. James H Fowler, Michael T Heaney, David W Nickerson, John F Padgett, and Betsy Sinclair. 2011. Causality in political networks. American Politics Research, 39(2):437–480. Matthew Gentzkow and Jesse M Shapiro. 2011. Ideological segregation online and offline. The Quarterly Journal of Economics, 126(4):1799–1839. Tim Groseclose and Jeffrey Milyo. 2005. A measure of media bias. The Quarterly Journal of Economics, 120(4):1191–1237. 537 Kazi Saidul Hasan and Vincent Ng. 2013. Stance classification of ideological debates: Data, models, features, and constraints. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, IJCNLP ’13, pages 1348–1356, Nagoya, Japan. Kazi Saidul Hasan and Vincent Ng. 2014. Why are you taking this stance? Identifying and classifying reasons in ideological debates. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP ’14, pages 751–762, Doha, Qatar. Itai Himelboim, Stephen McCreery, and Marc Smith. 2013. Birds of a feather tweet together: Integrating network and content analyses to examine crossideology exposure on Twitter. Journal of ComputerMediated Communication, 18(2):40–60. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL ’17, pages 427– 431, Valencia, Spain. Walid Magdy, Kareem Darwish, Norah Abokhodair, Afshin Rahimi, and Timothy Baldwin. 2016a. #isisisnotislam or #deportallmuslims?: Predicting unspoken views. In Proceedings of the 8th ACM Conference on Web Science, WebSci ’16, pages 95–106, Hannover, Germany. Walid Magdy, Kareem Darwish, and Ingmar Weber. 2016b. #FailedRevolutions: Using Twitter to study the antecedents of ISIS support. First Monday, 21(2). Aibek Makazhanov, Davood Rafiei, and Muhammad Waqar. 2014. Predicting political preference of Twitter users. Social Network Analysis and Mining, 4(1):1–15. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval ’16, pages 31–41, San Diego, CA, USA. Jonathan Scott Morgan, Cliff Lampe, and Muhammad Zubair Shafiq. 2013. Is news sharing on Twitter ideologically biased? In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, CSCW 13, pages 887–896, San Antonio, TX, USA. Marco Pennacchiotti and Ana-Maria Popescu. 2011a. Democrats, Republicans and Starbucks afficionados: user classification in Twitter. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 11, pages 430–438, San Diego, CA, USA. Marco Pennacchiotti and Ana-Maria Popescu. 2011b. A machine learning approach to Twitter user classification. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, ICWSM ’11, pages 281–288, Barcelona, Spain. Ferran Pla and Llu´ıs-F. Hurtado. 2014. Political tendency identification in Twitter using sentiment analysis techniques. In Proceedings of the 25th International Conference on Computational Linguistics, COLING ’14, pages 183–192, Dublin, Ireland. Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user attributes in Twitter. In Proceedings of the 2nd International Workshop on Search and Mining UserGenerated Contents, SMUC ’10, pages 37–44, Toronto, ON, Canada. Filipe N Ribeiro, Lucas Henrique, Fabricio Benevenuto, Abhijnan Chakraborty, Juhi Kulshrestha, Mahmoudreza Babaei, and Krishna P Gummadi. 2018. Media bias monitor: Quantifying biases of social media news outlets at large-scale. In Proceedings of the Twelfth International AAAI Conference on Web and Social Media, ICWSM ’18, pages 290– 299, Stanford, CA, USA. Dhanya Sridhar, James Foulds, Bert Huang, Lise Getoor, and Marilyn Walker. 2015. Joint models of disagreement and stance in online debate. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, AXLL-IJCNLP ’15, pages 116–125, Beijing, China. Amine Trabelsi and Osmar R Za¨ıane. 2018. Unsupervised model for topic viewpoint discovery in online debates leveraging author interactions. In Proceedings of the Twelfth International AAAI Conference on Web and Social Media, ICWSM ’18, pages 425– 433, Stanford, CA, USA. Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political preferences from streaming communications. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL ’14, pages 186– 196, Baltimore, MD, USA. Ingmar Weber, Venkata R. Kiran Garimella, and Alaa Batayneh. 2013. Secular vs. Islamist polarization in Egypt on Twitter. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM ’13, pages 290–297, Niagara, ON, Canada. Felix Ming Fai Wong, Chee Wei Tan, Soumya Sen, and Mung Chiang. 2013. Quantifying political leaning from tweets and retweets. In Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media, ICWSM ’13, pages 640–649, Boston, MA, USA.
2020
50
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5651–5656 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5651 On the Importance of Diversity in Question Generation for QA Md Arafat Sultan† Shubham Chandel‡ Ramón F. Astudillo† Vittorio Castelli† †IBM Research AI, T.J. Watson Research Center, New York, USA ‡New York University, New York, USA {arafat.sultan,ramon.astudillo}@ibm.com, [email protected], [email protected] Abstract Automatic question generation (QG) has shown promise as a source of synthetic training data for question answering (QA). In this paper we ask: Is textual diversity in QG beneficial for downstream QA? Using top-p nucleus sampling to derive samples from a transformer-based question generator, we show that diversity-promoting QG indeed provides better QA training than likelihood maximization approaches such as beam search. We also show that standard QG evaluation metrics such as BLEU, ROUGE and METEOR are inversely correlated with diversity, and propose a diversity-aware intrinsic measure of overall QG quality that correlates well with extrinsic evaluation on QA. 1 Question Generation and Diversity Besides areas such as dialog (Bordes et al., 2017) and tutoring systems (Lindberg et al., 2013), automatic question generation (QG) has recently been applied with great success to generating synthetic training examples for question answering (QA) (Alberti et al., 2019; Dong et al., 2019). Yet an important question has remained unexplored: Does increased textual diversity in automatically generated questions lead to better QA? In Figure 1 we show four questions generated by one of our QG models (details in Section 2) from a SQuAD (Rajpurkar et al., 2016) passage and an answer span (the QG prompt). The questions are different not only lexically, but also in what information about the answer entity they draw upon and even their use of world knowledge, e.g., Tesla’s reputation as a “mad scientist”. Intuitively, such sample diversity, if sufficiently accurate, could provide QA models with rich training signal. Existing QG work has predominantly relied on customary beam search decoding for generation and n-gram similarity metrics such as BLEU for evaluation (Du et al., 2017; Alberti et al., 2019; On Tesla’s 75th birthday in 1931, Time magazine put him on its cover. The cover caption “All the world’s his power house” noted his contribution to electrical power generation. He received congratulatory letters from more than 70 pioneers in science and engineering, including Albert Einstein. ✏Who appeared on Time magazine’s cover on his 75th birthday? ✏Which famous scientist was in the cover of Time Magazine in 1931? ✏Which mad scientist received more than a 70 people congratulating him on his birthday? ✏What famous scientist was also 75? Figure 1: A passage with an underlined answer span ("Tesla"), and corresponding questions generated by our model. The generated questions exhibit both lexical and factual diversity. Dong et al., 2019; Zhang and Bansal, 2019).1 Such methods/metrics solely optimize/reward similarity with human-generated reference questions treated as the ground truth (GT). However, in many openended generation tasks where only one or a few of many possible GTs are available through human annotation, this approach directly penalizes diversity by discouraging deviation from the GT(s). In recent years, massively pre-trained neural language models (LMs) (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019) have revolutionized NLP. In open-ended text generation, these models show remarkable robustness under sampling (Radford et al., 2019; Holtzman et al., 2020). This observation, coupled with the examples presented in Figure 1, suggests that treating QG for QA as a more open-ended generation problem and relying on the power of modern text generators to produce diverse yet accurate samples might yield better QA results than the current approach of optimizing for the “most likely” question. We test this hypothesis by fine-tuning a pretrained transformer-based masked LM (Liu et al., 1http://aqleaderboard.tomhosking.co.uk/squad 5652 2019) for QG, and sampling questions from it using top-p nucleus sampling (Holtzman et al., 2020). Other diversity-promoting text generation techniques exist—both at training time (e.g., VAEs (Kingma and Welling, 2014)) and during inference (e.g., top-k sampling and diverse beam search (Vijayakumar et al., 2018))—that have been applied to various NLP tasks: language modeling (Bowman et al., 2016), dialog (Cao and Clark, 2017), visual QG (Jain et al., 2017; Fan et al., 2018), image captioning (Vijayakumar et al., 2018) and so on. We choose nucleus sampling because of its effectiveness, simplicity and speed. Our experiments lead to the following discoveries: −Nucleus sampling indeed produces better QA results than beam search, even when only one question is generated per prompt. −QG metrics that only reward similarity with GT are negatively correlated with diversity, and as a result, are inaccurate predictors of downstream QA performance of diversity-promoting QG. −A measure of QG can be devised that combines diversity with similarity to GT, showing strong correlations with QA performance. 2 Question Generation using RoBERTa We fine-tune a RoBERTa masked LM (Liu et al., 2019) for QG given an answer span within a textual context (as shown in Figure 1), and use nucleus sampling (Holtzman et al., 2020) for generation. Model: Various transformer architectures can be used for text generation (Raffel et al., 2019). Following (Dong et al., 2019; Alberti et al., 2019), we fine-tune a pre-trained masked LM as a prefix LM (Raffel et al., 2019) to predict a question token qt given (1) a prompt p1:N: a tokenized textual context with special tokens delimiting an answer span, and (2) question tokens q1:t−1, if any, that have already been generated for the given prompt in a left-to-right order. A special separator token separates the question prefix from the prompt. The prompt is encoded using bidirectional attention and question tokens using causal (left-only) attention. We choose RoBERTa as our pre-trained model because of its extended pre-training on large amounts of text (Liu et al., 2019). Our implementation of the QG model is based on Hugging Face’s (Wolf et al., 2019) PyTorch implementation of RoBERTa. Fine-Tuning: For each QG training example, the model is asked to predict a single question token qt given the prompt p1:N, the previous question tokens q1:t−1 (teacher-forced), and the mask m at timestep t. All questions end with an EOS token that marks the end of generation. Training attempts to minimize the masked LM loss, i.e., the negative log-likelihood of the GT token qt as the prediction for m in position t: losst = −log P(qt | p1:N, q1:t−1, m) Inference: During generation, the fine-tuned RoBERTa QG model outputs a probability distribution over the entire vocabulary at each question timestep t. Top-p nucleus sampling (NS@p henceforth) samples from the (re-normalized) categorical distribution PN of the nucleus N, which is the smallest subset of vocabulary items that has (1) a cumulative probability mass greater than p, and (2) the highest probability among all such subsets: ˆqt ⇠PN(qt | p1:N, q1:t−1, m) By restricting the pool to a high-likelihood region of the vocabulary, compared to top-k sampling, NS reduces the chances of generating low-probability items when the original distribution is peaked at one or a few items. Our question generation works by repeated nucleus sampling of question tokens until ˆqt = EOS. 3 Experiments and Results To test the effect of QG diversity on QA, we generate questions with both nucleus sampling and beam search from a number of different QG models and compare their performance. General Setup: Considering that performances of different generation methods may vary across models of different capacities, we train eight QG models, each uniquely characterized by: (1) its size (# of parameters), and (2) the amount of training data it was fine-tuned on. The two model sizes are those of RoBERTa: base (125M parameters) and large (355M parameters). For fine-tuning we use the train set of the SQuAD1 split by Du et al. (2017).2 This is a three-way split of the public portion of SQuAD1 widely adopted in QG literature, with approximately 76k train, 18k dev and 12k test (prompt, question) pairs. We draw varying amounts of samples (ranging from 5% to 100%) at random from the train set to fine-tune each model on, simulating different points on the low- to high-resource 2https://github.com/xinyadu/nqg/blob/master/data/raw/ 5653 %train generator B1 R4 MT QA F1 B1 R4 MT QA F1 5 b = 5 33.9 7.9 39.1 81.1 35.9 8.5 40.7 83.2 p = .1 32.3 6.2 36.8 80.6 34.1 7.1 38.8 82.7 p = .5 32.0 6.1 36.4 81.0 33.8 7.0 38.3 82.8 p = .75 30.1 5.1 34.1 81.3 32.3 6.2 36.5 83.1 p = .95 26.5 3.9 29.7 81.6 28.7 4.6 31.9 83.1 20 b = 5 37.2 10.5 42.2 82.1 38.7 11.2 43.3 83.9 p = .1 35.9 9.0 40.9 82.8 37.6 9.8 42.3 84.3 p = .5 35.5 8.7 40.4 83.0 37.4 9.7 42.1 84.5 p = .75 33.8 7.7 38.1 83.7 35.8 8.7 40.0 84.9 p = .95 30.0 5.6 33.4 83.9 31.6 6.4 35.2 85.3 50 b = 5 39.1 11.9 44.4 82.8 40.6 12.6 45.4 84.3 p = .1 37.8 10.3 43.4 83.6 39.6 11.2 44.7 84.8 p = .5 37.4 10.0 42.9 83.8 39.4 11.1 44.4 84.9 p = .75 35.4 8.8 40.2 84.3 38.2 10.3 42.8 85.3 p = .95 31.4 6.3 35.2 84.8 33.6 7.5 37.2 85.7 100 b = 5 40.3 12.6 45.8 83.6 41.6 13.4 46.7 84.5 p = .1 38.9 11.0 44.6 83.9 40.6 12.1 46.1 84.9 p = .5 38.5 10.7 44.1 84.3 40.3 11.9 45.7 85.0 p = .75 36.7 9.6 41.7 84.8 38.8 10.8 43.7 85.5 p = .95 32.5 6.9 36.4 85.3 34.4 7.6 38.3 86.1 base model large model Table 1: Performance of beam search (BEAM) (b = 5) and nucleus sampling (NS@p; p 2 {.1, .5, .75, .95}) on the SQuAD-Du dataset. (Bold: best, underlined: worst). NS yields stronger QA results than BEAM but lower BLEU, ROUGE and METEOR scores. Moreover, QA performance of NS improves with the nucleus probability mass p. spectrum. Each model is trained for two epochs with a learning rate of 2e-5 and a batch size of 96. In-Domain Experiments: With each QG model, we generate questions for all prompts in the SQuAD1-Du dev set. These questions are first evaluated using existing generation metrics: BLEU, ROUGE and METEOR. To extrinsically evaluate on QA, we then (1) fine-tune a BERT (Devlin et al., 2019) whole-word-masked (wwm) LM for QA on the generated dev examples from each model, and (2) evaluate on test. For each of the eight QG models, we evaluate beam search (BEAM henceforth) and NS@p for different values of p. Our BEAM experiments with the RoBERTa-base model did not show significant performance differences between beam sizes 5 and 10, therefore we report results only for b = 5 in this paper. An important point to note here is that given paragraph-long input prompts in QG for QA, where large numbers of synthetic examples may also be needed in many practical use cases, large beam sizes can become prohibitively expensive from a computational standpoint for transformerbased generators. For NS, we evaluate with p 2 {.1, .5, .75, .95}. Among these, p = .1 closely approximates greedy decoding, as we observed for all models an average nucleus size of practically 1 in this setup. We also set the maximum number of vocabulary items in a nucleus to 20, which even the largest p values rarely reached in our experiments. Table 1 shows performances (mean over five different seeds) of all generators in BLEU-1 (B1), ROUGE-4 (R4) and METEOR (MT), the variant in each metric family that showed the highest correlation with downstream QA performance. We also show QA performances measured by SQuAD’s official F1 score metric, which computes the degree of lexical overlap between the predicted and the target answer. As expected, model performance improves with both model size and # of training instances, both in intrinsic evaluation and on QA. Importantly, however, while BEAM has the best intrinsic evaluation results for all eight models, it is competitive in QA only in the lowest-resource setup (5% training data). On the other hand, [email protected] has the lowest QG but the highest QA scores, especially when sufficient training data is available (20% or more). Note that in these experiments we generate a single question per prompt; yet generation diversity across different prompts yields higher-quality QA training data for NS, which is also a faster alternative to BEAM. Sampling five questions per prompt from the large-100% model with [email protected] provides additional improvement (F1 = 86.4). Out-of-Domain Experiments: As we increase p to make generation more diverse, the chances of NS@p drawing less likely candidates and thus 5654 model-%train generator R1 QA F1 base-20 b = 5 34.6 56.6 p = .1 34.6 56.3 p = .5 34.2 57.1 p = .75 32.4 57.5 p = .95 28.9 58.4 base-100 b = 5 37.9 57.5 p = .1 37.9 58.4 p = .5 37.6 59.2 p = .75 35.7 60.4 p = .95 31.5 61.3 large-20 b = 5 36.3 60.4 p = .1 36.3 59.9 p = .5 36.1 59.7 p = .75 34.7 60.8 p = .95 30.9 60.6 large-100 b = 5 39.1 60.6 p = .1 39.2 61.5 p = .5 39.0 61.9 p = .75 37.5 62.1 p = .95 33.4 63.8 Table 2: Despite lower ROUGE scores, diverse QG with nucleus sampling improves QA results over beam search in zero-shot out-of-domain generation for NewsQA. generating incorrect questions also go up. In Table 1, the gains in QA due to QG diversity are generally greater than any drop in performance likely due to decreased accuracy. To find out if the same holds in a more challenging out-of-domain setup, we perform a zero-shot application (i.e., with no further fine-tuning) of four of the above SQuAD-trained QG models to NewsQA, a reading comprehension dataset of CNN news articles (Trischler et al., 2017). Table 2 shows results on the answerable subset of NewsQA, with 76k train (from which we extract our QG prompts) and 4k test (used for QA evaluation) samples: while the absolute scores are lower than those in SQuAD, the relative performances of BEAM and NS are similar both in intrinsic (the best predictor of QA performance for NewsQA was ROUGE-4) and extrinsic (QA F1) evaluation. Comparison with and Augmentation of Human Generation: To assess the quality of our generated questions in absolute terms, in Table 3 we compare the QA performances of the best QG model above (large-100%, [email protected]) and corresponding human annotations (GT). Impressively, in-domain model performance on QA is very similar to that of GT, while zero-shot score on NewsQA is also within roughly 4 points of GT. We also evaluate the generator’s ability to augment human-generated questions. Taking an approach similar to prior augmentation experiments dataset train source QA F1 SQuAD1-Du GT (dev) 86.3 SYNTH 86.1 5⇥-SYNTH 86.4 SYNTH* + GT 88.6 NewsQA GT (train) 67.9 SYNTH 63.8 SYNTH* + GT 69.2 Table 3: Diverse QG (SYNTH; [email protected]) shows impressive QA results compared to human annotation (GT), and in augmenting GT (SYNTH* + GT). (Dong et al., 2019; Alberti et al., 2019), we generate a large synthetic dataset SYNTH* of 4 million examples from Wikipedia passages. The answer spans in these examples are extracted from their corresponding passages using a separate QA model which we train on ten SQuAD question types (instead of full-length questions): what, which, where, who, when, why, how, how many, how much, and how long. SYNTH* is used to fine-tune a BERTwwm LM for QA, which is finally fine-tuned on the target datasets (SQuAD1-Du, NewsQA). As Table 3 shows, SYNTH* achieves 1.3–2.3 absolute points improvements for the high-performance large BERT-wwm model. Summary of Results: The above results empirically show that given enough training data and sufficiently powerful QG models: (1) diverse QG leads to strong in-domain and out-of-domain QA training, (2) asking the “most likely” question (i.e., beam search) every time is less useful, and (3) existing generation metrics are inadequate for evaluating diverse question generators as sources of QA training examples. 4 Intrinsic Evaluation of Diverse QG To better understand the performance of existing generation metrics as measures of diverse QG, we take the set of all 32 samplers in Table 1 (e.g., base-100%[email protected]) and randomly generate a large number (100k) of subsets, each consisting of n samplers (2 n 32) to be evaluated. We assign each n (# of samplers) to a bin and measure performances of QG metrics separately in each bin. The process is repeated for Table 2. Note that the member sets of a given bin, say n = 5, all contain the same number of generators (5), but the actual selection of generators are generally different in different members of a bin. This setup allows us to evaluate a varying number of generators with different capacities and performance, and to average 5655 Figure 2: Performances of existing and proposed generation metrics as measures of diverse QG for QA. The proposed metric shows strong correlations (Spearman’s ⇢> 90%) with QA F1 in both in-domain and out-ofdomain evaluation. over a large number of experiments. Figure 2 shows for all bins a rather poor, for some bins negative, median Spearman’s ⇢score between the best QG metric (SQuAD1-Du: ROUGE4, NewsQA: ROUGE-1) and downstream QA F1. These results provide quantitative confirmation that ROUGE and similar metrics are inadequate evaluators of diverse QG for QA due to their sole focus on accuracy with respect to available GTs. This leads us to our final research question: How to intrinsically measure the overall quality of QG for QA under diverse nucleus sampling? Given the categorical distribution PN of vocabulary items in a model’s nucleus N, we propose to measure both its accuracy (relative to GT) and diversity of generation. Accuracy: Similarly to LM perplexity, for timestep t of evaluation example s, we take the probability PN(qs,t | p, qs,1:t−1) of the model (more precisely, its nucleus N) generating the GT token qs,t, given prompt p and GT history qs,1:t−1. We then average over all evaluation (s, t) pairs to compute model accuracy P(GT). Diversity: An intuitive measure of the diversity of a model’s nucleus N is the average entropy of PN over all evaluation timesteps. However, entropy is an unbounded measure, and has a non-linear inverse growth relative to our proposed accuracy metric, which makes their mathematical combination difficult. We instead rely on the observation that as we increase p in NS@p to make generation more diverse, the cardinality of N also goes up, on average, and so does the probability P(GT 2 N) that N contains the GT token. Our experiments on both datasets showed that this measure of diversity, computed as the proportion of times N was found to include GT across all timesteps in the QG evaluation data, has high positive correlations with the entropy of PN (Pearson’s r: 98%–99%, Spearman’s ⇢: 87%–95%). Note that unlike the accuracy metric P(GT), at each timestep t, the diversity metric P(GT 2 N) is Boolean: the GT token is either in N or it is not. But importantly, its average across many evaluation timesteps is a probability measure of diversity, which enables a straightforward convex combination with our proposed accuracy metric. Our final QG metric is a weighted sum of accuracy and diversity: w·P(GT)+(1−w)·P(GT 2 N), where w 2 [0, 1] is a tunable parameter reflecting the weight of accuracy relative to diversity. In our experiments, this metric outperforms all existing metrics by a large margin for a wide range of w values. In Figure 2, the median Spearman’s ⇢score between this metric and QA F1 in both in-domain (w=.7) and out-of-domain (w=.8) evaluation is over 90% for all bins. We observe similar performance differences between the proposed and existing metrics with Pearson’s r. Given the scope of this paper, we evaluate the combined metric only on QG, but the underlying ideas apply to diverse text generation in general. Further experiments are necessary to evaluate the metric on other generation tasks. 5 Conclusion While diversity of generation has received significant attention in other text generation problems (e.g., dialog), we show in this paper that it is also an important and measurable dimension of quality in question generation for QA. We hope that our work will encourage further exploration of diversity-promoting QG and its evaluation. Possible future directions include a systematic study of different aspects of QG diversity (e.g., lexical and factual) and controlled diversification of individual aspects in generation. Acknowledgments We thank the anonymous reviewers for their valuable feedback. 5656 References Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic QA Corpora Generation with Roundtrip Consistency. In ACL. Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning End-to-End Goal-Oriented Dialog. In ICLR. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating Sentences from a Continuous Space. In ICLR. Kris Cao and Stephen Clark. 2017. Latent Variable Dialogue Models and their Diversity. In EACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified Language Model Pre-training for Natural Language Understanding and Generation. In NeurIPS. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to Ask: Neural Question Generation for Reading Comprehension. In ACL. Zhihao Fan, Zhongyu Wei, Piji Li, Yanyan Lan, and Xuanjing Huang. 2018. A Question Type Driven Framework to Diversify Visual Question Generation. In IJCAI. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2020. The Curious Case of Neural Text Degeneration. In ICLR. Unnat Jain, Ziyu Zhang, and Alexander Schwing. 2017. Creativity: Generating Diverse Questions using Variational Autoencoders. In CVPR. Diederik P. Kingma and Max Welling. 2014. AutoEncoding Variational Bayes. In ICLR. David Lindberg, Fred Popowich, John Nesbit, and Phil Winne. 2013. Generating Natural Language Questions to Support Learning On-Line. In Proceedings of the European Workshop on Natural Language Generation. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Unpublished manuscript. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv preprint. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In EMNLP. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A Machine Comprehension Dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP. Ashwin K Vijayakumar, Michael Cogswell, Ramprasaath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse Beam Search for Improved Description of Complex Scenes. In AAAI. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art Natural Language Processing. arXiv preprint. Shiyue Zhang and Mohit Bansal. 2019. Addressing Semantic Drift in Question Generation for SemiSupervised Question Answering. In EMNLP.
2020
500
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5657–5667 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5657 Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering Hao Cheng∗, Ming-Wei Chang†, Kenton Lee†, Kristina Toutanova† ∗Microsoft Research [email protected] †Google Research {mingweichang, kentonl, kristout}@google.com Abstract We address the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings. We compare previously used probability space and distant supervision assumptions (assumptions on the correspondence between the weak answer string labels and possible answer mention spans). We show that these assumptions interact, and that different configurations provide complementary benefits. We demonstrate that a multiobjective model can efficiently combine the advantages of multiple assumptions and outperform the best individual formulation. Our approach outperforms previous state-of-the-art models by 4.3 points in F1 on TriviaQA-Wiki and 1.7 points in Rouge-L on NarrativeQA summaries.1 1 Introduction Distant supervision assumptions have enabled the creation of large-scale datasets that can be used to train fine-grained extractive short answer question answering (QA) systems. One example is TriviaQA (Joshi et al., 2017). There the authors utilized a pre-existing set of Trivia questionanswer string pairs and coupled them with relevant documents, such that, with high likelihood, the documents support answering the questions (see Fig. 1 for an illustration). Another example is the NarrativeQA dataset (Koˇcisk´y et al., 2018), where crowd-sourced abstractive answer strings were used to weakly supervise answer mentions in the text of movie scripts or their summaries. In this work, we focus on the setting of documentlevel extractive QA, where distant supervision is specified as a set A of answer strings for an input question-document pair. 1Based on the TriviaQA-Wiki leaderboard, our approach was the SOTA when this work was submitted on Dec 04, 2019. Question: How is Joan Molinsky better known? Answer: Joan Rivers : { Joan Rivers, Diary of a Mad Diva } P1: Joan Alexandra Molinsky, known professionally as Joan Rivers, was an American comedian, actress, writer, producer, and television host. … Joan Rivers was strongly influenced by Lenny Bruce. … P2: … She received a Grammy Award for Best Spoken Word Album for her book, Diary of a Mad Diva. … P3: Joan Alexandra Molinsky was born on June 8, 1933, in Brooklyn, New York. … Before entering show business, she chose Joan Rivers as her stage name. … Question: Where do the dancers purify themselves? Answer: in the spring at mount helicon mount helicon : { in the spring at mount helicon, mount helicon } P1: The play begins with three pages … P2: The courtiers … She sentences them to make reparation and to purify themselves by bathing in the spring at mount helicon. The figure of Actaeon in the play may represent ... TriviaQA NarrativeQA Figure 1: TriviaQA and NarrativeQA examples. In the TriviaQA example, there are three occurrences of the original answer string “Joan Rivers” (blue), and one alternate but incorrect alias “Diary of a Mad Diva” (purple). Only two “Joan Rivers” mentions (shown in blue boxes) support answering the question. In the NarrativeQA example, there are two answer stings in A: “in the spring at mount helicon” (blue) and “mount helicon” (orange), with the latter being a substring of the former. Both mentions in P2 are correct answer spans. Depending on the data generation process, the properties of the resulting supervision from the sets A may differ. For example, the provided answer sets in TriviaQA include aliases of original trivia question answers, aimed at capturing semantically equivalent answers but liable to introducing semantic drift. In Fig. 1, the possible answer string “Diary of a Mad Diva” is related to “Joan Rivers”, but is not a valid answer for the given question. On the other hand, the sets of answer strings in NarrativeQA are mostly valid since they have high overlap with human-generated answers for the given question/document pair. As shown in Fig. 1, “in the spring at mount helicon” and “mount helicon” are both valid answers with relevant mentions. In this case, the annotators chose answers 5658 that appear verbatim in the text but in the more general case, noise may come from partial phrases and irrelevant mentions. While distant supervision reduces the annotation cost, increased coverage often comes with increased noise (e.g., expanding entity answer strings with aliases improves coverage but also increases noise). Even for fixed document-level distant supervision in the form of a set of answers A, different interpretations of the partial supervision lead to different points in the coverage/noise space and their relative performance is not well understood. This work systematically studies methods for learning and inference with document-level distantly supervised extractive QA models. Using a BERT (Devlin et al., 2019) joint question-passage encoder, we study the compound impact of: • Probability space (§2): ways to define the model’s probability space based on independent paragraphs or whole documents. • Distant supervision assumptions (§3): ways to translate the supervision from possible strings A to possible locations of answer mentions in the document. • Optimization and inference (§4): ways to define corresponding training objectives (e.g. Hard EM as in Min et al. (2019) vs. Maximum Marginal Likelihood) and make answer string predictions during inference (Viterbi or marginal inference). We show that the choice of probability space puts constraints on the distant supervision assumptions that can be captured, and that all three choices interact, leading to large differences in performance. Specifically, we provide a framework for understanding different distant supervision assumptions and the corresponding trade-off among the coverage, quality and strength of distant supervision signal. The best configuration depends on the properties of the possible annotations A and is thus data-dependent. Compared with recent work also using BERT representations, our study show that the model with most suitable probabilistic treatment achieves large improvements of 4.6 F1 on TriviaQA and 1.7 Rouge-L on NarrativeQA respectively. Additionally, we design an efficient multi-loss objective that can combine the benefits of different formulations, leading to significant improvements in accuracy, surpassing the best previously reported results on the two studied BERT … 𝒑𝟏 𝒒 (“Joan Rivers”| 𝒑𝟏) Begin and End Probabilities (𝑷𝒃, 𝑷𝒆) (“Joan Rivers”| 𝒑𝟑) … … … Span Probabilities (𝑷𝒔) String Probabilities (𝑷𝒂) 𝑷𝒂(“Joan Rivers”) … 𝑷𝒂(“Diary of a Mad Diva”) Contextualized Representation 𝚵 𝚵 BERT 𝒑𝟑 𝒒 … … … Figure 2: The document-level QA model as used for test-time inference. The lower part is a BERT-based paragraph-level answer scoring component, and the upper part illustrates the probability aggregation across answer spans sharing the same answer string. Ξ refers to either a sum or a max operator. In the given example, “John Rivers” is derived from two paragraphs. tasks. Results are further strengthened by transfer learning from fully labeled short-answer extraction data in SQuAD 2.0 (Rajpurkar et al., 2018), leading to a final state-of-the-art performance of 76.3 F1 on TriviaQA-Wiki and 62.9 on the NarrativeQA summaries task.2 2 Probability Space Here, we first formalize both paragraph-level and document-level models, which have been previously used for document-level extractive QA. Typically, paragraph-level models consider each paragraph in the document independently, whereas document models integrate some dependencies among paragraphs. To define the model, we need to specify the probability space, consisting of a set of possible outcomes and a way to assign probabilities to individual outcomes. For extractive QA, the probability space outcomes consist of token positions of answer mention spans. The overall model architecture is shown in Fig. 2. We use BERT (Devlin et al., 2019) to derive representations of document tokens. As is standard in state-of-the-art extractive QA models (Devlin et al., 2019; Lee et al., 2019; Min et al., 2019), the BERT model is used to encode a pair of a given question with one paragraph from a given document into neural text representations. These representations are then used to 2The code is available at https://github.com/ hao-cheng/ds_doc_qa 5659 define scores/probabilities of possible answer begin and end positions, which are in turn used to define probabilities over possible answer spans. Then the answer string probabilities can be defined as the aggregation over all possible answer spans/mentions. In the following, we show that paragraph-level and document-level models differ only in the space of possible outcomes and the way of computing answer span probabilities from answer position begin and end scores. Scoring answer begin and end positions Given a question q and a document d consisting of K paragraphs p1, . . . , pK, the BERT encoder produces contextualized representations for each question-paragraph pair (q, pk). Specifically, for each token position ik in pk, the final hidden vector h(i,k) ∈Rd is used as the contextualized token embedding, where d is the vector dimension. The span-begin score is computed as sb(ik) = wT b h(i,k) using a weight vector wb ∈Rd. The span-end score se(jk) is defined in the same way. The probabilities for a start position ik and an end position jk are Pb(ik) = exp(sb(ik)) Zb , (1) Pe(jk) = exp(se(jk)) Ze , (2) where Zb, Ze are normalizing factors, depending on the probability space definition (detailed below). The probability of an answer span from ik to jk is defined as Ps(ik, jk) = Pb(ik)Pe(jk). The partition functions Zb and Ze depend on whether we use a paragraph-level or documentlevel probability space. Paragraph-level model In paragraph level models, we assume that for a given question against a document d, each of its paragraphs p1, . . . , pK independently selects a pair of answer positions (ik, jk), which are the begin and end of the answer from paragraph pk. In the case that pk does not support answering the question q, special NULL positions are selected (following the SQuAD 2.0 BERT implementation3). Thus, the set of possible outcomes Ωin the paragraph-level probability space is the set of lists of begin/end position pairs, one from each paragraph: {[(i1, j1), . . . , (iK, jK)]}, where ik 3https://github.com/google-research/bert and jk range over positions in the respective paragraphs. The answer positions in different paragraphs are independent, and the probability of each paragraph’s answer begin and end is computed by normalizing over all possible positions in that paragraph, i.e., Zk b = X i∈Ik∪{NULL} exp(sb(i)), (3) Zk e = X j∈Ik∪{NULL} exp(se(j)), (4) where Ik is the set of all positions in the paragraph pk. The probability of an answer begin at ik is Pb(ik) = exp(sb(ik))/Zbk and the probability of an end at jk is defined analogously. The probability of a possible answer position assignment for the document d is then defined as P([(i1, j1), . . . , (iK, jK)]) = Q k Pb(ik)Pe(jk). As we can see from the above definition, due to the independence assumption, models using paragraph-level normalization do not learn to directly calibrate candidate answers from different paragraphs against each other. Document-level model In document-level models, we assume that for a given question against document d, a single answer span is selected (as opposed to one for each paragraph in the paragraph-level models).4 Here, the possible positions in all paragraphs are a part of a joint probability space and directly compete against each other. In this case, Ωis the set of token spans {(i, j)}, where i and j are the begin and end positions of the selected answer. The normalizing factors are therefore aggregated over all paragraphs, i.e., Z∗ b = K X k=1 X i∈Ik exp(sb(i)), (5) Z∗ e = K X k=1 X j∈Ik exp(se(j)). (6) Compared with (3) and (4), since there is always a valid answer in the document for the tasks studied here, NULL is not necessary for documentlevel models and thus can be excluded from the 4In this paper, we focus on datasets where the document is known to contain a valid answer. It is straightforward to remove this assumption and consider document-level NULL for future work. 5660 Coverage Quality Strength H1 ↗ ↘ ↗ H2 −→ −→ −→ H3 ↘ ↗ ↘ Table 1: Distant supervision assumptions and their corresponding tradeoffs. (↗) indicates highest value, (→) medium, and (↘) lowest value. inner summation of (5) and (6). The probability of a possible outcome, i.e. an answer span, is P(i, j) = exp(sb(i) + se(j))/(Z∗ b Z∗ e). 3 Distant Supervision Assumptions There are multiple ways to interpret the distant supervision signal from A as possible outcomes in our paragraph-level and document-level probability spaces, leading to corresponding training loss functions. Although several different paragraphlevel and document-level losses (Chen et al., 2017; Kadlec et al., 2016; Clark and Gardner, 2018; Lin et al., 2018; Min et al., 2019) have been studied in the literature, we want to point out that when interpreting the distant supervision signal, there is a tradeoff among multiple desiderata: • Coverage: maximize the number of instances of relevant answer spans, which we can use to provide positive examples to our model. • Quality: maximize the quality of annotations by minimizing noise from irrelevant answer strings or mentions. • Strength: maximize the strength of the signal by reducing uncertainty and pointing the model more directly at correct answer mentions. We introduce three assumptions (H1, H2, H3) for how the distant supervision signal should be interpreted, which lead to different tradeoffs among the desiderata above (see Table 1). We begin with setting up additional useful notation. Given a document-question pair (d, q) and a set of answer strings A, we define the set of A-consistent token spans YA in d as follows: for each paragraph pk, span (ik, jk) ∈Yk A if and only if the string spanning these positions in the paragraph is in A. For paragraph-level models, if for paragraph pk the set Yk A is empty, we redefine Yk A to be {NULL}. Similarly, we define the set of Aconsistent begin positions Yk b,A as the start positions of consistent spans: Yk b,A = ∪(i,j)∈Yk A{i}. Yk e,A for A-consistent end positions is defined analogously. In addition, we term an answer span (i, j) correct for question q, if its corresponding answer string is a correct answer to q, and the context of the specific mention of that answer string from positions i to j entails this answer. Similarly, we term an answer begin/end position correct if there exists a correct answer span starting/ending at that position. H1: All A-consistent answer spans are correct. While this assumption is evidently often incorrect (low on the quality dimension ↘), especially for TriviaQA, as seen from Fig. 1, it provides a large number of positive examples and a strong supervision signal (high on coverage ↗and strength ↗). We include this in our study for completeness. H1 translates differently into possible outcomes for corresponding models depending on the probability space (paragraph or document). Paragraphlevel models select multiple answer spans, one for each paragraph, to form a possible outcome. Thus, multiple A-consistent answer spans can occur in a single outcome, as long as they are in different paragraphs. For multiple A-consistent answer spans in the same paragraph, these can be seen as mentions that can be selected with equal probability (e.g., by different annotators). Document-level models select a single answer span in the document and therefore multiple A-consistent answer spans can be seen as occurring in separate annotation events. Table 2 shows in row one the logprobability of outcomes consistent with H1. H2: Every positive paragraph has a correct answer in its A-consistent set. Under this assumption, each paragraph with a non-empty set of Aconsistent spans (termed a positive paragraph) has a correct answer. As we can see from the TriviaQA example in Fig. 1, this assumption is correct for the first and third paragraph, but not the second one, as it only contains a mention of a noisy answer alias. This assumption has medium coverage (→), as it generates positive examples from multiple paragraphs but does not allow multiple positive mentions in the same paragraph. It also decreases noise (higher quality →) (e.g. does not claim that all the mentions of “Joan Rivers” in the first paragraph support answering the question). The strength of the supervision signal is weakened (→) relative to H1, as now the model needs to figure out which of the multiple A-consistent mentions in each paragraph is correct. H2 has two variations: correct span, assuming 5661 Span-Based Position-Based H1 P k∈K P (ik,jk)∈Yk A log Ps(ik, jk) P k∈K P ik∈Yk b,A log Pb(ik) + P k∈K P jk∈Yk e,A log Pe(jk) H2 P k∈K log Ξ(ik,jk)∈Yk APs(ik, jk) P k∈K log Ξik∈Yk b,APb(ik) + P k∈K log Ξjk∈Yk e,APe(jk) H3 log Ξk∈KΞ(ik,jk)∈Yk APs(ik, jk) log Ξk∈KΞik∈Yk b,APb(ik) + log Ξk∈KΞjk∈Yk e,APe(jk) Table 2: Objective functions for a document-question pair (d, q) under different distant supervision assumptions. Ξ refers to P and max for MML and HardEM, respectively. that one of the answer spans (ik, jk) in Yk A is correct, and correct position, assuming that the paragraph has a correct answer begin position from Yk b,A and a correct answer end position from Yk e,A, but its selected answer span may not necessarily belong to Yk A. For example, if A contains {abcd, bc}, then abc would have correct begin and end, but not be a correct span. It does not make sense for modeling to assume the paragraph has correct begin and end positions instead of a correct answer span (i.e., we don’t really want to get inconsistent answers like abc above), but given that our probabilistic model assumes independence of begin and end answer positions, it may not be able to learn well with span-level weak supervision. Some prior work (Clark and Gardner, 2018) uses an H2 position-based distant supervision assumption with a pair-paragraph model akin to our document-level ones. Lin et al. (2018) use an H2 span-based distant supervision assumption. The impact of position vs. span-based modeling of the distant supervision is not well understood. As we will see in the experiments, for the majority of settings, position-based weak supervision is more effective than span-based for our model. For paragraph-level and document-level models, H2 corresponds differently to possible outcomes. For paragraph models, one outcome can select answer spans in all positive paragraphs and NULL in negative ones. For document-level models, we view answers in different paragraphs as outcomes of multiple draws from the distribution. The identity of the particular correct span or begin/end position is unknown, but we can compute the probability of the event comprising the consistent outcomes. Table 2 shows the log-probability of the outcomes consistent with H2 in row two (right for span-based and left for position-based interpretation, when plugging in P for Ξ). H3: The document has a correct answer in its A-consistent set YA. This assumption posits that the document has a correct answer span (or begin/end positions), but not every positive paragraph needs to have one. It further improves supervision quality (↗), because for example, it allows the model to filter out the noise in paragraph two in Fig. 1. Since the model is given a choice of any of the A-consistent mentions, it has the capability to assign zero probability mass on the supervisionconsistent mentions in that paragraph. On the other hand, H3 has lower coverage (↘) than H1 and H2, because it provides a single positive example for the whole document, rather than one for each positive paragraph. It also reduces the strength of the supervision signal (↘), as the model now needs to figure out which mention to select from the larger document-level set YA. Note that we can only use H3 coupled with a document-level model, because a paragraph-level model cannot directly tradeoff answers from different paragraphs against each other, to select a single answer span from the document. As with the other distant supervision hypotheses, spanbased and position-based definitions of the possible consistent outcomes can be formulated. The log-probabilities of these events are defined in row three of Table 2, when using P for Ξ. H3 was used by Kadlec et al. (2016) for cloze-style distantly supervised QA with recurrent neural network models. The probability-space (paragraph vs. documentlevel) and the distant supervision assumption (H1, H2, and H3, each position or span-based) together define our interpretation of the distant supervision signal resulting in definitions of probability space outcomes consistent with the supervision. Next, we define corresponding optimization objectives to train a model based on this supervision and describe the inference methods to make predictions with a trained model. 4 Optimization and Inference Methods For each distant supervision hypothesis, we maximize either the marginal log-likelihood of A5662 consistent outcomes (MML) or the log-likelihood of the most likely outcome (HardEM). The latter was found effective for weakly supervised tasks including QA and semantic parsing by Min et al. (2019). Table 2 shows the objective functions for all distant supervision assumptions, each comprising a pairing of a distant supervision hypothesis (H1, H2, H3) and position-based vs. span-based interpretation. The probabilities are defined according to the assumed probability space (paragraph or document). In the table, K denotes the set of all paragraphs in the document, and Yk denotes the set of weakly labeled answer spans for the paragraph pk (which can be {NULL} for paragraph-level models). Note that span-based and position-based objective functions are equivalent for H1 because of the independence assumption, i.e. Ps(ik, jk) = Pb(ik)Pe(jk). Inference: Since the task is to predict an answer string rather than a particular mention for a given question, it is potentially beneficial to aggregate information across answer spans corresponding to the same string during inference. The score of a candidate answer string can be obtained as Pa(x) = Ξ(i,j)∈X Ps(i, j), where X is the set of spans corresponding to the answer string x, and Ξ can be either P or max.5 It is usually beneficial to match the training objective with the corresponding inference method, i.e. MML with marginal inference Ξ = P, and HardEM with max (Viterbi) inference Ξ = max. Min et al. (2019) showed HardEM optimization was useful when using an H2 span-level distant supervision assumption coupled with max inference, but it is unclear whether this trend holds when P inference is useful or other distant supervision assumptions perform better. We therefore study exhaustive combinations of probability space, distant supervision assumption, and training and inference methods. 5 Experiments 5.1 Data and Implementation Two datasets are used in this paper: TriviaQA (Joshi et al., 2017) in its Wikipedia formulation, and NarrativeQA (summaries setting) (Koˇcisk´y et al., 2018). Using the same preprocessing as 5For inference with marginal (P) scoring, we use an approximate scheme where we only aggregate probabilities of candidates strings generated from a 20-best list of begin/end answer positions for each paragraph. Clark and Gardner (2018) for TriviaQA-Wiki6, we only keep the top 8 ranked paragraphs up to 400 tokens for each document-question pair for both training and evaluation. Following Min et al. (2019), for NarrativeQA we define the possible answer string sets A using Rouge-L (Lin, 2004) similarity with crouwdsourced abstractive answer strings. We use identical data preprocessing and the evaluation script provided by the authors. In this work, we use the BERT-base model for text encoding and train our model with the default configuration as described in (Devlin et al., 2019), fine-tuning all parameters. We fine-tune for 3 epochs on TriviaQA and 2 epochs on NarrativeQA. 5.2 Optimization and Inference for Latent Variable Models Here we look at the cross product of optimization (HardEM vs MML) and inference (Max vs Sum) for all distant supervision assumptions that result in models with latent variables. We therefore exclude H1 and look at the other two hypotheses, H2 and H3, each coupled with a span-based (Span) or position-based (Pos) formulation and a paragraphlevel (P) or a document level (D) probability space. The method used in Min et al. (2019) corresponds to span-based H2-P with HardEM training and Max inference. The results are shown in Fig. 3. First, we observe that inference with Sum leads to significantly better results on TriviaQA under H2-P and H2-D, and slight improvement under H3-D. On NarrativeQA, inference with Max is better. We attribute this to the fact that correct answers often have multiple relevant mentions for TriviaQA (also see §5.6), whereas for NarrativeQA this is rarely the case. Thus, inference with Sum in NarrativeQA could potentially boost the probability of irrelevant frequent strings. Consistent with (Min et al., 2019), we observe that span-based HardEM works better than spanbased MML under H2-P, with a larger advantage on NarrativeQA than on TriviaQA. However, under H2-D and H3-D, span-based MML performs consistently better than span-based HardEM. For position-based objectives, MML is consistently better than HardEM (potentially because HardEM may decide to place its probability mass on beginend position combinations that do not contain mentions of strings in A). Finally, it can be ob6https://github.com/allenai/document-qa 5663 0.62 0.64 0.66 0.68 0.7 0.72 0.74 0.76 HardEM-Span HardEM-Pos MML-Span MML-Pos HardEM-Span HardEM-Pos MML-Span MML-Pos HardEM-Span HardEM-Pos MML-Span MML-Pos H2-P H2-D H3-D Max Sum (a) TriviaQA F1 0.48 0.5 0.52 0.54 0.56 0.58 0.6 0.62 HardEM-Span HardEM-Pos MML-Span MML-Pos HardEM-Span HardEM-Pos MML-Span MML-Pos HardEM-Span HardEM-Pos MML-Span MML-Pos H2-P H2-D H3-D Max Sum (b) NarrativeQA Rouge-L Figure 3: Comparison of different optimization and inference choices grouped by distant supervision hypothesis based on dev set results for TriviaQA and NarrativeQA. served that under each distant supervision hypothesis/probability space combination, the positionbased MML is always the best among the four objectives. Position-based objectives may perform better due to the independence assumptions for begin/end positions of the model we use and future work may arrive at different conclusions if position dependencies are integrated. Based on this thorough exploration, we focus on experimenting with position-based objectives with MML for the rest of this paper. 5.3 Probability Space and Distant Supervision Assumptions In this subsection, we compare probability space and distant supervision assumptions. Table 3 shows the dev set results, where the upper section compares paragraph-level models (H1-P, H2P), and the lower section compares documentlevel models (H1-D, H2-D, H3-D). The performance of models with both Max and Sum inference is shown. We report F1 and Exact Match (EM) scores for TriviaQA, and Rouge-L scores for NarrativeQA. For TriviaQA, H3-D achieves significantly betObjective Infer TriviaQA NarrativeQA F1 EM Rouge-L Paragraph-level Models H1-P Max 67.9 63.3 55.3 Sum 70.4 66.0 53.6 H2-P Max 71.9 67.7 59.2 Sum 73.0 69.0 57.8 Document-level Models H1-D Max 55.8 51.0 59.4 Sum 65.2 61.2 59.1 H2-D Max 70.3 66.2 60.1 Sum 72.4 68.4 59.9 H3-D Max 75.1 70.6 59.1 Sum 75.3 70.8 59.2 Table 3: Comparison of distant supervision hypotheses using MML-Pos objectives on TriviaQA and NarrativeQA dev sets. ter results than other formulations. Only H3D is capable of “cleaning” noise from positive paragraphs that don’t have a correct answer (e.g. paragraph two in Fig. 1), by deciding which Aconsistent mention to trust. The paragraph-level models H1-P and H2-P outperform their corresponding document-level counterparts H1-D and H2-D. This may be due to the fact that without H3, and without predicting NULL, D models do not learn to detect irrelevant paragraphs. Unlike for TriviaQA, H2-D models achieve the best performance for NarrativeQA. We hypothesize this is due to the fact that positive paragraphs that don’t have a correct answer are very rare in NarrativeQA (as summaries are relatively short and answer strings are human-annotated for the specific documents). Therefore, H3 is not needed to clean noisy supervision, and it is not useful since it also leads to a reduction in the number of positive examples (coverage) for the model. Here, document-level models always improve over their paragraph counterparts, by learning to calibrate paragraphs directly against each other. 5.4 Multi-Objective Formulations and Clean Supervision Here we study two methods to further improve weakly supervised QA models. First, we combine two distant supervision objectives in a multitask manner, i.e. H2-P and H3-D for TriviaQA, and H2-P and H2-D for NarrativeQA, chosen based on the results in §5.3. H2 objectives have higher coverage than H3 while being more susceptible 5664 Objective Clean Infer TriviaQA NarrativeQA F1 EM Rouge-L Single-objective Par X Max 71.9 67.7 59.2 Sum 73.0 69.0 57.8 ✓ Max 74.2 70.1 61.7 Sum 74.9 70.9 61.7 Doc X Max 75.1 70.6 60.1 Sum 75.3 70.8 59.9 ✓ Max 75.5 70.8 62.8 Sum 75.5 70.9 62.9 Multi-objective Par + Doc X Max 75.6 71.2 60.5 Sum 75.9 71.6 60.5 ✓ Max 75.8 71.2 63.0 Sum 76.2 71.7 63.1 Table 4: Dev set results comparing multi-objectives and clean supervison. ✓indicates the QA model is pre-trained on SQUAD. to noise. Paragraph-level models have the advantage of learning to score irrelevant paragraphs (via NULL outcomes). Note that we use the same parameters for the two objectives and the multiobjective formulation does not have more parameters and is no less efficient than the individual models. Second, we use external clean supervision from SQUAD 2.0 (Rajpurkar et al., 2018) to train the BERT-based QA model for 2 epochs. This model matches the P probability space and is able to detect both NULL and extractive answer spans. The resulting network is used to initialize the models for TriviaQA and NarrativeQA. The results are shown in Table 4. It is not surprising that using external clean supervision improves model performance (e.g. (Min et al., 2017)). We note that, interestingly, this external supervision narrows the performance gap between paragraph-level and document-level models, and reduces the difference between the two inference methods. Compared with their single-objective components, multi-objective formulations improve performance on both TriviaQA and NarrativeQA. 5.5 Test Set Evaluation Table 5 reports test set results on TriviaQA and NarrativeQA for our best models, in comparison to recent state-of-art (SOTA) models. For TriviaQA, we report F1 and EM scores on the full test set and the verified subset. For NarrativeQA, RougeTriviaQA Wiki Full Verified F1 EM F1 EM Ours (H2-P+H3-D) 76.3 72.1 85.5 82.2 w/o SQUAD 75.7 71.6 83.6 79.6 (Wang et al., 2018b) 71.4 66.6 78.7 74.8 (Clark and Gardner, 2018) 68.9 64.0 72.9 68.0 (Min et al., 2019) 67.1 – – – NarrativeQA Summary Rouge-L Ours (H2-P+H2-D) 62.9 w/o SQUAD 60.5 (Nishida et al., 2019) 59.9 w/o external data 54.7 (Min et al., 2019) 58.8 Table 5: Test set results on TriviaQA Wiki and NarrativeQA Summaries. “w/o SQUAD” refers to our best model without pretraining on SQUAD 2.0. “w/o external data” refers to the model from (Nishida et al., 2019) without using MS MARCO data (Bajaj et al., 2018). L scores are reported. Compared to recent TriviaQA SOTA (Wang et al., 2018b), our best models achieve 4.9 F1 and 5.5 EM improvement on the full test set, and 6.8 F1 and 7.4 EM improvement on the verified subset. On the NarrativeQA test set, we improve Rouge-L by 3.0 over (Nishida et al., 2019). The large improvement, even without additional fully labeled data, demonstrates the importance of selecting an appropriate probability space and interpreting the distant-supervision in a way cognizant of the properties of the data, as well as selecting a strong optimization and inference method. With external fully labeled data to initialize the model, performance is further significantly improved. 5.6 Analysis In this subsection, we carry out analyses to study the relative performance of paragraph-level and document-level models, depending on the size of answer string set |A| and the number of Aconsistent spans, which are hypothesized to correlate with label noise. We use the TriviaQA dev set and the best performing models, i.e. H2-P and H3-D with Sum inference. We categorize examples based on the size of their answer string set, |A|, and the size of their corresponding set of A-consistent spans, |I|. Specifically, we divide the data into 4 subsets and 5665 Subset |A| |I| Size H2-P H3-D ∆ Qss = 1 ≤5 2585 66.8 67.4 0.6 Qls > 1 ≤5 853 68.7 70.1 1.4 Qsl = 1 > 5 1149 82.0 84.9 2.9 Qll > 1 > 5 3034 86.3 88.4 2.1 Table 6: F1 scores on 4 subsets of TriviaQA dev, grouped by the size of their answer string sets A and corresponding set of possible mentions I. ∆indicates the improvement from H2-P to H3-D. report performance separately on each subset, as shown in Table 6. In general, we expect Qsl and Qll to be noisier due to the larger I, where Qsl potentially includes many irrelevant mentions while Qll likely contains more incorrect answer strings (false aliases). We can observe that the improvement is more significant for these noisier subsets, suggesting document-level modeling is crucial for handling both types of label noise. 6 Related Work Distant supervision has been successfully used for decades for information extraction tasks such as entity tagging and relation extraction (Craven and Kumlien, 1999; Mintz et al., 2009). Several ways have been proposed to learn with DS, e.g., multi-label multi-instance learning (Surdeanu et al., 2012), assuming at least one supporting evidence (Hoffmann et al., 2011), integration of label-specific priors (Ritter et al., 2013), and adaption to shifted label distributions (Ye et al., 2019). Recent work has started to explore distant supervision to scale up QA systems, particularly for open-domain QA where the evidence has to be retrieved rather than given as input. Reading comprehension (RC) with evidence retrieved from information retrieval systems establishes a weakly-supervised QA setting due to the noise in the heuristics-based span labels (Chen et al., 2017; Joshi et al., 2017; Dunn et al., 2017; Dhingra et al., 2017). One line of work jointly learns RC and evidence ranking using either a pipeline system (Wang et al., 2018a; Lee et al., 2018; Kratzwald and Feuerriegel, 2018) or an end-to-end model (Lee et al., 2019). Another line of work focuses on improving distantly-supervised RC models by developing learning methods and model architectures that can better use noisy labels. Clark and Gardner (2018) propose a paragraph-pair ranking objective, which has components of both our H2-P and H3-D position-based formulations. They don’t explore multiple inference methods or combinations of objectives and use less powerful representations. In (Lin et al., 2018), a coarse-to-fine model is proposed to handle label noise by aggregating information from relevant paragraphs and then extracting answers from selected ones. Min et al. (2019) propose a hard EM learning scheme which we included in our experimental evaluation. Our work focuses on examining probabilistic assumptions for document-level extractive QA. We provide a unified view of multiple methods in terms of their probability space and distant supervision assumptions and evaluate the impact of their components in combination with optimization and inference methods. To the best of our knowledge, the three DS hypotheses along with position and span-based interpretations have not been formalized and experimentally compared on multiple datasets. In addition, the multi-objective formulation is new. 7 Conclusions In this paper, we demonstrated that the choice of probability space and interpretation of the distant supervision signal for document-level QA have a large impact, and that they interact. Depending on the properties of the data, different configurations are best, and a combined multi-objective formulation can reap the benefits of its constituents. A future direction is to extend this work to question answering tasks that require reasoning over multiple documents, e.g., open-domain QA. In addition, the findings may generalize to other tasks, e.g., corpus-level distantly-supervised relation extraction. Acknowledgement Some of the ideas in this work originated from Hao Cheng’s internship with Google Research. We would like to thank Ankur Parikh, Michael Collins, and William Cohen for discussion and detailed feedback on this work, as well as other members from the Google Research Language team and the anonymous reviewers for valuable suggestions. We would also like to thank Sewon Min for generously sharing the processed data and evaluation script for NarrativeQA. 5666 References Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A human generated machine reading comprehension dataset. CoRR, abs/1611.09268. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870– 1879. Association for Computational Linguistics. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 845–855. Association for Computational Linguistics. Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Bhuwan Dhingra, Kathryn Mazaitis, and William W. Cohen. 2017. Quasar: Datasets for question answering by search and reading. CoRR, abs/1707.03904. Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur G¨uney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. CoRR, abs/1704.05179. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541–550. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611. Association for Computational Linguistics. Rudolf Kadlec, Martin Schmid, Ondˇrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 908–918. Association for Computational Linguistics. Tom´aˇs Koˇcisk´y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317– 328. Bernhard Kratzwald and Stefan Feuerriegel. 2018. Adaptive document retrieval for deep question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 576–581. Association for Computational Linguistics. Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 565–569. Association for Computational Linguistics. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81. Association for Computational Linguistics. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736– 1745. Sewon Min, Danqi Chen, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. A discrete hard EM approach for weakly supervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2844– 2857. Association for Computational Linguistics. Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question answering through transfer learning from large fine-grained supervision data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 510–517. Association for Computational Linguistics. 5667 Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011. Association for Computational Linguistics. Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, and Junji Tomita. 2019. Multi-style generative reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2273–2284. Association for Computational Linguistics. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789. Association for Computational Linguistics. Alan Ritter, Luke Zettlemoyer, Oren Etzioni, et al. 2013. Modeling missing data in distant supervision for information extraction. Transactions of the Association for Computational Linguistics, 1:367–378. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLPCoNLL ’12, pages 455–465, Stroudsburg, PA, USA. Association for Computational Linguistics. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018a. R3: Reinforced reader-ranker for open-domain question answering. In AAAI Conference on Artificial Intelligence. Wei Wang, Ming Yan, and Chen Wu. 2018b. Multigranularity hierarchical attention fusion networks for reading comprehension and question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1705–1714. Association for Computational Linguistics. Qinyuan Ye, Liyuan Liu, Maosen Zhang, and Xiang Ren. 2019. Looking beyond label noise: Shifted label distribution matters in distantly supervised relation extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3839–3848. Association for Computational Linguistics.
2020
501
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5668–5683 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5668 SCDE: Sentence Cloze Dataset with High Quality Distractors From Examinations Xiang Kong∗, Varun Gangal∗, Eduard Hovy Language Technologies Institute Carnegie Mellon University {xiangk,vgangal,hovy}@cs.cmu.edu Abstract We introduce SCDE, a dataset to evaluate the performance of computational models through sentence prediction. SCDE is a humancreated sentence cloze dataset, collected from public school English examinations. Our task requires a model to fill up multiple blanks in a passage from a shared candidate set with distractors designed by English teachers. Experimental results demonstrate that this task requires the use of non-local, discourse-level context beyond the immediate sentence neighborhood. The blanks require joint solving and significantly impair each other’s context. Furthermore, through ablations, we show that the distractors are of high quality and make the task more challenging. Our experiments show that there is a significant performance gap between advanced models (72%) and humans (87%), encouraging future models to bridge this gap.1 2 1 Introduction Cloze questions were first proposed by Taylor (1953) as a readability test, motivated by Gestalt psychology. They become an efficient way of testing reading for public exams, overtaking the dominant paradigm of subjective questions (Fotos, 1991; Jonz, 1991). Cloze datasets (Zweig and Burges, 2011; Hermann et al., 2015; Hill et al., 2015; Paperno et al., 2016; Onishi et al., 2016; Xie et al., 2018) became prevalent as questionanswering (QA) benchmarks since they are convenient either to be generated automatically or by annotators. These datasets could be split into two clear types: 1. Where the context is a complete text, and there is an explicit question posed which is a statement with a cloze gap. The answer is either generated freely or is a span ∗Equal Contribution 1Data: vgtomahawk.github.io/sced.html 2Code: https://github.com/shawnkx/SCDE from the context, e.g. Children’s Books Test (CBT) (Hill et al., 2015). 2. Where the context itself comes with cloze gaps. There is no explicit question. The answer is generated freely or chosen from a set of candidates, e.g. CLOTH (Xie et al., 2018). Herein, we focus on the 2nd category. A common property of these datasets is that they have gaps at the level of words, entities or short syntactic spans. The entity and span-based clozes may sometimes be multi-token, but they do not extend beyond a few tokens. Nevertheless, none of these datasets have cloze gaps at the level of full sentences. Since many syntactic and semantic cues are present in the same sentence, this makes the gap easier to fill compared to the sentence level cloze case where models would have to rely on “discourse” cues beyond the same sentence. Besides lack of intra-sentence cues, sentencelevel cloze may require comparing candidates of very different lengths. For instance, the example in Table 1 has a standard deviation of 7.6 with candidate lengths between 3 to 25. A model that only represents words well may not get comparable probabilities at sentence level for very different sentence lengths. Therefore, robust sentence representation models are also required to solve this question. In this paper, we present SCDE, a dataset of sentence-level cloze questions sourced from public school examinations. Each dataset example consists of a passage with multiple sentence-level blanks and a shared set of candidates. Besides the right answer to each cloze in the passage, the candidate set also contains ones which don’t answer any cloze, a.k.a., distractors. Both cloze positions and distractors are authored by teachers who design the public school examinations carefully. §3.2 explains our data collection. A representative example from SCDE is shown in Table 1. 5669 Passage: A student’s life is never easy. And it is even more difficult if you will have to complete your study in a foreign land. 1 The following are some basic things you need to do before even seizing that passport and boarding on the plane. Knowing the country. You shouldn’t bother researching the country’s hottest tourist spots or historical places. You won’t go there as a tourist, but as a student. 2 In addition, read about their laws. You surely don’t want to face legal problems, especially if you’re away from home. 3 Don’t expect that you can graduate abroad without knowing even the basics of the language. Before leaving your home country, take online lessons to at least master some of their words and sentences. This will be useful in living and studying there. Doing this will also prepare you in communicating with those who can’t speak English. Preparing for other needs. Check the conversion of your money to their local currency. 4. The Internet of your intended school will be very helpful in findings an apartment and helping you understand local currency. Remember, you’re not only carrying your own reputation but your country’s reputation as well. If you act foolishly, people there might think that all of your countrymen are foolish as well. 5 Candidates: A. Studying their language. B. That would surely be a very bad start for your study abroad program. C. Going with their trends will keep it from being too obvious that you’re a foreigner. D. Set up your bank account so you can use it there, get an insurance, and find an apartment. E. It’ll be helpful to read the most important points in their history and to read up on their culture. F. A lot of preparations are needed so you can be sure to go back home with a diploma and a bright future waiting for you. G. Packing your clothes. Answers with Reasoning Type: 1→F (Summary) , 2→E (Inference) , 3→A (Paraphrase) , 4→D (WordMatch), 5→B (Inference) (C and G are distractors) Discussion: Blank 3 is the easiest to solve, since “Studying their language” is a near-paraphrase of “Knowing even the basics of the language”. Blank 2 needs to be reasoned out by Inference - specifically E can be inferred from the previous sentence. Note however that C is also a possible inference from the previous sentence - it is only after reading the entire context, which seems to be about learning various aspects of a country, that E seems to fit better. Blank 1 needs Summary →it requires understanding several later sentences and abstracting out that they all refer to lots of preparations. Finally, Blank 5 can be mapped to B by inferring that people thinking all your countrymen are foolish is bad, while Blank 4 is a easy WordMatch on apartment to D. The other distractor G, although topically related to preparation for going abroad, does not directly fit into any of the blank contexts Table 1: A Representative Example from SCDE. Another salient aspect of our dataset is that more than 40% of blanks belong to the reasoning category “Inference” (more on this in §3.3 and Table 4) which require models to compare plausibility of competing hypotheses given a premise (whether the previous or last sentence(s), or even a combination of information from the two). Filling these blanks requires the model to reason by using commonsense knowledge, factual knowledge, time gaps, etc. Some of these can be thought of as simple entailment, but more generally, many of these can be seen as requiring abductive reasoning, which is of recent interest (Bhagavatula et al., 2019; Sap et al., 2019a,b) to the NLP community. In summary, our contributions are as follows 1. We introduce the task of sentence level cloze completion with multiple sentence blanks and a shared candidate set with distractors. 2. We release SCDE, a sentence level cloze dataset of ≈6k passages and ≈30k blanks. 3. We estimate human performance on SCDE, and benchmark several models, including state-of-the-art contextual embeddings (Table 5). We find a significant gap of > 15% for future models to close in order to match human performance. 4. Through several ablations described in §5.6, we show that distractors designed by English teachers are of high quality and make the task more challenging. 5. We show that extra sentence level cloze questions generated automatically from an external corpus can be used to further improve model performance through data augmentation (See §5.7). 2 Related Work Several cloze test datasets are collected to measure reading comprehension ability of machines. CNN/DailyMail (Hermann et al., 2015), an early dataset of current QA research, constructs cloze questions from article summaries, with article spans as answers. Their cloze gaps are entities and hence one or few tokens long at best. The LAMBADA dataset (Paperno et al., 2016) constructs a corpus of word level cloze gaps, such that each gap is in the last passage sentence. CBT (Hill and Simha, 2016) creates word level cloze questions by removing a word in the last sentence of every consecutive 21 sentences, with the first 20 sentences being the context. Onishi et al. (2016) curate a dataset of who-did-what type sentences with 5670 Dataset SL MB Distractors Candidates Position ∥Context∥w SCDE   Human Shared Anywhere 319 ROCSTORIES (2016)  × Human End 25 CLOTH (2018) ×  Human Separated Anywhere 243 LAMBADA (2016) × × Exhaustive End 76 CBT (2015) × × Automatic End 465 MRSCC (2011) × × Human Anywhere 20 Table 2: Comparing SCDE with previous cloze datasets. Exhaustive denotes the case where the entire vocabulary is a candidate for a word level cloze. For the single-blank case, candidate sharing is irrelevant. SL and MB mean sentence level and multi-blanks respectively. ∥Context∥w is the average token length of the context. word level blanks. The CLOTH (Xie et al., 2018) dataset collects word level cloze questions from English exams designed by teachers. MRSCC (Zweig and Burges, 2011) consists of 1,040 word level cloze questions created by human annotators. Among recent cloze datasets, ROCStories (Mostafazadeh et al., 2016) is the closest we could find to a sentence level cloze dataset. In this task, the first 4 sentences of a 5-sentence story are provided, and the task is to choose the correct ending from a pair of candidate ending sentences. However, there are several key differences between SCDE and ROCStories. Firstly, there are multiblanks in SCDE which are not in a fixed position and require learning cues from bidirectional contexts of varying lengths. Secondly, the endings in ROCStories have been found to contain “annotation artifacts” (Gururangan et al., 2018) which makes a large fraction of them predictable independent of context. In contrast, SCDE is by design independent of artifacts, since a) given a blank, only some of our candidates are distractors, the rest being answers for other blanks. Even if one were to learn a classifier to distinguish distractors without context, the non-distractor candidates would be unresolvable without context. b) we further check how distinguishable our distractors are from non-distractors without context by training a strong classifier in this setting, as described in §5.6. The classifier obtains a reasonably low F1 score of 0.38. In Table 2, we summarize the comparison of SCDE with cloze datasets from prior art to show its attractive aspects. Public school examinations have been used as a data source by many earlier QA works, two prominent examples being the CLEF QA tracks (Penas et al., 2014; Rodrigo et al., 2015) and RACE (Lai et al., 2017). 3 SCDE Dataset 3.1 Sentence Cloze Test with distractors In this task, each question consists of a passage, S, multiple sentence level blanks B, and a shared set of candidates C with distractors D, where D ⊂C. Problem Complexity3 For our case, given the typical value of |C| and |B| being 7 and 5 respectively, the size of the answer space, |A| is 2520. Thus, the chance of guessing all blanks correctly at random is only 0.04%. Moreover, there is a 48.2% probability of being entirely wrong with randomly guessing. Finally, given an answer list chosen uniformly at random, the expectation of number of distractors in the answer list is 1.4, i.e. on average, roughly one and half answers are distractors. 3.2 Data Collection and Statistics Raw sentence cloze problems are crawled from public websites4 which curate middle and high school English exams designed by teachers. In total, 14,062 raw passages and 68,515 blank questions are crawled from these websites and the following steps are used to clean them. Firstly, duplicate passages are removed. Secondly, when the official answer to the problems are images, two OCR toolkits5 are employed to convert these images to text and the questions with different results from these two programs will be discarded. Finally, we remove examples which have 1) answers pointing to non-existent candidates, 2) missing or null candidates, 3) number of blanks > number of candidates, 4) missing answers. After cleaning, we obtain our SCDE dataset with 5,959 passages and 29,731 blanks. They are 3We defer the derivation to Appendix §1 4http://www.21cnjy.com/; http://5utk.ks5u.com/; http://zujuan.xkw.com/; https://www.gzenxx.com/Html/rw/. 5tesseract; ABBYY FineReader 5671 Statistic Value Total Passages 5,959 Total Blanks 29,731 Blanks Per Passage 4.99 # Candidates Per Passage 6.79 Avg Candidates Per Blank 1.35 % Consecutive Blanks 1.28 # Words Per Passage 319.64 Vocabulary Size 48.6k Var(Candidate Length) 19.54 Table 3: SCDE Statistics. For Consecutive Blanks, either of previous or next sentences is also a blank. randomly split into training, validation and test sets with 4790, 511 and 658 passages respectively. The detailed statistics are presented in Table 3. We find that candidates have very different lengths and passages have long context. 3.3 In-Depth Analysis & Categorization In order to evaluate students’ mastery of a language, teachers usually design tests in a way that questions cover different aspects of a language. Reasoning Types As illustrated with examples in Table 4, we set a four-fold categorization for the reasoning which leads to a ground truth candidate being assigned to a blank. Our reasoning type taxonomy is motivated by categorization of question types in earlier works in QA such as (Chen et al., 2016; Trischler et al., 2017)6. Strictly speaking, these reasoning types could co-exist. But for simplicity, we classify each blank into only one of the four. • WORDMATCH: If the candidate has word overlap, especially of non-stopwords or infrequent phrases, with context around the blank. • PARAPHRASE: If the candidate doesn’t have an explicit word overlap with the context, but nevertheless contains words or phrases which are paraphrases of those in the context. • INFERENCE: If the candidate is a valid hypothesis conditioned on the left context [as premise], or a necessary precondition/premise based on the right context. Note that the candidate in this case doesn’t contain word overlap/paraphrases which would obviate need for inferential reasoning. The reasoning required needs not 6See Section 4.2 from both respective papers. be just strict entailment (Bowman et al., 2015; Marelli et al., 2014) but could also involve abductive reasoning (Bhagavatula et al., 2019), where the candidate is just one of many likely hypothesis (premise) given the left (right) context as premise (hypothesis). • SUMMARY: If the candidate is a summary, introduction, or conclusion of multiple sentences before or after it. In this type, unlike INFERENCE, there is no requirement to deduce and reason about new hypotheses/possibilities not present in the premise only consolidation and rearranging of information is required. A sample of 100 passages containing 500 blanks are manually categorized into these four categories. Examples and statistics of these four types are listed in Table 4. More than 40% blanks need inference to be solved, denoting the high difficulty of our dataset. 4 Methods 4.1 Context Length We experiment with giving our models different amounts of context. Through this, we can explore how context length affects model performance. 1. P(N): Immediate previous (next) sentence 2. P+N: Immediate previous and next sentence 3. AP(AN): All previous (next) sentences 4. AP+AN: All previous and next sentences AP+AN is the unablated setting, where all passage sentences are available to the model. 4.2 PMI Before exploring deep representational approaches, we would like to find how well symbolic ones perform at this task. Starting with works such as Iyyer et al. (2015) and Arora et al. (2017), it has become convention to first benchmark simple baselines of this kind. PMI merely encodes how likely it is for a word pair to occur in consecutive sentences. It does not consider the internal sentence structures, or the relative position of the words in their respective sentence. Intuitively, it can be called a “surfacelevel” approach. A high performance by PMI would indicate that candidates can be matched to blanks by simple ngram statistics, without requiring sentence representation, which would make SCDE uninteresting. 5672 Type Examples with Excerpts From Blank Context WM (18.47%) 1: One day, a teacher was giving a speech to his student. He held up a glass of water and asked the class The students answers ranged from 20g to 500g.  Candidate: B. How heavy do you think this glass of water is? × Candidate: D. It does not matter on the weight itself. Explanation: WordMatch based on glass of water. Para. (19.48%) 2: If you want time to have breakfast with your family, save some time the night before by setting out clothes, shoes and bags. That’s a quarter-hour more you could be sleeping if you bought a coffee maker with a timer. × Candidate: D. And consider setting a second alarm. × Candidate: F. Stick to your set bedtime and wake-up time, no matter the day.  Candidate: G. Reconsider the 15 minutes you spend in line at the cafe. Explanation: Need to match 15 minutes, quarter-hour and coffee, cafe. Infer. (41.97%) 3: May is a great month. You can have a good time with your family. × Candidate: E. All the students can come to their schools.  Candidate: F. From May 1st to 7th, we don’t need to come to school. × Candidate: G. On May 20th, a famous sports star YaoMing comes to our school. Explanation: Need to infer that not coming to school →one is at home with family. Simply matching for words May or school will also match wrong candidates. Sum. (20.08%) 4: How to Enjoy Life As a Teen? Are high school days equal to the “best years of your life”? Maybe not, but you can learn to make the most of your high school days Whether it ’s having a computer, having friends, having a good supply of food, a bed to sleep on, family that loves you, having a decent education or simply being born in this world. Be happy, and life will reward you. × Candidate: A. Remember that the point of life is for you to enjoy it.  Candidate: C. Learn to appreciate small things. Explanation: After summarizing sentences after the blank [which describe a list of “small things”], the answer should be C. A is a strong distractor since both “enjoy” and “life” appear in the context, besides being pertinent to the topic. Indeed, our best-performing BERT-ft model chooses A as the answer. Table 4: Blanks in a sample of 100 passages are manually categorized into four categories. For the ease of illustration, we’ve shown only limited context around the blanks , and 1-2 wrong candidates. WM, Para., Infer. and Sum denote WordMatch, Paraphrase, Inference and Summary respectively. More examples are in Appendix. We estimate PMI counts (Church and Hanks, 1990) from all consecutive sentence pairs in our training split. Let f denote frequency PMI(ws, wc) = f(ws ∈S, wc ∈C) f(ws ∈S)f(wc ∈C) Note that our PMI definition diverges from typical PMI since its asymmetric between ws and wc. Since S and C are the sets of non-terminating and non-starting sentences respectively, they overlap but aren’t identical. For a pair of sentences, we find aggregate PMI(S, C) as: PMI(S, C) = 1 |C||S| X wc∈C X ws∈S PMI(ws, wc) This definition can be extended to all n-grams upto a certain n. We denote this by PMIn. We notice that PMIn performance saturates after n = 2. Hence, in our experiments, we use PMI2. 4.3 Language Modelling One intuitive way to solve this task is to generate the blank sentence given the context by advanced pre-trained language models (LM). Formally, suppose the blank is the ith sentence, si, and s1, . . . , si−1, si+1, . . . , sn are the context. Our goal is to choose ck from the candidate set which could maximize the joint probability p(s1, . . . , si−1, ck, si+1, . . . , sn). Due to limited number of passages available to train a robust LM, Transformer-XL (TR.XL) Base (Dai et al., 2019), trained on WikiText-103, is employed to address this task. In order to make decoding time tractable, context length is limited to three sentences before and after the blank. 4.4 Coherence Coherence models assign a continuous score to a sentence sequence indicative of its coherence. This score is usually unnormalized and not needed to be a probability [unlike language models]. We use the local coherence approaches implemented by the COHERE7 framework (Smith et al., 2016). Roughly, this model works on the intuition that successive sentences exhibit regularities in syntactic patterns. Specifically, it uses ngram patterns on linearized syntactic parses (e.g. S NP VP ...) of consecutive sentences. Once 7github.com/karins/CoherenceFramework 5673 trained, this model can return a “coherence score” for any sentence sequence. The COHERE model is first trained on all ground-truth passages from our training set, with the ground truth answers filled into the blanks. At test-time, we score each possible answer permutation using the trained COHERE model and pick the highest scoring one. Note that decoding for COHERE is by definition exhaustive, and doesn’t make any assumptions by answering the blanks in a particular order. 4.5 InferSent Conneau et al. (2017) use textual inference supervision as a signal to train a shared sentence encoder for premises and hypotheses, which can later be used as a sentence representor. We refer to this approach as INFST. Context features of a given blank and one candidate feed to two encoders in INFST respectively and classify whether this candidate is suitable to this blank. The maximum tokens of context features is set as 256. Bi-directional LSTMs with the max pooling operation are employed as our encoders. We follow the training procedure described in Conneau et al. (2017). 4.6 BERT Models Input Representations Let ck denotes the kth candidate. s−i and s+i denote the ith sentence before and after the blank respectively and |P| and |N| represent total number of sentences before and after the current blank respectively. Following the input convention in Devlin et al. (2018), the input sequence given various context lengths and ck is: 1. P : [CLS]s−1[SEP]ck 2. N : [CLS]ck[SEP]s+1 3. AP : [CLS]s−|P| . . . s−1[SEP]ck 4. AN : [CLS]ck[SEP]s+1 . . . s+|N| To retain sentence sequentiality, the order between the context and the candidate follows that in the original passage. Furthermore, for (A)P+(A)N, we create and score one input sample for each of the context directions during prediction. The average of these two scores is taken as the final score. The maximum tokens of input is set as 256 in our experiments and only the context is truncated to meet this requirement. BERT Next Sentence Prediction (NSP) One of the objectives in BERT pre-training stage is Type Model BA/PA UNSUP BERT 36.9/3.5 TR.XL 32.3/2.6 FT BERT 71.7/29.9 SUP PMI2 29.8/8.4 COHERE 23.3/1.1 INFST 55.8/18.4 HUMAN 87.1/56.3 Table 5: Test BA/PA of various model types with EXH decoding and AP+AN context. understanding the relationship between two sentences, which is highly correlated with our task. Therefore, we use the pre-trained BERT-Largeuncasedd with its NSP layer to predict the most appropriate candidate for each blank given its context. Specifically, BERT is employed to predict the probability of the context and the candidate being consecutive. Finetuning BERT A wide range of NLP tasks have greatly benefited from the pre-trained BERT model. Therefore, we also finetune the pre-trained BERT-Large model on our task through sequence pair classification schema. Specifically, for each blank, its correct candidate will be labelled as 0 and the label of all other wrong candidates is 1. Batch size and number of epochs for all models are 32 and 3. We employ Adam (Kingma and Ba, 2014) as the optimizer with three different learning rates {1e−5, 2e−5, 3e−5}. Best model selection is based on validation performance. All BERT finetuning experiments including ablation study follow this training strategy. 5 Experiments 5.1 Decoding Strategy The decoding strategy decides how exactly we assign a candidate to each blank in the passage. Due to shared candidates, we have two strategies: 1. INC: Answering each blank from left to right in order. Once a blank is answered with a candidate, this candidate is unavailable for later blanks. 2. EXH: Exhaustively scoring all permutations of candidates to answer the blanks. The score of a permutation is simply the sum of each its 5674 Type Model P N AP AN P+N AP+AN UNSUP BERT+INC 33.0/2.1 34.7/4.1 29.8/2.1 15.7/0.3 34.7/2.3 27.3/1.4 +EXH 34.2/3.2 40.2/4.7 31.5/2.6 14.7/0.0 40.2/4.7 36.9/3.5 FT BERT+INC 44.3/6.8 48.0/9.6 50.4/9.9 56.9/16.1 61.0/20.4 66.6/25.1 +EXH 47.2/8.5 54.2/11.2 60.0/17.5 60.0/17.5 66.5/25.2 71.7/29.9 SUP PMI2+INC 23.4/1.2 24.4/1.5 16.2/0.3 17.5/0.1 26.2/1.7 17.1/0.0 +EXH 24.7/1.5 28.2/1.5 20.6/0.9 13.3/0.0 29.7/2.6 25.2/0.6 Table 6: Test BA/PA of various model types unsupervised (UNSUP), finetuned (FT) and supervised (SUP) across varying context levels, with INC or EXH decoding. BERT-Un TR.XL BERT-ft RemoveDt 47.4/17.2 39.7/9.1 80.9/62.0 RandomDt 44.6/12.4 36.0/6.8 77.9/50.9 Unablated 40.2/4.7 32.3/2.6 71.7/29.9 Table 7: Test BA/PA with distractor ablations on test set. RemoveDt and RandomDt represent removing and sampling distractors respectively. BERT-Un and BERT-ft denotes pre-trained and finetuned BERT. constituent blank-candidate pairs. The highest scoring permutation is the answer. 5.2 Evaluation Metrics We design two metrics to evaluate models. Both of these metrics are reported as percentage. Blank accuracy (BA): The fraction of blanks answered correctly, averaged over all passages. Passage Accuracy (PA): PA is 1 iff the model gets all blanks in a passage correct, and 0 otherwise. The average of PA over all passages is reported. 5.3 Human Performance We hire annotators from AMT to both answer and label difficulty for 144 randomly chosen test examples. Annotators are restricted to be from USA/UK and have Master designation on AMT8, along with > 90% HIT approval rate. On average, each annotator spends 624 seconds to answer one example. Difficulty level is chosen from {VeryHard, Hard, Moderate, Easy, VeryEasy}. 3.5% of annotators find the task VeryHard, while 8.3% find it VeryEasy. The largest fraction of 38.2% find it to be Moderate. We note that SCDE contains a larger proportion of non-easy 8Marked by AMT based on approval %, no. approved etc. questions (61.0%). Human performance is reported in Table 5. Annotators achieve BA of 87% which we take as the ceiling performance for models to match. 5.4 Model Performance All models are trained with AP+AN context and decoded by EXH9. Results are shown in Table 5. Finetuning BERT achieves the best performance among other models, though it still lags behind human performance significantly. Unsupervised models could only solve one third of all blanks. Surprisingly, PMI2 and COHERE performs worse than the unsupervised models. We conjecture that it is difficult for COHERE, using syntactic regularities alone, to distinguish between the ground truth answer for a particular blank and another candidate which is a ground truth answer for another nearby blank. As noted, PMI2 suffers due to inability of incorporating larger context. To explore effects of various context length and decoding strategies, models are trained with different context lengths and inferred by both decoding methods. Results are shown in Table 6. INC vs EXH EXH is better than INC for most approaches, indicating that human created blanks are interdependent and need joint answering. Context Length Increasing the context length, such as (P vs. AP), could significantly improve model performance, showing that this task needs discourse-level context to be answered. Furthermore, models with bidirectional context, such as (P+N), perform better than single-direction context, e.g., P, indicating that this task needs global context. Lastly, we observe that PMI-based approaches which do not explicitly encode sentences 9Unless stated otherwise, models decode with EXH and are trained with full context i.e AP+AN 5675 Figure 1: Test blank accuracy of BERT-ft and Human on each reasoning type category introduced in §3.3. are unable to incorporate larger context levels, showing best performance with P+N. 5.5 BERT-ft vs. Human BERT after finetuning (BERT-ft) can perform reasonably well (72%) but there is still a gap comparing with human performance (87%). In this section, we would like to analyze the strength and weakness of BERT-ft compared with HUMAN. Therefore, we analyze their performance across different reasoning categories on test set. From Figure 1, inference questions are the most difficult for both HUMAN and BERT-ft and questions needing WordMatch are relatively easy. Compared with human performance, BERT-ft could achieve comparable BA on WordMatch and paraphrasing problems. However, BERT-ft performs much worse on questions needing inference and summary. We also refer to some examples from Table 4. In Example 4, BERT-ft prefers A but the answer is C. The reason why BERT-ft chooses A may be that “enjoy life” happens in the context, but summarizing the next sentence is necessary to achieve the correct answer. Therefore, it is necessary to improve the ability of BERT to represent meaning at the sentence level beyond representing individual words in context. We also explore how the system performance corresponds to the human judgement of difficulty. Since evaluates rate the problems into 5 difficulty levels, we report the system BA/PA for each level in Table 8. For BA (blank-level accuracy), we see that, overall, the system accuracy decreases as difficulty increases from VeryEasy (0.75) to VeryHard (0.68). However, the decrease is not exactly monotonic (there is a small increase from VeryEasy to Easy, as also from Moderate to Hard). We conjecture that non-monotonicity could be due to two reasons: • Our difficulty annotations are at passage level rather than blank level. There might be some hard blanks in a passage marked overall “Easy”. Conversely, there might be easy blanks in a passage marked overall “Hard”. • Since we’ve more examples marked with certain difficulty levels - e.g 30.5% examples are “Easy” while only 8.3% are “VeryEasy”. This might make system accuracy average for levels with more examples more stable (lower sample variance), leading to some nonmonotonicity (e.g for Easy and VeryEasy) For PA (passage-level accuracy, i.e., getting all questions correct) also, we see a clear decrease as difficulty increases from VeryEasy (0.63) to VeryHard(0.2). The decrease here is sharper than BA , with only one violation of monotonicity (increase from 0.29 to 0.35 on Moderate to Hard). The sharper trend for PA supports our first point above. Diffculty BA PA Very Easy 0.75 0.63 Easy 0.78 0.45 Moderate 0.71 0.29 Hard 0.72 0.35 Very Hard 0.68 0.20 Table 8: BERT-ft performance in terms of human judgement of diffculty. 5.6 Distractor Quality An attractive aspect of this task is distractors designed by English teachers. We verify distractor quality through the following experiments. Model Performance w/o Distractors All distractors in the test set are removed and models are evaluated on this non-distracted test set. Results are shown in Table 7. It is clear to see that after removing these distracting candidates, models can get better scores, showing that models find it hard to exclude distractors during prediction. Randomly Sampled Distractors After removing human-created distractors, we further randomly sample sentences from other passages as new distractors. To mitigate sampling variance, we run this experiment with 8 seeds and report the 5676 Model Uni. PMI2 BERT-ft HUMAN DE 1.429 1.204 0.661 0.375 Table 9: Distractor error on test set of different models. Uni. denotes the uniform model. Training Strategy PA BA DE QA 65.2 26.1 0.792 QH 71.7 29.9 0.661 QA ; QH 74.2 33.9 0.624 QA + QH 74.5 34.3 0.637 Table 10: Test performance of models with QA and QH. averaged score in Table 7. Comparing with distractors designed by teachers, models could discern these distractors more easily. Annotation artifacts of distractors Annotation artifacts (Gururangan et al., 2018) occurs in many datasets created by human annotators. A potential artifact type for our task is whether we could detect distractors without passages. Therefore, we finetune BERT-Large as a binary classifier, the input of which is just distractors and other correct candidates. With this model, we could only obtain 38% F1 score on the test set, showing that it is difficult to filter distractors out without any context. Distractor Error (DE) We define DE as the number of predicted answers per passage which are actually distractors. Through DE, we measure a model’s ability to exclude distractors during prediction. Results are shown in Table 9. HUMAN has the lowest DE and BERT-ft could discern distractors to some extent. However, DE of PMI2 is more than 1, meaning that on average, there is atleast one distractor in the predicted answer list. In summary, distractors created by teachers are high quality and increase task difficulty. 5.7 Automatically Generated Sentence Cloze Questions To explore automatic generation of examples for the task, we construct sentence cloze questions by randomly choosing five sentences in a passage as blanks. We defer automatically generating distractors to future work since non-trivial distractor generation is a hard problem in itself. Specifically, we extract all passages from RACE (Lai et al., 2017) (which is also from exams) and filter out passages which have less than 10 sentences or more than 30 sentences. While choosing blank positions, we prevent three or more blanks consecutive to each other in generated questions. Finally, 16,706 examples are obtained automatically. Here, questions generated automatically and collected from examinations are called QA and QH respectively. We leverage QA in three ways: 1). train models only on QA , 2) first train models on QA and finetune models on QH, i.e., QA ; QH, 3) train models on the concatenation of QA and QH, i.e., QA + QH. BERT-Large is finetuned through these ways and results are shown in Table 10. The model trained only on QA has worst performance and we attribute this to the difficulty of distinguishing distractors without seeing them during training. Therefore, this model has the highest DE. However, models trained on QH and QA could achieve better performance. We conjecture this is because QA assists the model to have better generalization. 6 Conclusion We introduce SCDE, a sentence cloze dataset with high quality distractors carefully designed by English teachers. SCDE requires use of discourselevel context and different reasoning types. More importantly, the high quality distractors make this task more challenging. Human performance is found to exceed advanced contextual embedding and language models by a significant margin. Through SCDE, we aim to encourage the development of more advanced language understanding models. Acknowledgements We thank Qizhe Xie, Hiroaki Hayashi and the 3 anonymous reviewers for valuable comments. 5677 References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. ICLR. Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2358–2367. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational linguistics, 16(1):22–29. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Sandra S Fotos. 1991. The cloze test as an integrative measure of eflproficiency: A substitute for essays on college entrance examinations? Language learning, 41(3):313–336. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 107–112. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693– 1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301. Jennifer Hill and Rahul Simha. 2016. Automatic generation of context-based fill-in-the-blank exercises using co-occurrence likelihoods and google ngrams. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 23–30. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1681–1691. Jon Jonz. 1991. Cloze item types and second language comprehension. Language testing, 8(1):1–22. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785– 794. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 1–8. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457. Denis Paperno, Germ´an Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fern´andez. 2016. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031. 5678 Anselmo Penas, Yusuke Miyao, Alvaro Rodrigo, Eduard H Hovy, and Noriko Kando. 2014. Overview of CLEF QA Entrance Exams Task 2014. In CLEF (Working Notes), pages 1194–1200. Alvaro Rodrigo, Anselmo Penas, Yusuke Miyao, Eduard H Hovy, and Noriko Kando. 2015. Overview of CLEF QA Entrance Exams Task 2015. In CLEF (Working Notes). Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019a. Atomic: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027–3035. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019b. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728. Karin Sim Smith, Wilker Aziz, and Lucia Specia. 2016. Cohere: A toolkit for local coherence. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 4111–4114. Wilson L Taylor. 1953. “cloze procedure”: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Qizhe Xie, Guokun Lai, Zihang Dai, and Eduard Hovy. 2018. Large-scale cloze test dataset created by teachers. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2344–2356. Geoffrey Zweig and Christopher JC Burges. 2011. The microsoft research sentence completion challenge. Microsoft Research, Redmond, WA, USA, Tech. Rep. MSR-TR-2011-129. 5679 A Problem Complexity With |B| = 5 blanks and |C| = 7 candidates, the size of answer space, |A|, is number of permutations |B| objects taken |C| at a time, i.e., P(7, 5) = 2520. Therefore, the probability of answering all blanks correctly is 1 2520 = 0.03% What are the chances of getting answers partially correct? What are the chances of getting answers partially correct? If we have the same number of candidates as blanks, this is equivalent to |B|! −D|B|, where D|B| is the number of derangements10 of |B| elements. In the presence of more candidates than blanks i.e distractors, this expression becomes more involved to derive. Therefore, here, we enumerate all the permutation of answer lists given a correct answer. With |C| = 7 and |B| = 5, ζ(|C|, |B|) = 51.8%. In other words, there is a 48.2% probability of being entirely wrong with a randomly chosen set of answers to each blank in the passage. What are the chances of getting distractors as predicted answers? For the expectation of number of distractors choosing by uniform model, it should be E[DE], where DE denotes distractors errors. 2 X d=0 p(DE = d) × d (1) where p(DE = d) denotes the probability of d predicated answers are distractors. Since there are two distractors in candidates, the maximum of d is 2. Furthermore, p(DE = 1) is P(5, 4)C(5, 4)C(2, 1)/|A| = 0.476 (2) and p(DE = 2) is P(5, 3)C(5, 3)A(2, 2)/|A| = 0.476 (3) where C(·, ·) and P(·, ·) is combination and permutation respectively. Therefore, the expectation of number of distractors is 1.429. B Additional Experiment Specifications Specific BERT Model Used We use uncased BERT models for all our experiments. We use the BERT models trained by the canonical pytorch implementation of Wolf et al. (2019). 10en.wikipedia.org/wiki/Derangement C More examples We show more examples belonging to different reasoning categories in Table 11. Also, some completed questions with strong distractors, multiblank logic and diverse reasoning types are shown in Table 12, 13 and 14. 5680 Reasoning Examples with Excerpts From Blank Context WM (18.47%) 1: One day, a teacher was giving a speech to his student. He held up a glass of water and asked the class. The students answers ranged from 20g to 500g.  Candidate: B. How heavy do you think this glass of water is? × Candidate: D. It does not matter on the weight itself. Explanation: Match based on glass of water 2: Begin the sleep adjustment for your school schedule as early as possible. But if you feel you will need some extra time to adjust, start earlier.  Candidate: C. Starting a few days early will be enough. × Candidate: A. Relax before you go to bed. Explanation: Match based on early, start Para. (19.47%) 3: If you want time to have breakfast with your family, save some time the night before by setting out clothes, shoes, and bags. That’s a quarter-hour more you could be sleeping if you bought a coffee maker with a timer.  Candidate: G. Reconsider the 15 minutes you spend in line at the cafe. × Candidate: F. Stick to your set bedtime and wake-up time, no matter the day. × Candidate: D. And consider setting a second alarm Explanation: Need to match 15 minutes, quarter-hour and coffee, cafe 4: Riding a London subway, a person from China will notice one major difference: In London, commuters do not look at each other. That’s not rudeness- people are just too busy to bother looking.  Candidate: E. In fact, eye contact is avoided at all times. × Candidate: F. Apple must earn a fortune from London commuters. × Candidate: G. Modern Londoner are fancy victims. Explanation: Need to match looking and eye contact Infer. (41.16%) 5: May is a great month. You can have a good time with your family.  Candidate: F. From May 1st to 7th, we don’t need to come to school. × Candidate: G. On May 20th, a famous sports star YaoMing comes to our school. × Candidate: E. All the students can come to their schools. Explanation: Need to infer that not coming to school →one is at home with family. Simply matching for words May or school will also match wrong candidates. 6: The Colosseum in Rome was built during the time of the Roman Empire, in the first century AD. . It is a popular tourist attraction today.  Candidate: D. It could seat 50K people, who went to see fights between animals and people. × Candidate: B. The country used to depend on agriculture. × Candidate: C. Mountains cover about three-fourths of the country. Explanation: World knowledge that Colosseum or -eum suffix relates to building with seating facility. Also coreference with the It in It is a popular . . . 7: American students usually get to school at about 8 : 30 in the morning. In class, American students can sit in their seats when they answer teachers’ questions.  Candidate: B. School starts at 9:00 a.m. × Candidate: D. Then they take part in different kinds of after-school activities. Explanation: Requires inference about time. Activity starts at 9 after participants get there before. Sum. (20.08%) 8: Around water, adults should watch children at all times to make sure they are safe. Those who don’t know how to swim should wear life jackets. But by themselves they are not enough, so an adult should always be present. If you have to rescue a child from drowning, a few seconds can make a big difference. Make sure you have a friend with you whenever you swim. . That person can make sure you get help. Drink a lot water. The sun’s heat and the physical activity may make you sweat more than you realize. By following these simple rules, you can make sure your swim time is safe as well as fun.  Candidate: B. Now get out there, and enjoy the water. × Candidate: D. Make sure everyone in your family swim well. Explanation: B is a good conclusion pertinent to the content of the passage. 9: . Whenever you are worried, write down the questions that make you worry. And write out all the various steps you could take and then the probable consequences of each step. For example, ”What am l worrying about?”, What can I do about it? Here is what I’m going to do about it. After carefully weighing all the facts, you can calmly come to a decision.  Candidate: A. Analyze the facts. × Candidate: C. Decide how much anxiety a thing may be worth. Explanation: A is a more appropriate option to summarize its succeeding context. Table 11: More examples of reasoning categories. 5681 Dear David 1 After I had spent a week with my English family, I slowly began to understand their English a little better. 2 Students in my group are from different cities of Britain and their dialects are different too! Some of their accents are quite strong and they also have their own words and expressions. 3 Before I came to England I had thought that fish and chips were eaten every day. That’s quite wrong! I get rather annoyed now when I hear all the foolish words about typical English food. I had expected to see “London fog”. Do you remember our texts about it ? We had no idea that most of this “thick fog” disappeared many years ago when people stopped using coal in their homes. But the idea to speak about weather was very helpful. 4 On the other hand , habits are different . People tell me what is typical British here in London is not always typical in Wales or Scotland. 5 But what is ordinary for all British is that they follow traditions. Probably Britain has more living signs of its past than many other countries. And people have always been proud of having ancient buildings in capitals, big cities and the countryside. I will tell you more about Britain in my other letters. Love from Britain. Candidates: A. But it’s not the language that’s different and surprising. B. Thanks for your nice letter. C. I have difficulty in understanding my classmates. D. The family I live with are friendly. E. It ’s very different from what I learned at school. F. Local habits and traditions are not the same as what we knew. G. The weather in London is really changeable. Answers: 1→B , 2→E, 3→A , 4→G, 5→F (C and D are distractors) Discussion: C is a strong distractor - not only does it have strong word overlap with the contexts of many blanks it also has words which can make it rank high in terms of the possible inferences (dialects are different implies difficulty in understanding. Though not as strong as C, D also has a key word matching and is similar in content to the topic. How to Enjoy Life As a Teen. Are high school days equal to the “best years of your life”? Maybe not, but you can learn to make the most of your high school days. 1 Whether it’s having a computer, having friends, having a good supply of food, a bed to sleep on, family that loves you, having a decent education or simply being born in this world. Be happy, and life will reward you. Remember that these are the last few years you will be able to enjoy yourself without having to worry about the responsibility of an adult, but make sure you prepare yourself for when you do become one. Choose your friends wisely. Unlike what many articles state, you don’t have to be popular and have a gazillion friends to be happy. 2 Try to have friends that like you who you are, not just because you are wearing a certain brand of shoes or something like that. These are people who shop at the same store as you; not someone who will sympathize with you when your dog dies. 3 Participating in clubs, activities, and sports increases your chances of meeting new friends. While you only need 4 or 5 close friends, that doesn’t mean you shouldn’t try to meet new people. Participating gives you something to do instead of sitting bored at home and wallowing in self-pity. You can pursue interests you enjoy. Video games, for example, are good if you’re the type who can get into that kind of thing. Use your “hobby time” either to gain practical skills for college apps, job resumes, and scholarships or get into something else in the creative field like painting or dance. 4 Work at a job you can enjoy. Working is a great way to gain experience and to meet other people. When you do get out of college, interviewing companies will look at your prior work experience. 5 If you can’t find work, especially in this hard economic time, volunteer or make your own job. Candidates: A.Remember that the point of life is for you to enjoy it. B. In fact, many of the “friends” you have when you are popular are not true friends. C. Learn to appreciate small things. D. Be sociable. E. This will look great on your resume. F. This is the time to start developing passions. G. You should also find a hobby that is meaningful or practical. Answers: 1→C , 2→B, 3→D , 4→F, 5→E (A and G are distractors) Discussion: Both A and G are strong distractors especially for 4. Both of them overlap on key words, and do fit in the local context, though they are less coherent w.r.t F (which doesn’t have any overlapping words) when placed in the broader narrative. Table 12: Examples with strong distractors 5682 The demand for ways to improve memory is higher in students than it is in adults. Students often come across new knowledge in different areas that they need to store for exams. 1 Here are three effective ways to improve your memory as a student. 2 Research shows that learning activities that take more than two hours without a break are less productive when compared to those that take one hour or 30 minutes. Students are likely to remember things they learn over a short period of time. Make sure you take breaks between learning sessions to help improve your memory. Try to relax. Relaxing should be an essential part of your learning process. Scientists have proven that stronger and lasting memories can be achieved when a person relaxes. 3 Deep breathing is one of the most popular relaxation techniques. Establish a quiet environment and find a comfortable position. Then go through a deep breathing process for at least 15 minutes. Train the brain Students should give their brains a workout in order to improve their memory. At times the brain needs the right stimulation to keep growing and developing. You need to come up with a brain boosting activity that is suitable for you. 4 Write a short story and then try to use seven to nine words to describe it. You can also do games and puzzles to help improve your memory. 5 The techniques discussed above will help you to improve your memory significantly. Candidates: A. Distribute learning. B. Enrich learning activities. C. Some students suffer with memory problems. D. Like a muscle memory can stretch and grow with a workout. E. For instance you can prepare a list of items and try to memorize them. F. You need to use different relaxation techniques in order to improve your memory. G. In summary a good memory is an important advantage to any student who wants to improve his or her grades. Answers: 1→C, 2→A, 3→F , 4→E, 5→G (B and D are distractors) Discussion: The candidate F can actually go into three possible blanks and fit well into their context - Blanks 1, 3 and 5. This can be seen from the several overlapping phrases/paraphrases F shares with all three, as shown by the three colors (one per concept). However, G (which starts with the phrase In summary, can only fit into Blank 5. A is also difficult to place in any blank other than Blank 1. Hence , candidate F has to be placed into Blank 3. Table 13: Examples which require multi-blank logic 5683 A student’s life is never easy. And it is even more difficult if you will have to complete your study in a foreign land. 1 The following are some basic things you need to do before even seizing that passport and boarding on the plane. Knowing the country. You shouldn’t bother researching the country’s hottest tourist spots or historical places. You won’t go there as a tourist, but as a student. 2 In addition, read about their laws. You surely don’t want to face legal problems, especially if you’re away from home. 3 Don’t expect that you can graduate abroad without knowing even the basics of the language. Before leaving your home country, take online lessons to at least master some of their words and sentences. This will be useful in living and studying there. Doing this will also prepare you in communicating with those who can’t speak English. Preparing for other needs. Check the conversion of your money to their local currency. 4 The Internet of your intended school will be very helpful in findings an apartment and helping you understand local currency. Remember, you’re not only carrying your own reputation but your country’s reputation as well. If you act foolishly, people there might think that all of your countrymen are foolish as well. 5 Candidates: A. Studying their language. B. That would surely be a very bad start for your study abroad program. C. Going with their trends will keep it from being too obvious that you’re a foreigner. D. Set up your bank account so you can use it there , get an insurance , and find an apartment. E. It’ll be helpful to read the most important points in their history and to read up on their culture. F. A lot of preparations are needed so you can be sure to go back home with a diploma and a bright future waiting for you. G. Packing your clothes. Answers with Reasoning Type: 1→F (Summary), 2→E (Inference), 3→A (Paraphrase), 4→D (WordMatch), 5→B (Inference) (C and G are distractors) Discussion: Blank 3 is the easiest to solve, since Studying their language is a near-paraphrase of Knowing even the basics of the language. Blank 2 needs to be reasoned out by Inference - specifically E can be inferred from the previous sentence. Note however that C is also a possible inference from the previous sentence - it is only after reading the entire context, which seems to be about learning various aspects of a country, that E seems to fit better. Blank 1 needs to be reasoned out by Summary →it requires understanding several later sentences and abstracting out that they all refer to lots of preparations. Finally, Blank 5 can be mapped to B by inferring that people thinking all your countrymen are foolish is bad, while Blank 4 is a easy WordMatch on apartment to D. Latest news and comment on Street art from guardian.co.uk... 1 You can find it on buildings sidewalks street signs and trash cans from Tokyo to Paris from Moscow to Cape Town. Street art has become a global culture and even art museums and galleries are collecting the works of street artist. Street art started out very secretly because it was illegal to paint on public and private property without permission. 2 Some think it is a crime and others think it is a very beautiful new form of culture. Art experts claim that the street art movement began in New York in the 1960s. Young adults painted words and other images on the walls and trains. This colorful style of writing became known as graffiti whose art showed that young people wanted to rebel against society. Street artists do their work for different reasons. 3 They choose street art because it is closer to the people. Some artists try to express their political opinion in their work. Others like to do things that are forbidden and hope they don’t caught. Advertising companies also use street art in their ads because it gives people the impressions of youth and energy. 4 Artists can show their pictures to an audience all over the world. Many city residents however say that seeing a picture on the Internet is never as good as seeing it alive. 5. There it will continue to change and grow Candidates: A. Street art is a very popular form of art that is spreading quickly all over the world. B. Today the Internet has a big influence on street art. C. With the development of science and technology different art styles come into the Internet. D. The street art movement lives with the energy and life of a big city. E. People often have different opinions about street art. F. Street art used to be illegal but now has become popular. G. Some of them do not like artists who make so much money in galleries and museums. Answers with Reasoning Type: 1→A (Summary), 2→E (Inference), 3→G (Inference), 4→B (Inference), 5→D (Inference) (C and F are distractors) Discussion: Blank 1 requires an answer which makes an overall broad statement to introduce the topic. Working backwards, this requires summarizing or finding a broad topic given the latter sentences. Table 14: Representative examples with diverse reasoning types
2020
502
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5684–5696 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5684 Selective Question Answering under Domain Shift Amita Kamath Robin Jia Percy Liang Computer Science Department, Stanford University {kamatha, robinjia, pliang}@cs.stanford.edu Abstract To avoid giving wrong answers, question answering (QA) models need to know when to abstain from answering. Moreover, users often ask questions that diverge from the model’s training data, making errors more likely and thus abstention more critical. In this work, we propose the setting of selective question answering under domain shift, in which a QA model is tested on a mixture of in-domain and out-of-domain data, and must answer (i.e., not abstain on) as many questions as possible while maintaining high accuracy. Abstention policies based solely on the model’s softmax probabilities fare poorly, since models are overconfident on out-of-domain inputs. Instead, we train a calibrator to identify inputs on which the QA model errs, and abstain when it predicts an error is likely. Crucially, the calibrator benefits from observing the model’s behavior on out-of-domain data, even if from a different domain than the test data. We combine this method with a SQuADtrained QA model and evaluate on mixtures of SQuAD and five other QA datasets. Our method answers 56% of questions while maintaining 80% accuracy; in contrast, directly using the model’s probabilities only answers 48% at 80% accuracy. 1 Introduction Question answering (QA) models have achieved impressive performance when trained and tested on examples from the same dataset, but tend to perform poorly on examples that are out-of-domain (OOD) (Jia and Liang, 2017; Chen et al., 2017; Yogatama et al., 2019; Talmor and Berant, 2019; Fisch et al., 2019). Deployed QA systems in search engines and personal assistants need to gracefully handle OOD inputs, as users often ask questions that fall outside of the system’s training distribution. While the ideal system would correctly answer all Dataset Distributions Example question Q: What can result from disorders of the immune system? (from SQuAD) Q: John Wickham Legg was recommended by Jenner for the post of medical attendant to which eighth child and youngest son of Queen Victoria and Prince Albert of Saxe-Coburg and Gotha? (from HotpotQA) Q: Capote gained fame with this “other” worldly 1948 novel about a teenager in a crumbling southern mansion. (from SearchQA) Train Calibrate Test Source Source Known OOD Source Unknown OOD Figure 1: Selective question answering under domain shift with a trained calibrator. First, a QA model is trained only on source data. Then, a calibrator is trained to predict whether the QA model was correct on any given example. The calibrator’s training data consists of both previously held-out source data and known OOD data. Finally, the combined selective QA system is tested on a mixture of test data from the source distribution and an unknown OOD distribution. OOD questions, such perfection is not attainable given limited training data (Geiger et al., 2019). Instead, we aim for a more achievable yet still challenging goal: models should abstain when they are likely to err, thus avoiding showing wrong answers to users. This general goal motivates the setting of selective prediction, in which a model outputs both a prediction and a scalar confidence, and abstains on inputs where its confidence is low (El-Yaniv and Wiener, 2010; Geifman and El-Yaniv, 2017). In this paper, we propose the setting of selective question answering under domain shift, which captures two important aspects of real-world QA: (i) test data often diverges from the training distribution, and (ii) systems must know when to abstain. We train a QA model on data from a source distribution, then evaluate selective prediction performance on a dataset that includes samples from both the source distribution and an unknown OOD distribution. This mixture simulates the likely scenario in which users only sometimes ask questions that are covered by the training distribution. While the sys5685 tem developer knows nothing about the unknown OOD data, we allow access to a small amount of data from a third known OOD distribution (e.g., OOD examples that they can foresee). We first show that our setting is challenging because model softmax probabilities are unreliable estimates of confidence on out-of-domain data. Prior work has shown that a strong baseline for indomain selective prediction is MaxProb, a method that abstains based on the probability assigned by the model to its highest probability prediction (Hendrycks and Gimpel, 2017; Lakshminarayanan et al., 2017). We find that MaxProb gives good confidence estimates on in-domain data, but is overconfident on OOD data. Therefore, MaxProb performs poorly in mixed settings: it does not abstain enough on OOD examples, relative to in-domain examples. We correct for MaxProb’s overconfidence by using known OOD data to train a calibrator—a classifier trained to predict whether the original QA model is correct or incorrect on a given example (Platt, 1999; Zadrozny and Elkan, 2002). While prior work in NLP trains a calibrator on in-domain data (Dong et al., 2018), we show this does not generalize to unknown OOD data as well as training on a mixture of in-domain and known OOD data. Figure 1 illustrates the problem setup and how the calibrator uses known OOD data. We use a simple random forest calibrator over features derived from the input example and the model’s softmax outputs. We conduct extensive experiments using SQuAD (Rajpurkar et al., 2016) as the source distribution and five other QA datasets as different OOD distributions. We average across all 20 choices of using one as the unknown OOD dataset and another as the known OOD dataset, and test on a uniform mixture of SQuAD and unknown OOD data. On average, the trained calibrator achieves 56.1% coverage (i.e., the system answers 56.1% of test questions) while maintaining 80% accuracy on answered questions, outperforming MaxProb with the same QA model (48.2% coverage at 80% accuracy), using MaxProb and training the QA model on both SQuAD and the known OOD data (51.8% coverage), and training the calibrator only on SQuAD data (53.7% coverage). In summary, our contributions are as follows: (1) We propose a novel setting, selective question answering under domain shift, that captures the practical necessity of knowing when to abstain on test data that differs from the training data. (2) We show that QA models are overconfident on out-of-domain examples relative to indomain examples, which causes MaxProb to perform poorly in our setting. (3) We show that out-of-domain data, even from a different distribution than the test data, can improve selective prediction under domain shift when used to train a calibrator. 2 Related Work Our setting combines extrapolation to out-ofdomain data with selective prediction. We also distinguish our setting from the tasks of identifying unanswerable questions and outlier detection. 2.1 Extrapolation to out-of-domain data Extrapolating from training data to test data from a different distribution is an important challenge for current NLP models (Yogatama et al., 2019). Models trained on many domains may still struggle to generalize to new domains, as these may involve new types of questions or require different reasoning skills (Talmor and Berant, 2019; Fisch et al., 2019). Related work on domain adaptation also tries to generalize to new distributions, but assumes some knowledge about the test distribution, such as unlabeled examples or a few labeled examples (Blitzer et al., 2006; Daume III, 2007); we assume no such access to the test distribution, but instead make the weaker assumption of access to samples from a different OOD distribution. 2.2 Selective prediction Selective prediction, in which a model can either predict or abstain on each test example, is a longstanding research area in machine learning (Chow, 1957; El-Yaniv and Wiener, 2010; Geifman and El-Yaniv, 2017). In NLP, Dong et al. (2018) use a calibrator to obtain better confidence estimates for semantic parsing. Rodriguez et al. (2019) use a similar approach to decide when to answer QuizBowl questions. These works focus on training and testing models on the same distribution, whereas our training and test distributions differ. Selective prediction under domain shift. Other fields have recognized the importance of selective prediction under domain shift. In medical applications, models may be trained and tested on different groups of patients, so selective prediction is needed to avoid costly errors (Feng et al., 2019). In computational chemistry, Toplak et al. (2014) use 5686 selective prediction techniques to estimate the set of (possibly out-of-domain) molecules for which a reactivity classifier is reliable. To the best of our knowledge, our work is the first to study selective prediction under domain shift in NLP. Answer validation. Traditional pipelined systems for open-domain QA often have dedicated systems for answer validation—judging whether a proposed answer is correct. These systems often rely on external knowledge about entities (Magnini et al., 2002; Ko et al., 2007). Knowing when to abstain has been part of past QA shared tasks like RespubliQA (Pe˜nas et al., 2009) and QA4MRE (Pe˜nas et al., 2013). IBM’s Watson system for Jeopardy also uses a pipelined approach for answer validation (Gondek et al., 2012). Our work differs by focusing on modern neural QA systems trained end-to-end, rather than pipelined systems, and by viewing the problem of abstention in QA through the lens of selective prediction. 2.3 Related goals and tasks Calibration. Knowing when to abstain is closely related to calibration—having a model’s output probability align with the true probability of its prediction (Platt, 1999). A key distinction is that selective prediction metrics generally depend only on relative confidences—systems are judged on their ability to rank correct predictions higher than incorrect predictions (El-Yaniv and Wiener, 2010). In contrast, calibration error depends on the absolute confidence scores. Nonetheless, we will find it useful to analyze calibration in Section 5.3, as miscalibration on some examples but not others does imply poor relative ordering, and therefore poor selective prediction. Ovadia et al. (2019) observe increases in calibration error under domain shift. Identifying unanswerable questions. In SQuAD 2.0, models must recognize when a paragraph does not entail an answer to a question (Rajpurkar et al., 2018). Sentence selection systems must rank passages that answer a question higher than passages that do not (Wang et al., 2007; Yang et al., 2015). In these cases, the goal is to “abstain” when no system (or person) could infer an answer to the given question using the given passage. In contrast, in selective prediction, the model should abstain when it would give a wrong answer if forced to make a prediction. Outlier detection. We distinguish selective prediction under domain shift from outlier detection, the task of detecting out-of-domain examples (Sch¨olkopf et al., 1999; Hendrycks and Gimpel, 2017; Liang et al., 2018). While one could use an outlier detector for selective classification (e.g., by abstaining on all examples flagged as outliers), this would be too conservative, as QA models can often get a non-trivial fraction of OOD examples correct (Talmor and Berant, 2019; Fisch et al., 2019). Hendrycks et al. (2019b) use known OOD data for outlier detection by training models to have high entropy on OOD examples; in contrast, our setting rewards models for predicting correctly on OOD examples, not merely having high entropy. 3 Problem Setup We formally define the setting of selective prediction under domain shift, starting with some notation for selective prediction in general. 3.1 Selective Prediction Given an input x, the selective prediction task is to output (ˆy, c) where ˆy ∈Y (x), the set of answer candidates, and c ∈R denotes the model’s confidence. Given a threshold γ ∈R, the overall system predicts ˆy if c ≥γ and abstain otherwise. The risk-coverage curve provides a standard way to evaluate selective prediction methods (El-Yaniv and Wiener, 2010). For a test dataset Dtest, any choice of γ has an associated coverage—the fraction of Dtest the model makes a prediction on—and risk—the error on that fraction of Dtest. As γ decreases, coverage increases, but risk will usually also increase. We plot risk versus coverage and evaluate on the area under this curve (AUC), as well as the maximum possible coverage for a desired risk level. The former metric averages over all γ, painting an overall picture of selective prediction performance, while the latter evaluates at a particular choice of γ corresponding to a specific level of risk tolerance. 3.2 Selective Prediction under Domain Shift We deviate from prior work by considering the setting where the model’s training data Dtrain and test data Dtest are drawn from different distributions. As our experiments demonstrate, this setting is challenging because standard QA models are overconfident on out-of-domain inputs. To formally define our setting, we specify three 5687 data distributions. First, psource is the source distribution, from which a large training dataset Dtrain is sampled. Second, qunk is an unknown OOD distribution, representing out-of-domain data encountered at test time. The test dataset Dtest is sampled from ptest, a mixture of psource and qunk: ptest = αpsource + (1 −α)qunk (1) for α ∈(0, 1). We choose α = 1 2, and examine the effect of changing this ratio in Section 5.8. Third, qknown is a known OOD distribution, representing examples not in psource but from which the system developer has a small dataset Dcalib. 3.3 Selective Question Answering While our framework is general, we focus on extractive question answering, as exemplified by SQuAD (Rajpurkar et al., 2016), due to its practical importance and the diverse array of available QA datasets in the same format. The input x is a passage-question pair (p, q), and the set of answer candidates Y (x) is all spans of the passage p. A base model f defines a probability distribution f(y | x) over Y (x). All selective prediction methods we consider choose ˆy = arg maxy′∈Y (x) f(y′ | x), but differ in their associated confidence c. 4 Methods Recall that our setting differs from the standard selective prediction setting in two ways: unknown OOD data drawn from qunk appears at test time, and known OOD data drawn from qknown is available to the system. Intuitively, we expect that systems must use the known OOD data to generalize to the unknown OOD data. In this section, we present three standard selective prediction methods for indomain data, and show how they can be adapted to use data from qknown. 4.1 MaxProb The first method, MaxProb, directly uses the probability assigned by the base model to ˆy as an estimate of confidence. Formally, MaxProb with model f estimates confidence on input x as: cMaxProb = f(ˆy | x) = max y′∈Y (x) f(y′ | x). (2) MaxProb is a strong baseline for our setting. Across many tasks, MaxProb has been shown to distinguish in-domain test examples that the model gets right from ones the model gets wrong (Hendrycks and Gimpel, 2017). MaxProb is also a strong baseline for outlier detection, as it is lower for out-of-domain examples than in-domain examples (Lakshminarayanan et al., 2017; Liang et al., 2018; Hendrycks et al., 2019b). This is desirable for our setting: models make more mistakes on OOD examples, so they should abstain more on OOD examples than in-domain examples. MaxProb can be used with any base model f. We consider two such choices: a model fsrc trained only on Dtrain, or a model fsrc+known trained on the union of Dtrain and Dcalib. 4.2 Test-time Dropout For neural networks, another standard approach to estimate confidence is to use dropout at test time. Gal and Ghahramani (2016) showed that dropout gives good confidence estimates on OOD data. Given an input x and model f, we compute f on x with K different dropout masks, obtaining prediction distributions ˆp1, . . . , ˆpK, where each ˆpi is a probability distribution over Y (x). We consider two statistics of these ˆpi’s that are commonly used as confidence estimates. First, we take the mean of ˆpi(ˆy) across all i (Lakshminarayanan et al., 2017): cDropoutMean = 1 K K X i=1 ˆpi(ˆy). (3) This can be viewed as ensembling the predictions across all K dropout masks by averaging them. Second, we take the negative variance of the ˆpi(ˆy)’s (Feinman et al., 2017; Smith and Gal, 2018): cDropoutVar = −Var[ˆp1(ˆy), . . . , ˆpK(ˆy)]. (4) Higher variance corresponds to greater uncertainty, and hence favors abstaining. Like MaxProb, dropout can be used either with f trained only on Dtrain, or on both Dtrain and the known OOD data. Test-time dropout has practical disadvantages compared to MaxProb. It requires access to internal model representations, whereas MaxProb only requires black box access to the base model (e.g., API calls to a trained model). Dropout also requires K forward passes of the base model, leading to a K-fold increase in runtime. 4.3 Training a calibrator Our final method trains a calibrator to predict when a base model (trained only on data from psource) is 5688 correct (Platt, 1999; Dong et al., 2018). We differ from prior work by training the calibrator on a mixture of data from psource and qknown, anticipating the test-time mixture of psource and qunk. More specifically, we hold out a small number of psource examples from base model training, and train the calibrator on the union of these examples and the qknown examples. We define cCalibrator to be the prediction probability of the calibrator. The calibrator itself could be any binary classification model. We use a random forest classifier with seven features: passage length, the length of the predicted answer ˆy, and the top five softmax probabilities output by the model. These features require only a minimal amount of domain knowledge to define. Rodriguez et al. (2019) similarly used multiple softmax probabilities to decide when to answer questions. The simplicity of this model makes the calibrator fast to train when given new data from qknown, especially compared to retraining the QA model on that data. We experiment with four variants of the calibrator. First, to measure the impact of using known OOD data, we change the calibrator’s training data: it can be trained either on data from psource only, or both psource and qknown data as described. Second, we consider a modification where instead of the model’s probabilities, we use probabilities from the mean ensemble over dropout masks, as described in Section 4.2, and also add cDropoutVar as a feature. As discussed above, dropout features are costly to compute and assume white-box access to the model, but may result in better confidence estimates. Both of these variables can be changed independently, leading to four configurations. 5 Experiments and Analysis 5.1 Experimental Details Data. We use SQuAD 1.1 (Rajpurkar et al., 2016) as the source dataset and five other datasets as OOD datasets: NewsQA (Trischler et al., 2017), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017), HotpotQA (Yang et al., 2018), and Natural Questions (Kwiatkowski et al., 2019).1 These are all extractive question answering datasets where all questions are answerable; however, they vary widely in the nature of passages (e.g., Wikipedia, news, web snippets), questions (e.g., Jeopardy and trivia questions), and relationship between pas1We consider these different datasets to represent different domains, hence our usage of the term “domain shift.” sages and questions (e.g., whether questions are written based on passages, or passages retrieved based on questions). We used the preprocessed data from the MRQA 2019 shared task (Fisch et al., 2019). For HotpotQA, we focused on multi-hop questions by selecting only “hard” examples, as defined by Yang et al. (2018). In each experiment, two different OOD datasets are chosen as qknown and qunk. All results are averaged over all 20 such combinations, unless otherwise specified. We sample 2,000 examples from qknown for Dcalib, and 4,000 SQuAD and 4,000 qunk examples for Dtest. We evaluate using exact match (EM) accuracy, as defined by SQuAD (Rajpurkar et al., 2016). Additional details can be found in Appendix A.1. QA model. For our QA model, we use the BERTbase SQuAD 1.1 model trained for 2 epochs (Devlin et al., 2019). We train six models total: one fsrc and five fsrc+known’s, one for each OOD dataset. Selective prediction methods. For test-time dropout, we use K = 30 different dropout masks, as in Dong et al. (2018). For our calibrator, we use the random forest implementation from Scikitlearn (Pedregosa et al., 2011). We train on 1,600 SQuAD examples and 1,600 known OOD examples, and use the remaining 400 SQuAD and 400 known OOD examples as a validation set to tune calibrator hyperparameters via grid search. We average our results over 10 random splits of this data. When training the calibrator only on psource, we use 3,200 SQuAD examples for training and 800 for validation, to ensure equal dataset sizes. Additional details can be found in Appendix A.2. 5.2 Main results Training a calibrator with qknown outperforms other methods. Table 1 compares all methods that do not use test-time dropout. Compared to MaxProb with fsrc+known, the calibrator has 4.3 points and 6.7 points higher coverage at 80% and 90% accuracy respectively, and 1.1 points lower AUC.2 This demonstrates that training a calibrator is a better use of known OOD data than training a QA model. The calibrator trained on both psource and qknown also outperforms the calibrator trained on psource alone by 2.4% coverage at 80% accuracy. All methods perform far worse than the optimal selective predictor with the given base model, though 295% confidence interval is [1.01, 1.69], using the paired bootstrap test with 1000 bootstrap samples. 5689 AUC ↓ Cov @ Acc=80% ↑ Cov @ Acc=90% ↑ Train QA model on SQuAD MaxProb Calibrator (psource only) Calibrator (psource and qknown) Best possible 20.54 19.27 18.47 9.64 48.23 53.67 56.06 74.92 21.07 26.68 29.42 66.59 Train QA model on SQuAD + known OOD MaxProb Best possible 19.61 8.83 51.75 76.80 22.76 68.26 Table 1: Results for methods without test-time dropout. The calibrator with access to qknown outperforms all other methods. ↓: lower is better. ↑: higher is better. AUC ↓ Cov @ Acc=80% ↑ Cov @ Acc=90% ↑ Train QA model on SQuAD Test-time dropout (–var) Test-time dropout (mean) Calibrator (psource only) Calibrator (psource and qknown) Best possible 28.13 18.35 17.84 17.31 9.64 24.50 57.49 58.35 59.99 74.92 15.40 29.55 34.27 34.99 66.59 Train QA model on SQuAD + known OOD Test-time dropout (–var) Test-time dropout (mean) Best possible 26.67 17.72 8.83 26.74 59.60 76.80 15.95 30.40 68.26 Table 2: Results for methods that use test-time dropout. Here again, the calibrator with access to qknown outperforms all other methods. achieving this bound may not be realistic.3 Test-time dropout improves results but is expensive. Table 2 shows results for methods that use test-time dropout, as described in Section 4.2. The negative variance of ˆpi(ˆy)’s across dropout masks serves poorly as an estimate of confidence, but the mean performs well. The best performance is attained by the calibrator using dropout features, which has 3.9% higher coverage at 80% accuracy than the calibrator with non-dropout features. Since test-time dropout introduces substantial (i.e., Kfold) runtime overhead, our remaining analyses focus on methods without test-time dropout. The QA model has lower non-trivial accuracy on OOD data. Next, we motivate our focus on selective prediction, as opposed to outlier detection, by showing that the QA model still gets a non-trivial fraction of OOD examples correct. Table 3 shows the (non-selective) exact match scores 3As the QA model has fixed accuracy < 100% on Dtest, it is impossible to achieve 0% risk at 100% coverage. Figure 2: Area under the risk-coverage curve as a function of how much data from qknown is available. At all points, using data from qknown to train the calibrator is more effective than using it for QA model training. for all six QA models used in our experiments on all datasets. All models get around 80% accuracy on SQuAD, and around 40% to 50% accuracy on most OOD datasets. Since OOD accuracies are much higher than 0%, abstaining on all OOD examples would be overly conservative.4 At the same time, since OOD accuracy is worse than in-domain accuracy, a good selective predictor should answer more in-domain examples and fewer OOD examples. Training on 2,000 qknown examples does not significantly help the base model extrapolate to other qunk distributions. Results hold across different amounts of known OOD data. As shown in Figure 2, across all amounts of known OOD data, using it to train and validate the calibrator (in an 80–20 split) performs better than adding all of it to the QA training data and using MaxProb. 5.3 Overconfidence of MaxProb We now show why MaxProb performs worse in our setting compared to the in-domain setting: it is miscalibrated on out-of-domain examples. Figure 3a shows that MaxProb values are generally lower for OOD examples than in-domain examples, following previously reported trends (Hendrycks and Gimpel, 2017; Liang et al., 2018). However, the MaxProb values are still too high out-of-domain. Figure 3b shows that MaxProb is not well calibrated: it is underconfident in-domain, and overconfident out-of-domain.5 For example, for a Max4In Section A.3, we confirm that an outlier detector does not achieve good selective prediction performance. 5The in-domain underconfidence is because SQuAD (and some other datasets) provides only one answer at training time, but multiple answers are considered correct at test time. In Ap5690 Train Data ↓/ Test Data → SQuAD TriviaQA HotpotQA NewsQA Natural Questions SearchQA SQuAD only 80.95 48.43 44.88 40.45 42.78 17.98 SQuAD + 2K TriviaQA 81.48 (50.50) 43.95 39.15 47.05 25.23 SQuAD + 2K HotpotQA 81.15 49.35 (53.60) 39.85 48.18 24.40 SQuAD + 2K NewsQA 81.50 50.18 42.88 (44.00) 47.08 20.40 SQuAD + 2K NaturalQuestions 81.48 51.43 44.38 40.90 (54.85) 25.95 SQuAD + 2K SearchQA 81.60 56.58 44.30 40.15 47.05 (59.80) Table 3: Exact match accuracy for all six QA models on all six test QA datasets. Training on Dcalib improves accuracy on data from the same dataset (diagonal), but generally does not improve accuracy on data from qunk. (a) (b) (c) (d) Figure 3: MaxProb is lower on average for OOD data than in-domain data (a), but it is still overconfident on OOD data: when plotting the true probability of correctness vs. MaxProb (b), the OOD curve is below the y = x line, indicating MaxProb overestimates the probability that the prediction is correct. The calibrator assigns lower confidence on OOD data (c) and has a smaller gap between in-domain and OOD curves (d), indicating improved calibration. Prob of 0.6, the model is about 80% likely to get the question correct if it came from SQuAD (indomain), and 45% likely to get the question correct if it was OOD. When in-domain and OOD examples are mixed at test time, MaxProb therefore does not abstain enough on the OOD examples. Figure 3d shows that the calibrator is better calibrated, even though it is not trained on any unknown OOD data. In Appendix A.5, we show that the calibrator abstains on more OOD examples than MaxProb. Our finding that the BERT QA model is not overconfident in-domain aligns with Hendrycks et al. (2019a), who found that pre-trained computer vision models are better calibrated than models trained from scratch, as pre-trained models can be pendix A.4, we show that removing multiple answers makes MaxProb well-calibrated in-domain; it stays overconfident out-of-domain. trained for fewer epochs. Our QA model is only trained for two epochs, as is standard for BERT. Our findings also align with Ovadia et al. (2019), who find that computer vision and text classification models are poorly calibrated out-of-domain even when well-calibrated in-domain. Note that miscalibration out-of-domain does not imply poor selective prediction on OOD data, but does imply poor selective prediction in our mixture setting. 5.4 Extrapolation between datasets We next investigated how choice of qknown affects generalization of the calibrator to qunk. Figure 4 shows the percentage reduction between MaxProb and optimal AUC achieved by the trained calibrator. The calibrator outperforms MaxProb over all dataset combinations, with larger gains when qknown and qunk are similar. For example, samples from TriviaQA help generalization to SearchQA and vice versa; both use web snippets as passages. Samples from NewsQA, the only other nonWikipedia dataset, are also helpful for both. On the other hand, no other dataset significantly helps generalization to HotpotQA, likely due to HotpotQA’s unique focus on multi-hop questions. 5.5 Calibrator feature ablations We determine the importance of each feature of the calibrator by removing each of its features individually, leaving the rest. From Table 4, we see that the most important features are the softmax probabilities and the passage length. Intuitively, passage length is meaningful both because longer passages have more answer candidates, and because passage length differs greatly between different domains. 5.6 Error analysis We examined calibrator errors on two pairs of qknown and qunk—one similar pair of datasets and one dissimilar. For each, we sampled 100 errors in which the system confidently gave a wrong answer (overconfident), and 100 errors in which the sys5691 Figure 4: Results for different choices of qknown (y-axis) and qunk (x-axis). For each pair, we report the percent AUC improvement of the trained calibrator over MaxProb, relative to the total possible improvement. Datasets that use similar passages (e.g., SearchQA and TriviaQA) help each other the most. Main diagonal elements (shaded) assume access to qunk (see Section 5.9). AUC ↓ Cov @ Acc=80% ↑ Cov @ Acc=90% ↑ All features –Top softmax probability –2nd:5th highest softmax probabilities –All softmax probabilities –Context length –Prediction length 18.47 18.61 19.11 26.41 19.79 18.6 56.06 55.46 54.29 24.57 51.73 55.67 29.42 29.27 26.67 0.08 24.24 29.30 Table 4: Performance of the calibrator as each of its features is removed individually, leaving the rest. The base model’s softmax probabilities are important features, as is passage length. tem abstained but would have gotten the question correct if it had answered (underconfident). These were sampled from the 1000 most overconfident or underconfident errors, respectively. qknown = NewsQA, qunk = TriviaQA. These two datasets are from different non-Wikipedia sources. 62% of overconfidence errors are due to the model predicting valid alternate answers, or span mismatches—the model predicts a slightly different span than the gold span, and should be considered correct; thus the calibrator was not truly overconfident. This points to the need to improve QA evaluation metrics (Chen et al., 2019). 45% of underconfidence errors are due to the passage requiring coreference resolution over long distances, including with the article title. Neither SQuAD nor NewsQA passages have coreference chains as long or contain titles, so it is unsurprising that the calibrator struggles on these cases. Another 25% of underconfidence errors were cases in which there was insufficient evidence in the paragraph to answer the question (as TriviaQA was constructed via distant supervision), so the calibrator was not incorrect to assign low confidence. 16% of all underconfidence errors also included phrases that would not be common in SQuAD and NewsQA, such as using “said bye bye” for “banned.” qknown = NewsQA, qunk = HotpotQA. These two datasets are dissimilar from each other in multiple ways. HotpotQA uses short Wikipedia passages and focuses on multi-hop questions; NewsQA has much longer passages from news articles and does not focus on multi-hop questions. 34% of the overconfidence errors are due to valid alternate answers or span mismatches. On 65% of the underconfidence errors, the correct answer was the only span in the passage that could plausibly answer the question, suggesting that the model arrived at the answer due to artifacts in HotpotQA that facilitate guesswork (Chen and Durrett, 2019; Min et al., 2019). In these situations, the calibrator’s lack of confidence is therefore justifiable. 5.7 Relationship with Unanswerable Questions We now study the relationship between selective prediction and identifying unanswerable questions. Unanswerable questions do not aid selective prediction. We trained a QA model on SQuAD 2.0 (Rajpurkar et al., 2018), which augments SQuAD 1.1 with unanswerable questions. Our trained calibrator with this model gets 18.38 AUC, which is very close to the 18.47 for the model trained on SQuAD 1.1 alone. MaxProb also performed similarly with the SQuAD 2.0 model (20.81 AUC) and SQuAD 1.1 model (20.54 AUC). Selective prediction methods do not identify unanswerable questions. For both MaxProb and our calibrator, we pick a threshold γ′ ∈R and predict that a question is unanswerable if the confidence c < γ′. We choose γ′ to maximize SQuAD 2.0 EM score. Both methods perform poorly: the calibrator (averaged over five choices of qknown) achieves 54.0 EM, while MaxProb achieves 53.1 EM.6 These results only weakly outperform the 6We evaluate on 4000 questions randomly sampled from the SQuAD 2.0 development set. 5692 Figure 5: Difference in AUC between calibrator and MaxProb, as a function of how much of Dtest comes from psource (i.e., SQuAD) instead of qunk, averaged over 5 OOD datasets. The calibrator outperforms MaxProb most when Dtest is a mixture of psource and qunk. majority baseline of 48.9 EM. Taken together, these results indicate that identifying unanswerable questions is a very different task from knowing when to abstain under distribution shift. Our setting focuses on test data that is dissimilar to the training data, but on which the original QA model can still correctly answer a nontrivial fraction of examples. In contrast, unanswerable questions in SQuAD 2.0 look very similar to answerable questions, but a model trained on SQuAD 1.1 gets all of them wrong. 5.8 Changing ratio of in-domain to OOD Until now, we used α = 1 2 both for Dtest and training the calibrator. Now we vary α for both, ranging from using only SQuAD to only OOD data (sampled from qknown for Dcalib and from qunk for Dtest). Figure 5 shows the difference in AUC between the trained calibrator and MaxProb. At both ends of the graph, the difference is close to 0, showing that MaxProb performs well in homogeneous settings. However, when the two data sources are mixed, the calibrator outperforms MaxProb significantly. This further supports our claim that MaxProb performs poorly in mixed settings. 5.9 Allowing access to qunk We note that our findings do not hold in the alternate setting where we have access to samples from qunk (instead of qknown). Training the QA model with this OOD data and using MaxProb achieves average AUC of 16.35, whereas training a calibrator achieves 17.87; unsurprisingly, training on examples similar to the test data is helpful. We do not focus on this setting, as our goal is to build selective QA models for unknown distributions. 6 Discussion In this paper, we propose the setting of selective question answering under domain shift, in which systems must know when to abstain on a mixture of in-domain and unknown OOD examples. Our setting combines two important goals for real-world systems: knowing when to abstain, and handling distribution shift at test time. We show that models are overconfident on OOD examples, leading to poor performance in the our setting, but training a calibrator using other OOD data can help correct for this problem. While we focus on question answering, our framework is general and extends to any prediction task for which graceful handling of out-of-domain inputs is necessary. Across many tasks, NLP models struggle on out-of-domain inputs. Models trained on standard natural language inference datasets (Bowman et al., 2015) generalize poorly to other distributions (Thorne et al., 2018; Naik et al., 2018). Achieving high accuracy on out-of-domain data may not even be possible if the test data requires abilities that are not learnable from the training data (Geiger et al., 2019). Adversarially chosen ungrammatical text can also cause catastrophic errors (Wallace et al., 2019; Cheng et al., 2020). In all these cases, a more intelligent model would recognize that it should abstain on these inputs. Traditional NLU systems typically have a natural ability to abstain. SHRDLU recognizes statements that it cannot parse, or that it finds ambiguous (Winograd, 1972). QUALM answers reading comprehension questions by constructing reasoning chains, and abstains if it cannot find one that supports an answer (Lehnert, 1977). NLP systems deployed in real-world settings inevitably encounter a mixture of familiar and unfamiliar inputs. Our work provides a framework to study how models can more judiciously abstain in these challenging environments. Reproducibility. All code, data and experiments are available on the Codalab platform at https: //bit.ly/35inCah. Acknowledgments. This work was supported by the DARPA ASED program under FA8650-18-27882. We thank Ananya Kumar, John Hewitt, Dan Iter, and the anonymous reviewers for their helpful comments and insights. 5693 References J. Blitzer, R. McDonald, and F. Pereira. 2006. Domain adaptation with structural correspondence learning. In Empirical Methods in Natural Language Processing (EMNLP). S. Bowman, G. Angeli, C. Potts, and C. D. Manning. 2015. A large annotated corpus for learning natural language inference. In Empirical Methods in Natural Language Processing (EMNLP). A. Chen, G. Stanovsky, S. Singh, and M. Gardner. 2019. Evaluating question answering evaluation. In Workshop on Machine Reading for Question Answering (MRQA). D. Chen, A. Fisch, J. Weston, and A. Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Association for Computational Linguistics (ACL). J. Chen and G. Durrett. 2019. Understanding dataset design choices for multi-hop reasoning. In North American Association for Computational Linguistics (NAACL). M. Cheng, J. Yi, H. Zhang, P. Chen, and C. Hsieh. 2020. Seq2Sick: Evaluating the robustness of sequenceto-sequence models with adversarial examples. In Association for the Advancement of Artificial Intelligence (AAAI). C. K. Chow. 1957. An optimum character recognition system using decision functions. In IRE Transactions on Electronic Computers. H. Daume III. 2007. Frustratingly easy domain adaptation. In Association for Computational Linguistics (ACL). J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Association for Computational Linguistics (ACL), pages 4171– 4186. L. Dong, C. Quirk, and M. Lapata. 2018. Confidence modeling for neural semantic parsing. In Association for Computational Linguistics (ACL). M. Dunn, , L. Sagun, M. Higgins, U. Guney, V. Cirik, and K. Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. arXiv. R. El-Yaniv and Y. Wiener. 2010. On the foundations of noise-free selective classification. Journal of Machine Learning Research (JMLR), 11. R. Feinman, R. R. Curtin, S. Shintre, and A. B. Gardner. 2017. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410. J. Feng, A. Sondhi, J. Perry, and N. Simon. 2019. Selective prediction-set models with coverage guarantees. arXiv preprint arXiv:1906.05473. A. Fisch, A. Talmor, R. Jia, M. Seo, E. Choi, and D. Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Workshop on Machine Reading for Question Answering (MRQA). Y. Gal and Z. Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning (ICML). Y. Geifman and R. El-Yaniv. 2017. Selective classification for deep neural networks. In Advances in Neural Information Processing Systems (NeurIPS). A. Geiger, I. Cases, L. Karttunen, and C. Potts. 2019. Posing fair generalization tasks for natural language inference. In Empirical Methods in Natural Language Processing (EMNLP). D. C. Gondek, A. Lally, A. Kalyanpur, J. W. Murdock, P. A. Duboue, L. Zhang, Y. Pan, Z. M. Qiu, and C. Welty. 2012. A framework for merging and ranking of answers in DeepQA. IBM Journal of Research and Development, 56. D. Hendrycks and K. Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations (ICLR). D. Hendrycks, K. Lee, and M. Mazeika. 2019a. Using pre-training can improve model robustness and uncertainty. In International Conference on Machine Learning (ICML). D. Hendrycks, M. Mazeika, and T. Dietterich. 2019b. Deep anomaly detection with outlier exposure. In International Conference on Learning Representations (ICLR). R. Jia and P. Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Empirical Methods in Natural Language Processing (EMNLP). M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Association for Computational Linguistics (ACL). J. Ko, L. Si, and E. Nyberg. 2007. A probabilistic framework for answer selection in question answering. In North American Association for Computational Linguistics (NAACL). T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M. Chang, A. Dai, J. Uszkoreit, Q. Le, and S. Petrov. 2019. Natural questions: a benchmark for question answering research. In Association for Computational Linguistics (ACL). 5694 B. Lakshminarayanan, A. Pritzel, and C. Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems (NeurIPS). W. Lehnert. 1977. The Process of Question Answering. Ph.D. thesis, Yale University. S. Liang, Y. Li, and R. Srikant. 2018. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations (ICLR). B. Magnini, M. Negri, R. Prevete, and H. Tanev. 2002. Is it the right answer? exploiting web redundancy for answer validation. In Association for Computational Linguistics (ACL). S. Min, E. Wallace, S. Singh, M. Gardner, H. Hajishirzi, and L. Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In Association for Computational Linguistics (ACL). A. Naik, A. Ravichander, N. Sadeh, C. Rose, and G. Neubig. 2018. Stress test evaluation for natural language inference. In International Conference on Computational Linguistics (COLING), pages 2340– 2353. Y. Oren, S. Sagawa, T. Hashimoto, and P. Liang. 2019. Distributionally robust language modeling. In Empirical Methods in Natural Language Processing (EMNLP). Y. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. V. Dillon, B. Lakshminarayanan, and J. Snoek. 2019. Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems (NeurIPS). F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research (JMLR), 12. A. Pe˜nas, P. Forner, R. Sutcliffe, ´Alvaro Rodrigo, C. For˘ascu, I. Alegria, D. Giampiccolo, N. Moreau, and P. Osenova. 2009. Overview of ResPubliQA 2009: Question answering evaluation over european legislation. In Cross Language Evaluation Forum. A. Pe˜nas, E. Hovy, P. Forner, ´Alvaro Rodrigo, R. Sutcliffe, and R. Morante. 2013. QA4MRE 2011-2013: Overview of question answering for machine reading evaluation. In Cross Language Evaluation Forum. J. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74. P. Rajpurkar, R. Jia, and P. Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Association for Computational Linguistics (ACL). P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP). P. Rodriguez, S. Feng, M. Iyyer, H. He, and J. Boyd-Graber. 2019. Quizbowl: The case for incremental question answering. arXiv preprint arXiv:1904.04792. B. Sch¨olkopf, R. Williamson, A. Smola, J. ShaweTaylor, and J. Platt. 1999. Support vector method for novelty detection. In Advances in Neural Information Processing Systems (NeurIPS). L. Smith and Y. Gal. 2018. Understanding measures of uncertainty for adversarial example detection. In Uncertainty in Artificial Intelligence (UAI). A. Talmor and J. Berant. 2019. MultiQA: An empirical investigation of generalization and transfer in reading comprehension. In Association for Computational Linguistics (ACL). J. Thorne, A. Vlachos, C. Christodoulopoulos, and A. Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. In North American Association for Computational Linguistics (NAACL). M. Toplak, R. Moˇcnik, M. Polajnar, Z. Bosni´c, L. Carlsson, C. Hasselgren, J. Demˇsar, S. Boyer, B. Zupan, and J. St˚alring. 2014. Assessment of machine learning reliability methods for quantifying the applicability domain of QSAR regression models. Journal of Chemical Information and Modeling, 54. A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman. 2017. NewsQA: A machine comprehension dataset. In Workshop on Representation Learning for NLP. E. Wallace, S. Feng, N. Kandpal, M. Gardner, and S. Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Empirical Methods in Natural Language Processing (EMNLP). M. Wang, N. A. Smith, and T. Mitamura. 2007. What is the jeopardy model? a quasi-synchronous grammar for QA. In Empirical Methods in Natural Language Processing (EMNLP). T. Winograd. 1972. Understanding Natural Language. Academic Press. Y. Yang, W. Yih, and C. Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Empirical Methods in Natural Language Processing (EMNLP), pages 2013–2018. 5695 Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Empirical Methods in Natural Language Processing (EMNLP). D. Yogatama, C. de M. d’Autume, J. Connor, T. Kocisky, M. Chrzanowski, L. Kong, A. Lazaridou, W. Ling, L. Yu, C. Dyer, et al. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373. B. Zadrozny and C. Elkan. 2002. Transforming classifier scores into accurate multiclass probability estimates. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 694–699. A Appendix A.1 Dataset Sources The OOD data used in calibrator training and validation was sampled from MRQA training data, and the SQuAD data for the same was sampled from MRQA validation data, to prevent train/test mismatch for the QA model (Fisch et al., 2019). The test data was sampled from a disjoint subset of the MRQA validation data. A.2 Calibrator Features and Model We ran experiments including question length and word overlap between the passage and question as calibrator features. However, these features did not improve the validation performance of the calibrator. We hypothesize that they may provide misleading information about a given example, e.g., a long question in SQuAD may provide more opportunities for alignment with the paragraph, making it more likely to be answered correctly, but a long question in HotpotQA may contain a conjunction, which is difficult for the SQuAD-trained model to extrapolate to. For the calibrator model, we experimented using an MLP and logistic regression. Both were slightly worse than Random Forest. A.3 Outlier Detection for Selective Prediction In this section, we study whether outlier detection can be used to perform selective prediction. We train an outlier detector to detect whether or not a given input came from the in-domain dataset (i.e., SQuAD) or is out-of-domain, and use its probability of an example being in-domain for selective prediction. The outlier detection model, training data (a mixture of psource and qknown), and features are the same as those of the calibrator. We find Figure 6: When considering only one answer option as correct, MaxProb is well-calibrated in-domain, but is still overconfident out-of-domain. that this method does poorly, achieving an AUC of 24.23, Coverage at 80% Accuracy of 37.91%, and Coverage at 90% Accuracy of 14.26%. This shows that, as discussed in Section 2.3 and Section 5.2, this approach is unable to correctly identify the OOD examples that the QA model would get correct. A.4 Underconfidence of MaxProb on SQuAD As noted in Section 5.3, MaxProb is underconfident on SQuAD examples due to the additional correct answer options given at test time but not at train time. When the test time evaluation is restricted to allow only one correct answer, we find that MaxProb is well-calibrated on SQuAD examples (Figure 6). The calibration of the calibrator improves as well (Figure 7). However, we do not retain this restriction for the experiments, as it diverges from standard practice on SQuAD, and EM over multiple spans is a better evaluation metric since there are often multiple answer spans that are equally correct. A.5 Accuracy and Coverage per Domain Table 1 in Section 5.2 shows the coverage of MaxProb and the calibrator over the mixed dataset Dtest while maintaining 80% accuracy and 90% accuracy. In Table 5, we report the fraction of these answered questions that are in-domain or OOD. We also show the accuracy of the QA model on each portion. Our analysis in Section 5.3 indicated that MaxProb was overconfident on OOD examples, which we expect would make it answer too many OOD questions and too few in-domain questions. Indeed, 5696 Figure 7: When considering only one answer option as correct, the calibrator is almost perfectly calibrated on both in-domain and out-of-domain examples. at 80% accuracy, 62% of the examples MaxProb answers are in-domain, compared to 68% for the calibrator. This demonstrates that the calibrator improves over MaxProb by answering more indomain questions, which it can do because it is less overconfident on the OOD questions. MaxProb Accuracy MaxProb Coverage Calibrator Accuracy Calibrator Coverage At 80% Accuracy in-domain 92.45 61.59 89.09 67.57 OOD 58.00 38.41 59.55 32.43 At 90% Accuracy in-domain 97.42 67.85 94.35 78.72 OOD 71.20 32.15 72.30 21.28 Table 5: Per-domain accuracy and coverage values of MaxProb and the calibrator (psource and qknown) at 80% and 90% Accuracy on Dtest.
2020
503
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5697–5708 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5697 The Cascade Transformer: an Application for Efficient Answer Sentence Selection Luca Soldaini Amazon Alexa Manhattan Beach, CA, USA [email protected] Alessandro Moschitti Amazon Alexa Manhattan Beach, CA, USA [email protected] Abstract Large transformer-based language models have been shown to be very effective in many classification tasks. However, their computational complexity prevents their use in applications requiring the classification of a large set of candidates. While previous works have investigated approaches to reduce model size, relatively little attention has been paid to techniques to improve batch throughput during inference. In this paper, we introduce the Cascade Transformer, a simple yet effective technique to adapt transformer-based models into a cascade of rankers. Each ranker is used to prune a subset of candidates in a batch, thus dramatically increasing throughput at inference time. Partial encodings from the transformer model are shared among rerankers, providing further speed-up. When compared to a state-of-the-art transformer model, our approach reduces computation by 37% with almost no impact on accuracy, as measured on two English Question Answering datasets. 1 Introduction Recent research has shown that transformer-based neural networks can greatly advance the state of the art over many natural language processing tasks. Efforts such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019c), XLNet (Dai et al., 2019), and others have led to major advancements in several NLP subfields. These models are able to approximate syntactic and semantic relations between words and their compounds by pre-training on copious amounts of unlabeled data (Clark et al., 2019; Jawahar et al., 2019). Then, they can easily be applied to different tasks by just fine-tuning them on training data from the target domain/task (Liu et al., 2019a; Peters et al., 2019). The impressive effectiveness of transformer-based neural networks can be partially attributed to their large number of parameters (ranging from 110 million for “base” models to over 8 billion (Shoeybi et al., 2019)); however, this also makes them rather expensive in terms of computation time and resources. Being aware of this problem, the research community has been developing techniques to prune unnecessary network parameters (Lan et al., 2019; Sanh et al., 2019) or optimize the transformer architecture (Zhang et al., 2018; Xiao et al., 2019). In this paper, we propose a completely different approach for increasing the efficiency of transformer models, which is orthogonal to previous work, and thus can be applied in addition to any of the methods described above. Its main idea is that a large class of NLP problems requires choosing one correct candidate among many. For some applications, this often entails running the model over hundreds or thousands of instances. However, it is well-known that, in many cases, some candidates can be more easily excluded from the optimal solution (Land and Doig, 1960), i.e., they may require less computation. In the case of hierarchical transformer models, this property can be exploited by using a subset of model layers to score a significant portion of candidates, i.e., those that can be more easily excluded from search. Additionally, the hierarchical structure of transformer models intuitively enables the re-use of the computation of lower blocks to feed the upper blocks. Following the intuition above, this work aims at studying how transformer models can be cascaded to efficiently find the max scoring elements among a large set of candidates. More specifically, the contributions of this paper are: First, we build a sequence of rerankers SRN = {R1, R2, ..., RN} of different complexity, which process the candidates in a pipeline. Each reranker at position i takes the set of candidates selected by (i −1)-th reranker and provides top ki candidates to the reranker of position i + 1. By requiring that ki < ki−1 ∀i = 1, . . . , N −1, this approach 5698 allows us to save computation time from the more expensive rerankers by progressively reducing the number of candidates at each step. We build Ri using transformer networks of 4, 6, 8, 10, and 12 blocks from RoBERTa pre-trained models. Second, we introduce a further optimization on SRN to increase its efficiency based on the observation that models Ri in SRN process their input independently. In contrast, we propose the Cascade Transformer (CT), a sequence of rerankers built on top of a single transformer model. Rerankers R1, . . . , RN are obtained by adding small feedforward classification networks at different transformer block positions; therefore, the partial encodings of the transformer blocks are used as both input to reranker Ri, as well as to subsequent transformer encoding blocks. This allows us to efficiently re-use partial results consumed by Ri for rankers Ri+1, . . . , RN. To enable this approach, the parameters of all rerankers must be compatible. Thus, we trained CT in a multi-task learning fashion, alternating the optimization for different i, i.e., the layers of Ri are affected by the back-propagation of its loss as well as by the loss of Rj, with j ≤i. Finally, as a test case for CT, we target Answer Sentence Selection (AS2), a well-known task in the domain of Question Answering (QA). Given a question and a set of sentence candidates (e.g., retrieved by a search engine), this task consists in selecting sentences that correctly answer the question. We tested our approach on two different datasets: (i) ASNQ, recently made available by Garg et al. (2020); and (ii) a benchmark dataset built from a set of anonymized questions asked to Amazon Alexa. Our code, ASNQ split, and models trained on ASNQ are publicly available.1 Our experimental results show that: (i) The selection of different ki for SRN determines different trade-off points between efficiency and accuracy. For example, it is possible to reduce the overall computation by 10% with just 1.9% decrease in accuracy. (ii) Most importantly, the CT approach largely improves over SR, reducing the cost by 37% with almost no loss in accuracy. (iii) The rerankers trained through our cascade approach achieve equivalent or better performance than transformer models trained independently. Finally, (iv) our results suggest that CT can be used with other 1https://github.com/alexa/ wqa-cascade-transformers NLP tasks that require candidate ranking, e.g., parsing, summarization, and many other structured prediction tasks. 2 Related Work In this section, we first summarize related work for sequential reranking of passages and documents, then we focus on the latest methods for AS2, and finally, we discuss the latest techniques for reducing transformer complexity. Reranking in QA and IR The approach introduced in this paper is inspired by our previous work (Matsubara et al., 2020); there, we used a fast AS2 neural model to select a subset of instances to be input to a transformer model. This reduced the computation time of the latter up to four times, preserving most accuracy. Before our paper, the main work on sequential rankers originated from document retrieval research. For example, Wang et al. (2011) formulated and developed a cascade ranking model that improved both top-k ranked effectiveness and retrieval efficiency. Dang et al. (2013) proposed two stage approaches using a limited set of textual features and a final model trained using a larger set of query- and document-dependent features. Wang et al. (2016) focused on quickly identifying a set of good candidate documents that should be passed to the second and further cascades. Gallagher et al. (2019) presented a new general framework for learning an end-to-end cascade of rankers using back-propagation. Asadi and Lin (2013) studied effectiveness/efficiency trade-offs with three candidate generation approaches. While these methods are aligned with our approach, they target document retrieval, which is a very different setting. Further, they only used linear models or simple neural models. Agarwal et al. (2012) focused on AS2, but just applied linear models. Answer Sentence Selection (AS2) In the last few years, several approaches have been proposed for AS2. For example, Severyn and Moschitti (2015) applied CNN to create question and answer representations, while others proposed interweighted alignment networks (Shen et al., 2017; Tran et al., 2018; Tay et al., 2018). The use of compare and aggregate architectures has also been extensively evaluated (Wang and Jiang, 2016; Bian et al., 2017; Yoon et al., 2019). This family of approaches uses a shallow attention mechanism 5699 over the question and answer sentence embeddings. Finally, Tayyar Madabushi et al. (2018) exploited fine-grained question classification to further improve answer selection. Transformer models have been fine-tuned on several tasks that are closely related to AS2. For example, they were used for machine reading (Devlin et al., 2019; Yang et al., 2019a; Wang et al., 2019), ad-hoc document retrieval (Yang et al., 2019b; MacAvaney et al., 2019), and semantic understanding (Liu et al., 2019b) tasks to obtain significant improvement over previous neural methods. Recently, Garg et al. (2020) applied transformer models, obtaining an impressive boost of the state of the art for AS2 tasks. Reducing Transformer Complexity The high computational cost of transformer models prevents their use in many real-word applications. Some proposed solutions rely on leveraging knowledge distillation in the pre-training step, e.g., (Sanh et al., 2019), or used parameter reduction techniques (Lan et al., 2019) to reduce inference cost. However, the effectiveness of these approaches varies depending on the target task they have been applied to. Others have investigated methods to reduce inference latency by modifying how self-attention operates, either during encoding (Child et al., 2019; Guo et al., 2019b), or decoding (Xiao et al., 2019; Zhang et al., 2018). Overall, all these solutions are mostly orthogonal to our approach, as they change the architecture of transformer cells rather than efficiently re-using intermediate results. With respect to the model architecture, our approach is similar to probing models2 (Adi et al., 2017; Liu et al., 2019a; Hupkes et al., 2018; Belinkov et al., 2017), as we train classification layers based on partial encoding on the input sequence. However, (i) our intermediate classifiers are integral part of the model, rather than being trained on frozen partial encodings, and (ii) we use these classifiers not to inspect model properties, but rather to improve inference throughput. Our apporach also shares some similarities with student-teacher (ST) approaches for self-training (Yarowsky, 1995; McClosky et al., 2006). Under this setting, a model is used both as a “teacher” (which makes predictions on unlabeled data to obtain automatic labels) and as a “student” (which learns both from gold standard and automatic labels). In recent years, many variants of ST have 2Also known as auxiliary or diagnostic classifiers. been proposed, including treating teacher predictions as soft labels (Hinton et al., 2015), masking part of the label (Clark et al., 2018), or use multiple modules for the teacher (Zhou and Li, 2005; Ruder and Plank, 2018). Unlike classic ST approaches, we do not aim at improving the teacher models or creating efficient students; instead, we trained models to be used as sequential ranking components. This may be seen as a generalization of the ST approach, where the student needs to learn a simpler task than the teacher. However, our approach is significantly different from the traditional ST setting, which our preliminary investigation showed to be not very effective. 3 Preliminaries and Task Definition We first formalize the problem of selecting the most likely element in a set as a reranking problem; then, we define sequential reranking (SR); finally, we contextualize AS2 task in such framework. 3.1 Max Element Selection In general, a large class of NLP (and other) problems can be formulated as a max element selection task: given a query q and a set of candidates A = {a1, .., an}, select aj that is an optimal element for q. We can model the task as a selector function π : Q × P(A) →A, defined as π(q, A) = aj, where P(A) is the powerset of A, j = argmaxi p(q, ai), and p(q, ai) is the probability of ai to be the required element. p(q, ai) can be estimated using a neural network model. In the case of transformers, said model can be optimized using a point-wise loss, i.e., we only use the target candidate to generate the selection probability. Pairwise or list-wise approaches can still be used (Bian et al., 2017), but (i) they would not change the findings of our study, and (ii) point-wise methods have been shown to achieve competitive performance in the case of transformer models. 3.2 Search with Sequential Reranking (SR) Assuming that no heuristics are available to preselect a subset of most-likely candidates, max element selection requires evaluating each sample using a relevance estimator. Instead of a single estimator, it is often more efficient to use a sequence of rerankers to progressively reduce the number of candidates. We define a reranker as a function R : Q × P(A) →P(A), which takes a subset Σ ⊆ 5700 A, and returns a set of elements, R(q, Σ) = {ai1, ..., aik} ⊂Σ of size k, with the highest probability to be relevant to the query. That is, p(q, a) > p(q, b) ∀a ∈Σ, ∀b ∈A −Σ. Given a sequence of rerankers sorted in terms of computational efficiency, (R1,R2, . ..,RN), we assume that the ranking accuracy, A (e.g., in terms of MAP and MRR), increases in reverse order of the efficiency, i.e., A(Rj) > A(Ri) iff j > i. Then, we define a Sequential Reranker of order N as the composition of N rerankers: SRN(A) = RN ◦RN−1 ◦.. ◦R1(A), where RN can also be the element selector π(q, ·). Each Ri is associated with a different ki = |Ri(·)|, i.e., the number of elements the reranker returns. Depending on the values of ki, SR models with different trade-offs between accuracy and efficiency can be obtained.3 3.3 AS2 Definition The definition of AS2 directly follows from the definition of element selection of Section 3.1, where the query is a natural language question and the elements are answer sentence candidates retrieved with any approach, e.g., using a search engine. 4 SR with transformers In this section, we explain how to exploit the hierarchical architecture of a traditional transformer model to build an SR model. First, we briefly recap how traditional transformer models (we refer to them as “monolithic”) are used for sequence classification, and how to derive a set of sequential rerankers from a pre-trained transformer model (Section 4.1). Then, we introduce our Cascade Transformer (CT) model, a SR model that efficiently uses partial encodings of its input to build a set of sequential rerankers Ri (Section 4.3). Finally, we explain how such model is trained and used for inference in sections 4.3.1 and 4.3.2, respectively. 4.1 Monolithic Transformer Models We first briefly describe the use of transformer models for sequence classification. We call them monolithic as, for all input samples, the computation flows from the first until the last of their layers. Let T = {E; L1, L2, . . . , Ln} be a standard stacked transformer model (Vaswani et al., 2017), where E is the embedding layer, and Li are the 3The design of an end-to-end algorithm to learn the optimal parameter set for a given target trade-off is left as future work. Ž1 (~) Ž2 (~) Ž3 (~) ℎ2 (~),0 Dropping batch element #3 a h(~) _(~) h(~)+1 hƒ  ...  ` ℎ3 (~),0 ℎ1 (~),0  ...  ℎ3 (~),‚ ℎ2 (~),‚ Input batch to CT model ^ Contains 3 sequences Final classification layer Ž2 Ž1  ...  ℎ1 0 ℎ2 0 3‚ 2‚ 1‚ 3 1 2 1 1 1 3 0 2 0 1 0 ℎ1 (~),1 ℎ1 (~),‚ ℎ2 (~),1 ℎ3 (~),1 Figure 1: A visual representation of the Cascade Transformer (CT) model proposed in this paper. Components in yellow represent layers of a traditional transformer model, while elements in purple are unique to CT; input and outputs of the model are shown in blue. In this example, drop rate α= 0.4 causes sample X3 to be removed by partial classifier Cρ(i). transformer layers4 generating contextualized representations for an input sequence; n is typically referred to as the depth of the encoder, i.e., the number of layers. Typical values for n range from 12 to 24, although more recent works have experimented with up to 72 layers (Shoeybi et al., 2019). T can be pre-trained on large amounts of unlabeled text using a masked (Devlin et al., 2019; Liu et al., 2019c) or autoregressive (Yang et al., 2019c; Radford et al., 2019) language modeling objective. Pre-trained language models are fine-tuned for the target tasks using additional layers and data, e.g., a fully connected layer is typically stacked on top of T to obtain a sentence classifier. Formally, given a sequence of input symbols5, X = {x0, x1, . . . , xm}, an encoding H = 4That is, an entire transformer block, constituted by layers for multi-head attention, normalization, feed forward processing and positional embeddings. 5For ranking tasks, the sequence of input symbols is typically a concatenation of the query q and a candidate aj. In 5701 {h0, h1, . . . , hm} is first obtained by recursively applying Hi to the input: H0 = E(X), Hi = Li(Hi−1) ∀i = 1, . . . , n, where H = Hn. Then, the first symbol of the input sequence6 is fed into a sequence of dense feedforward layers D to obtain a final output score, i.e., y = D(h0). D is fine-tuned together with the entire model on a task-specific dataset (a set of question and candidate answer pairs, in our case). 4.2 Transformer-based Sequential Reranker (SR) Models Monolithic transformers can be easily modified or combined to build a sequence of rerankers as described in Seciton 3.2. In our case, we adapt an existing monolithic T to obtain a sequence of N rerankers Ri. Each Ri consists of encoders from T up to layer ρ(i), followed by a classification layer Di, i.e., Ri = {E; L1, . . . , Lρ(i), Di}. For a sequence of input symbols X, all rerankers in the sequence are designed to predict p(q, a), which we indicate as Ri(X) = yρ(i). All rerankers in SRN are trained independently on the target data. In our experiments, we obtained the best performance by setting N = 5 and using the following formula to determine the architecture of each reranker Ri: ρ(i) = 4 + 2· (i −1) ∀i = {1, . . . , 5} In other words, we assemble sequential reranker SR5 using five rerankers built with transformer models of 4, 6, 8, 10 and 12 layers, respectively. This choice is due to the fact that our experimental results seem to indicate that the information in layers 1 to 3 is not structured enough to achieve satisfactory classification performance for our task. This observation is in line with recent works on the effectiveness of partial encoders for semantic tasks similar to AS2 (Peters et al., 2019). 4.3 Cascade Transformer (CT) Models During inference, monolithic transformer models evaluate a sequence X through the entire computation graph to obtain the classification scores Y . order for the model to distinguish between the two, a special token such as “[SEP]” or “</s>” is used. Some models also use a second embedding layer to represent which sequence each symbol comes from. 6Before being processed by a transformer model, sequences are typically prefixed by a start symbol, such as “[CLS]” or “<s>”. This allows transformer models to accumulate knowledge about the entire sequence at this position without compromising token-specific representations (Devlin et al., 2019). This means that when using SRN, examples are processed multiple times by similar layers for different Ri, e.g., for i = 1, all Ri compute the same operations of the first ρ(i) transformer layers, for i = 2, N −1 rerankers compute the same ρ(i) −ρ(i + 1), layers and so on. A more computationally-efficient approach is to share all the common transformer blocks between the different rerankers in SRN. We speed up this computation by using one transformer encoder to implement all required Ri. This can be easily obtained by adding a classification layer Cρ(i) after each ρ(i) layers (see Figure 1). Consequently, given a sample X, the classifiers Cρ(i) produces scores yρ(i) only using a partial encoding. To build a CT model, we use each Cρ(i) to build rerankers Ri, and select the top ki candidates to score with the subsequent rerankers Ri+1. We use the same setting choices of N and ρ(i) described in Section 4.2. Finally, we observed the best performance when all encodings in Hρ(i) are used as input to partial classifier Cρ(i), rather than just the partial encoding of the classification token hρ(i),0. Therefore, we use their average to obtain score yρ(i) = Cρ(i)( 1 m P l=1,..,m hρ(i),l), In line with Kovaleva et al. (2019), we hypothesize that, at lower encoding layers, long dependencies might not be properly accounted in hρ(i),0. However, in our experiments, we found no benefits in further parametrizing this operation, e.g., by either using more complex networks or weighting the average operation. 4.3.1 Training CT The training of the proposed model is conducted in a multi-task fashion. For every mini-batch, we randomly sample one of the rankers Ri (including the final output ranker), calculate its loss against the target labels, and back-propagate its loss throughout the entire model down to the embedding layers. We experimented with several more complex sampling strategies, including a round-robin selection process and a parametrized bias towards early rankers for the first few epochs, but we ultimately found that uniform sampling works best. We also empirically determined that, for all classifiers Cρ(i), backpropagating the loss to the input embeddings, as opposed to stopping it at layer ρ(i −1), is crucial to ensure convergence. A possible explanation could be: enabling each classifier to influence the input representation during backpropagation ensures that later rerankers are more robust against 5702 variance in partial encodings, induced by early classifiers. We experimentally found that if the gradient does not flow throughout the different blocks, the development set performance for later classifiers drops when early classifiers start converging. 4.3.2 Inference Recall that we are interested in speeding up inference for classification tasks such as answer selection, where hundreds of candidates are associated with each question. Therefore, we can assume without loss of generality that each batch of samples B = {X1, . . . , Xb} contains candidate answers for the same question. We use our partial classifiers to throw away a fraction α of candidates, to increase throughput. That is, we discard ki = ⌊α· ki−1⌋candidates, where ⌊·⌋rounds α· ki−1 down to the closest integer. For instance, let α = 0.3, batch size b = 128; further, recall that, in our experiments, a CT consists of 5 cascade rerankers. Then, after layer 4, the size of the batch gets reduced to 90 (⌊0.3· 128⌋= 38 candidates are discarded by the first classifier). After the second classifier (layer 6), ⌊0.3· 90⌋= 27 examples are further removed, for an effective batch size of 63. By layer 12, only 31 samples are left, i.e., the instance number scored by the final classifier is reduced by more than 4 times. Our approach has the effect of improving the throughput of a transformer model by reducing the average batch size during inference: the throughput of any neural model is capped by the maximum number of examples it can process in parallel (i.e., the size of each batch), and said number is usually ceiled by the amount of memory available to the model (e.g., RAM on GPU). The monolithic models have a constant batch size at inference; however, because the batch size for a cascade model varies while processing a batch, we can size our network with respect to its average batch size, thus increasing the number of samples we initially have in a batch. In the example above, suppose that the hardware requirement dictates a maximum batch size of 84 for the monolithic model. As the average batch size for the cascading model is (4· 128 + 2· 90 + 2· 63 + 2· 44 + 2· 28)/12 = 80.2 < 84, we can process a batch of 128 instances without violating memory constrains, increasing throughput by 52%. We remark that using a fixed α is crucial to obtain the performance gains we described: if we were to employ a score-based thresholding apASNQ GPD TRECQA WikiQA TRAIN Questions 57,242 1,000 1,227 873 Avg cand. 413.3 99.8 39.2 9.9 Avg corr. 1.2 4.4 4.8 1.2 DEV Questions 1,336 340 65 126 Avg cand. 403.6 99.7 15.9 9.0 Avg corr. 3.2 2.85 2.9 1.1 TEST Questions 1,336 440 68 243 Avg cand. 400.5 101.1 20.0 9.7 Avg corr. 3.2 8.13 3.4 1.2 Table 1: Datasets statistics: ASNQ and GPD have more sentence candidates than TRECQA and WikiQA. proach (that is, discard all candidates with score below a given threshold), we could not determine the size of batches throughout the cascade, thus making it impossible to efficiently scale our system. On the other hand, we note that nothing in our implementations prevents potentially correct candidates from being dropped when using CT. However, as we will show in Section 5, an opportune choice of a threshold and good accuracy of early classifiers ensure high probability of having at least one positive example in the candidate set for the last classifier of the cascade. 5 Experiments We present three sets of experiments designed to evaluate CT. In the first (Section 5.3), we show that our proposed approach without any selection produces comparable or superior results with respect to the state of the art of AS2, thanks to its stability properties; in the second (Section 5.4), we compare our Cascade Transformer with a vanilla transformer, as well as a sequence of transformer models trained independently; finally, in the third (Section 5.5), we explore the tuning of the drop ratio, α. 5.1 Datasets TRECQA & WikiQA Traditional benchmarks used for AS2, such as TRECQA (Wang et al., 2007) and WikiQA (Yang et al., 2015), typically contain a limited number of candidates for each question. Therefore, while they are very useful to compare accuracy of AS2 systems with the state of the art, they do not enable testing large scale passage reranking, i.e., inference on hundreds or thousand of answer candidates. Therefore, we evaluated our approach (Sec. 4.3) on two datasets: ASNQ, which is publicly available, and our GPD dataset. We still leverage TRECQA and WikiQA to show that that our 5703 cascade system has comparable performance to state-of-the-art transformer models when no filtering is applied. ASNQ The Answer Sentence Natural Questions dataset (Garg et al., 2020) is a large collection (23M samples) of question-answer pairs, which is two orders of magnitude larger than most public AS2 datasets. It was obtained by extracting sentence candidates from the Google Natural Question (NQ) benchmark (Kwiatkowski et al., 2019). Samples in NQ consists of tuples ⟨question, answerlong, answershort, label⟩, where answerlong contains multiple sentences, answershort is fragment of a sentence, and label is a binary value indicating whether answerlong is correct. The positive samples were obtained by extracting sentences from answerlong that contain answershort; all other sentences are labeled as negative. The original release of ANSQ7 only contains train and development splits; we further split the dev. set to both have dev. and test sets. GPD The General Purpose Dataset is part of our efforts to study large scale web QA and evaluate performance of AS2 systems. We built GPD using a search engine to retrieve up to 100 candidate documents for a set of given questions. Then, we extracted all candidate sentences from such documents, and rank them using a vanilla transformer model, such as the one described in Sec. 4.1. Finally, the top 100 ranked sentences were manually annotated as correct or incorrect answers. We measure the accuracy of our approach on ASNQ and GPD using four metrics: Mean Average Precision (MAP), Mean Reciprocal Rank (MRR), Precision at 1 of ranked candidates (P@1), and Normalized Discounted Cumulative Gain at 10 of retrieved candidates (nDCG@10). While the first two metrics capture the overall system performance, the latter two are better suited to evaluate systems with many candidates, as they focus more on Precision. For WikiQA and TRECQA, we use MAP and MRR. 5.2 Models and Training Our models are fine-tuned starting from a pretrained RoBERTa encoder (Liu et al., 2019c). We chose this transformer model over others due to its strong performance on answer selection tasks (Garg et al., 2020). Specifically, we use the BASE 7https://github.com/alexa/wqa_tanda Model WikiQA TRECQA MAP MRR MAP MRR CA1 (Wang and Jiang, 2016) 74.3 75.4 – – CA2 (Yoon et al., 2019) 83.4 84.8 87.5 94.0 TANDABASE (Garg et al., 2020) 88.9 90.1 91.4 95.2 4 layers TANDA 80.5 80.9 77.2 83.1 6 layers TANDA 82.1 82.9 78.5 88.4 8 layers TANDA 85.7 86.7 88.2 94.7 10 layers TANDA 89.0 90.0 90.5 95.9 Our TANDABASE 89.1 90.1 91.6 96.0 CT (4 layers, α = 0.0) 60.1 60.2 67.9 74.7 CT (6 layers, α = 0.0) 79.8 80.3 89.7 95.0 CT (8 layers, α = 0.0) 84.8 85.4 92.3 95.3 CT (10 layers, α = 0.0) 89.7 89.8 92.3 95.6 CT (12 layers, α = 0.0) 89.9 91.0 92.4 96.7 Table 2: Comparison on two AS2 academic datasets. With the exception of a 4-layer transformer, both the partial and final classifiers from CT achieve comparable or better performance than state of the art models. variant (768-dimensional embeddings, 12 layers, 12 heads, and 3072 hidden units), as it is more appropriate for efficient classification. When applicable8, we fine-tune our models using the two-step “transfer and adapt” (TANDA) technique introduced by Garg et al. (2020). As mentioned in Section 4.3, we optimize our model in a multi-task setting; that is, for each minibatch, we randomly sample one of the output layers of the CT classifiers to backpropagate its loss to all layers below. While we evaluated different sampling techniques, we found that a simple uniform distribution is sufficient and allows the model to converge quickly. Our models are optimized using Adam (Kingma and Ba, 2014) using triangular learning rate (Smith, 2017) with a 4, 000 updates ramp-up9, and a peak learning rate lr = 1e−6. Batch size was set to up to 2, 000 tokens per mini-batch for CT models. For the partial and final classifiers, we use 3-layers feedforward modules with with 768 hidden units and tanh activation function. Like the original BERT implementation, we use dropout value of 0.1 on all dense and attention layers. We implemented our system using MxNet 1.5 (Chen et al., 2015) and GluonNLP 0.8.1 (Guo et al., 2019a) on a machine with 8 NVIDIA Tesla V100 GPUs, each with 16GB of memory. 8When fine-tuning on GPD, TRECQA, and WikiQA, we perform a “transfer” step on ASNQ before “adapting” to our target dataset; for ASNQ, we directly fine-tune on it. 9On ASNQ, it is roughly equivalent to ˜ 950k samples or about 4% of the training set. 5704 Method Model α ASNQ GPD Cost reduction per batch MAP nDCG@10 P@1 MRR MAP nDCG@10 P@1 MRR Monolithic transformer (MT) 4 layers TANDA – 31.5 30.8 25.9 30.8 38.9 50.1 40.8 54.0 −67% 6 layers TANDA – 60.2 58.7 47.2 59.2 51.4 64.1 56.1 67.6 −50% 8 layers TANDA – 63.9 62.2 49.2 62.4 56.3 68.7 61.2 70.4 −33% 10 layers TANDA – 65.3 64.5 52.0 64.1 57.2 71.3 64.9 72.7 −20% TANDABASE – 65.5 65.1 52.1 64.7 58.0 72.2 67.5 76.8 baseline Sequential Ranker (SR) MT models, 4 to 12 layers, in sequence 0.3 65.4 65.1 52.1 64.8 55.8 70.2 66.2 74.3 +53% 0.4 64.9 64.2 51.6 64.2 53.8 69.6 65.6 73.0 +18% 0.5 64.6 63.4 50.8 63.5 52.2 68.4 63.0 72.3 −10% Cascade transformer (CT) 4 layers CT 0.0 22.0 19.3 10.2 18.3 32.7 38.9 35.2 42.6 −67% 6 layers CT 0.0 49.1 47.2 32.7 47.7 44.8 56.0 47.3 58.5 −50% 8 layers CT 0.0 62.8 61.5 48.7 61.9 53.8 71.7 61.2 69.1 −33% 10 layers CT 0.0 65.6 65.1 53.0 65.2 55.8 72.0 63.1 72.1 −20% Full CT (12 layers) 0.0 66.3 66.1 53.2 65.4 57.8 71.9 67.5 76.9 −0% 0.3 65.3 65.3 52.9 65.3 55.7 69.8 66.2 75.1 −37% 0.4 64.8 65.0 52.5 64.8 52.8 68.6 65.6 74.3 −45% 0.5 64.1 65.0 52.4 64.5 50.2 66.1 62.4 72.9 −51% Table 3: Comparison of Cascade Transformers with other models on the ASNQ and GPD datasets. “Monolithic transformer” refers to a single transformer model trained independently; “sequential ranker” (ST) is a sequence of monolithic transformer models of size 4, 6, . . . , 12 trained independently; and “Cascade Transformer” (CT) is the approach we propose. This can train models that equal or outperform the state of the art when no drop is applied (i.e., α = 0.0); with drop, they obtain the same performance with 37% to 51% fewer operations. 5.3 Stability Results of Cascade Training In oder to better assess how our training strategy for CT models compare with a monolithic transformer, we evaluated the performance of our system on two well known AS2 datasets, WikiQA and TRECQA. The results of these experiments are presented in Table 2. Note how, in this case, we are not applying any drop to our cascade classifier, as it is not necessary on this dataset: all sentences fit comfortably in one mini batch (see dataset statistics in Table 1), so we would not observe any advantage in pruning candidates. Instead, we focus on evaluating how our training strategy affects performance of partial and final classifiers of a CT model. Our experiment shows that classifiers in a CT model achieve competitive performance with respect to the state of the art: our 12-layer transformer model trained in cascade outperforms TANDABASE by 0.8 and 0.9 absolute points in MAP (0.9 and 0.7 in MRR). 10, 8, and 6 layer models are equally comparable, differing at most by 2.3 absolute MAP points on WikiQA, and outscoring TANDA by up to 11.2 absolute MAP points on TRECQA. However, we observed meaningful differences between the performance of the 4-layers cascade model and its monolithic counterparts. We hypothesize that this is due to the fact that lower layers are not typically well suited for classification when used as part of a larger model (Peters et al., 2019); this observation is reinforced by the fact that the 4 layers TANDA model shown in Table 2 takes four times the number of the iterations of any other model to converge to a local optimum. Overall, these experiments show that our training strategy is not only effective for CT models, but can also produce smaller transformer models with good accuracy without separate fine-tuning. 5.4 Results on Effectiveness of Cascading The main results for our CT approach are presented in Table 3: we compared it with (i) a state-of-the-art monolithic transformer (TANDABASE), (ii) smaller, monolithic transformer models with 4-10 layers, and (iii) a sequential ranker (SR) consisting of 5 monolithic transformer models with 4, 6, 8, 10 and 12 layers trained independently. For CT, we report performance of each classifier individually (layers 4 up to 12, which is equivalent to a full transformer model). We test SR and CT with drop ratio 30%, 40%, 50%. Finally, for each model, we report the relative cost per batch compared to a base transformer model with 12 layers. Overall, we observed that our cascade models are competitive with monolithic transformers on both ASNQ and GPD datasets. In particular, when no selection is applied (α = 0.0), a 12 layer cascade model performs equal or better to TANDABASE: on ASNQ, we improve P@1 by 2.1% (53.2 vs 52.1), and MAP by 1.2% (66.3 vs 65.5); on GDP, we achieve the same P@1 (67.5), and a slightly lower MAP (57.8 vs 58.0). This indicates that, despite the multitasking setup, out method is competitive with the state of the art. 5705 A drop rate α > 0.0 produces a small degradation in accuracy, at most, while significantly reducing the number of operations per batch (−37%). In particular, when α = 0.3, we achieve less than 2% drop in P@1 on GPD, when compared to TANDABASE; on ANSQ, we slightly improve over it (52.9 vs 52.1). We observe a more pronounced drop in performance for MAP, this is to be expected, as intermediate classification layers are designed to drop a significant number of candidates. For larger values of α, such as 0.5, we note that we achieve significantly better performance than monolithic transformer of similar computational cost. For example, CT achieves an 11.2% improvement in P@1 over a 6-layers TANDA model (62.4 vs 56.1) on GPD; a similar improvement is obtained on ANSQ (+11.0%, 52.4 vs 47.2). Finally, our model is also competitive with respect to a sequential transformer with equivalent drop rates, while being between 1.9 to 2.4 times more efficient. This is because an SR model made of independent TANDA models cannot re-use encodings generated by smaller models as CT does. 5.5 Results on Tuning of Drop Ratio α Finally, we examined how different values for drop ratio α affect the performance of CT models. In particular, we performed an exhaustive grid-search on a CT model trained on the GPD dataset for drop ratio values {αp1, αp2, αp3, αp4}, with αpk ∈ {0.1, 0.2, . . . , 0.6}. The performance is reported in Figure 2 with respect to the relative computational cost per batch of a configuration when compared with a TANDABASE model. Overall, we found that CT models are robust with respect to the choice of {αpk}4 k=1. We observe moderate degradation for higher drop ratio values (e.g., P@1 varies from 85.6 to 80.0). Further, as expected, performance increases for models with higher computational cost per batch, although they taper off for CT models with relative cost ≥70%. On the other hand, the grid search results do not seem to suggest an effective strategy to pick optimal values for {αpk}4 k=1, and, in our experiments, we ended up choosing the same values for all drop rates. In the future, we would be like to learn such values while training the cascade model itself. 6 Conclusions and Future Work This work introduces CT, a variant of the traditional transformer model designed to improve inference 64 66 68 70 72 74 MAP 84 86 88 90 MRR 78 80 82 84 86 88 nDCG@10 45% 50% 55% 60% 65% 70% 75% 80% 85% Relative Cost 78 80 82 84 86 Precision@1 Figure 2: Grid search plot on the GPD validation set. Each point corresponds to a configuration of drop ratios {αp1, . . . , αp4} with αpk ∈{0.1, 0.2, . . . , 0.6}; values on the x-axis represent the relative computational cost per batch of a configuration compared to TANDABASE. The three runs reported in Table 3 correspond to ▲ (α = 0.3), ♦(α = 0.4), and (α = 0.5). throughput. Compared to a traditional monolithic stacked transformer model, our approach leverages classifiers placed at different encoding stages to prune candidates in a batch and improve model throughput. Our experiments show that a CT model not only achieves comparable performance to a traditional transformer model while reducing computational cost per batch by over 37%, but also that our training strategy is stable and jointly produces smaller transformer models that are suitable for classification when higher throughput and lower latency goals must be met. In future work, we plan to explore techniques to automatically learn where to place intermediate classifiers, and what drop ratio to use for each one of them. References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In ICLR. 5706 Arvind Agarwal, Hema Raghavan, Karthik Subbian, Prem Melville, Richard D. Lawrence, David C. Gondek, and James Fan. 2012. Learning to rank for robust question answering. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM ’12, pages 833–842, New York, NY, USA. ACM. Nima Asadi and Jimmy Lin. 2013. Effectiveness/efficiency tradeoffs for candidate generation in multi-stage retrieval architectures. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’13, pages 997–1000, New York, NY, USA. ACM. Yonatan Belinkov, Llu´ıs M`arquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017. Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10, Taipei, Taiwan. Asian Federation of Natural Language Processing. Weijie Bian, Si Li, Zhao Yang, Guang Chen, and Zhiqing Lin. 2017. A compare-aggregate model with dynamic-clip attention for answer selection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM ’17, pages 1987–1990, New York, NY, USA. ACM. Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT’s attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Christopher D Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914– 1925. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. CoRR, abs/1901.02860. Van Dang, Michael Bendersky, and W. Bruce Croft. 2013. Two-stage learning to rank for information retrieval. In Proceedings of the 35th European Conference on Advances in Information Retrieval, ECIR’13, pages 423–434, Berlin, Heidelberg. Springer-Verlag. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Luke Gallagher, Ruey-Cheng Chen, Roi Blanco, and J. Shane Culpepper. 2019. Joint optimization of cascade ranking models. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM ’19, pages 15–23, New York, NY, USA. ACM. Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection. In AAAI. Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, et al. 2019a. GluonCV and GluonNLP: Deep learning in computer vision and natural language processing. arXiv preprint arXiv:1907.04433. Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, and Zheng Zhang. 2019b. Startransformer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1315–1325. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and’diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907–926. Ganesh Jawahar, Benoˆıt Sagot, and Djam´e Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651–3657, Florence, Italy. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 5707 Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of bert. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4356–4365. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. A.H. Land and A.G. Doig. 1960. An automatic method for solving discrete programming problems. Econometrica, 28:497–520. Nelson F Liu, Matt Gardner, Yonatan Belinkov, Matthew E Peters, and Noah A Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019b. Multi-task deep neural networks for natural language understanding. In ACL. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019c. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. CEDR: Contextualized embeddings for document ranking. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Yoshitomo Matsubara, Thuy Vu, and Alessandro Moschitti. 2020. Reranking for efficient transformerbased answer selection. In To appear in Proceedings of the 43th international ACM SIGIR conference on research and development in information retrieval. ACM. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 337–344. Association for Computational Linguistics. Matthew Peters, Sebastian Ruder, and Noah A Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. arXiv preprint arXiv:1903.05987. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1044–1054. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 373– 382. ACM. Gehui Shen, Yunlun Yang, and Zhi-Hong Deng. 2017. Inter-weighted alignment network for sentence pair modeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1179–1189, Copenhagen, Denmark. Association for Computational Linguistics. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-LM: Training multi-billion parameter language models using gpu model parallelism. arXiv preprint arXiv:1909.08053. Leslie N Smith. 2017. Cyclical learning rates for training neural networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 464–472. IEEE. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Multi-cast attention networks for retrieval-based question answering and response prediction. CoRR, abs/1806.00778. Harish Tayyar Madabushi, Mark Lee, and John Barnden. 2018. Integrating question classification and deep learning for improved answer selection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3283–3294, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Quan Hung Tran, Tuan Lai, Gholamreza Haffari, Ingrid Zukerman, Trung Bui, and Hung Bui. 2018. The context-dependent additive recurrent neural net. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1274–1283, New Orleans, Louisiana. Association for Computational Linguistics. 5708 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Lidan Wang, Jimmy Lin, and Donald Metzler. 2011. A cascade ranking model for efficient ranked retrieval. In sigir, pages 105–114. Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. What is the Jeopardy model? a quasi-synchronous grammar for QA. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 22–32, Prague, Czech Republic. Association for Computational Linguistics. Qi Wang, Constantinos Dimopoulos, and Torsten Suel. 2016. Fast first-phase candidate generation for cascading rankers. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’16, pages 295–304, New York, NY, USA. ACM. Ran Wang, Haibo Su, Chunye Wang, Kailin Ji, and Jupeng Ding. 2019. To tune or not to tune? how about the best of both worlds? ArXiv, abs/1907.05338. Shuohang Wang and Jing Jiang. 2016. A compareaggregate model for matching text sequences. arXiv preprint arXiv:1611.01747. Tong Xiao, Yinqiao Li, Jingbo Zhu, Zhengtao Yu, and Tongran Liu. 2019. Sharing attention weights for fast transformer. In International Joint Conferences on Artificial Intelligence (IJCAI). Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019a. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 72–77, Minneapolis, Minnesota. Association for Computational Linguistics. Wei Yang, Haotian Zhang, and Jimmy Lin. 2019b. Simple applications of BERT for ad hoc document retrieval. CoRR, abs/1903.10972. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013–2018, Lisbon, Portugal. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019c. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189–196. Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, and Kyomin Jung. 2019. A compareaggregate model with latent clustering for answer selection. CoRR, abs/1905.12897. Biao Zhang, Deyi Xiong, and Jinsong Su. 2018. Accelerating neural transformer via an average attention network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1789–1798. Zhi-Hua Zhou and Ming Li. 2005. Tri-training: Exploiting unlabeled data using three classifiers. IEEE Transactions on knowledge and Data Engineering, 17(11):1529–1541.
2020
504
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5709–5714 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5709 Transformers to Learn Hierarchical Contexts in Multiparty Dialogue for Span-based Question Answering Changmao Li Department of Computer Science Emory University Atlanta, GA, USA [email protected] Jinho D. Choi Department of Computer Science Emory University Atlanta, GA, USA [email protected] Abstract We introduce a novel approach to transformers that learns hierarchical representations in multiparty dialogue. First, three language modeling tasks are used to pre-train the transformers, token- and utterance-level language modeling and utterance order prediction, that learn both token and utterance embeddings for better understanding in dialogue contexts. Then, multitask learning between the utterance prediction and the token span prediction is applied to finetune for span-based question answering (QA). Our approach is evaluated on the FRIENDSQA dataset and shows improvements of 3.8% and 1.4% over the two state-of-the-art transformer models, BERT and RoBERTa, respectively. 1 Introduction Transformer-based contextualized embedding approaches such as BERT (Devlin et al., 2019), XLM (CONNEAU and Lample, 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), and AlBERT (Lan et al., 2019) have re-established the state-of-the-art for practically all question answering (QA) tasks on not only general domain datasets such as SQUAD (Rajpurkar et al., 2016, 2018), MS MARCO (Nguyen et al., 2016), TRIVIAQA (Joshi et al., 2017), NEWSQA (Trischler et al., 2017), or NARRATIVEQA (Koisk et al., 2018), but also multiturn question datasets such as SQA (Iyyer et al., 2017), QUAC (Choi et al., 2018), COQA (Reddy et al., 2019), or CQA (Talmor and Berant, 2018). However, for span-based QA where the evidence documents are in the form of multiparty dialogue, the performance is still poor even with the latest transformer models (Sun et al., 2019; Yang and Choi, 2019) due to the challenges in representing utterances composed by heterogeneous speakers. Several limitations can be expected for language models trained on general domains to process dialogue. First, most of these models are pre-trained on formal writing, which is notably different from colloquial writing in dialogue; thus, fine-tuning for the end tasks is often not sufficient enough to build robust dialogue models. Second, unlike sentences in a wiki or news article written by one author with a coherent topic, utterances in a dialogue are from multiple speakers who may talk about different topics in distinct manners such that they should not be represented by simply concatenating, but rather as sub-documents interconnected to one another. This paper presents a novel approach to the latest transformers that learns hierarchical embeddings for tokens and utterances for a better understanding in dialogue contexts. While fine-tuning for span-based QA, every utterance as well as the question are separated encoded and multi-head attentions and additional transformers are built on the token and utterance embeddings respectively to provide a more comprehensive view of the dialogue to the QA model. As a result, our model achieves a new state-of-the-art result on a span-based QA task where the evidence documents are multiparty dialogue. The contributions of this paper are:1 • New pre-training tasks are introduced to improve the quality of both token-level and utterance-level embeddings generated by the transformers, that better suit to handle dialogue contexts (§2.1). • A new multi-task learning approach is proposed to fine-tune the language model for span-based QA that takes full advantage of the hierarchical embeddings created from the pre-training (§2.2). • Our approach significantly outperforms the previous state-of-the-art models using BERT and RoBERTa on a span-based QA task using dialogues as evidence documents (§3). 1All our resources including the source codes and the dataset with the experiment split are available at https://github.com/emorynlp/friendsqa 5710 Transformer Encoder (TE) Softmax ew 11 es 1 ⋯eμ ij ew 1n ⋯ ec oμ ij ⋯ ⋯ [CLS] s1 w11 w1n μij ⋯ ⋯ sm wm1 wmn ⋯es m ew m1 ew mn ⋯ Transformer Encoder (TE) Softmax ⋯ ⋯ [CLSi] si wi1 μij win ⋯ew in ew i1 es i eμ ik ⋯ ec i oμ ij (a) Token-level MLM (§2.1.1) Transformer Encoder (TE) Softmax ew 11 es 1 ⋯eμ ij ew 1n ⋯ ec oμ ij ⋯ ⋯ [CLS] s1 w11 w1n μij ⋯ ⋯ sm wm1 wmn ⋯es m ew m1 ew mn ⋯ Transformer Encoder (TE) Softmax ⋯ ⋯ [CLSi] si wi1 μij win ⋯ew in ew i1 es i eμ ik ⋯ ec i oμ ij (b) Utterance-level MLM (§2.1.2) Transformer Encoder (TE) Softmax ew 11 es 1 ⋯eμ ij ew 1n ⋯ ec oμ ij ⋯ ⋯ [CLS] s1 w11 w1n μij ⋯ ⋯ sm wm1 wmn ⋯es m ew m1 ew mn ⋯ Transformer Encoder (TE) Softmax ⋯ ⋯ [CLSi] si wi1 μij win ⋯ew in ew i1 es i eμ ik ⋯ ec i oμ ij Transformer Encoder (TE) ⋯ [CLS1] s1 w11 w1n [CLSm] s′ m w′ m1 w′ mn ⋯ ⋯ ⋯ ⋯ [CLSi] ⋯ s′ i w′ i1 ⋯w′ in ⋯ ⋯ TL2 TL1 Softmax oν tc 1 tc m ⋯tc i ⋯ ec 1 ec m ⋯ ⋯ ec i ew 11 es 1 es m ew m1 ew 1n ⋯ew mn ⋯ es i ew i1 ⋯ew in ⋯ ⋯ ⋯ ⋯ (c) Utterance order prediction (§2.1.3) Figure 1: The overview of our models for the three pre-training tasks (Section 2.1). 2 Transformers for Learning Dialogue This section introduces a novel approach for pretraining (Section 2.1) and fine-tuning (Section 2.2) transformers to effectively learn dialogue contexts. Our approach has been evaluated with two kinds of transformers, BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), and shown significant improvement to a question answering task (QA) on multiparty dialogue (Section 3). 2.1 Pre-training Language Models Pre-training involves 3 tasks in sequence, the tokenlevel masked language modeling (MLM; §2.1.1), the utterance-level MLM (§2.1.2), and the utterance order prediction (§2.1.3), where the trained weights from each task are transferred to the next task. Note that the weights of publicly available transformer encoders are adapted to train the tokenlevel MLM, which allows our QA model to handle languages in both dialogues, used as evidence documents, and questions written in formal writing. Transformers from BERT and RoBERTa are trained with static and dynamic MLM respectively, as described by Devlin et al. (2019); Liu et al. (2019). 2.1.1 Token-level Masked LM Figure 1(a) illustrates the token-level MLM model. Let D = {U1, . . . , Um} be a dialogue where Ui = {si, wi1, . . . , win} is the i’th utterance in D, si is the speaker of Ui, and wij is the j’th token in Ui. All speakers and tokens in D are appended in order with the special token CLS, representing the entire dialogue, which creates the input string sequence I = {CLS}⊕U1⊕. . .⊕Un. For every wij ∈I, let Iµ ij = (I \{wij})∪{µij}, where µij is the masked token substituted in place of wij. Iµ ij is then fed into the transformer encoder (TE), which generates a sequence of embeddings {ec} ⊕E1 ⊕. . . ⊕Em where Ei = {es i, ew i1, .., ew in} is the embedding list for Ui, and (ec, es i, ew ij, eµ ij) are the embeddings of (CLS, si, wij, µij) respectively. Finally, eµ ij is fed into a softmax layer that generates the output vector oµ ij ∈R|V | to predict µij, where V is the set of all vocabularies in the dataset.2 2.1.2 Utterance-level Masked LM The token-level MLM (t-MLM) learns attentions among all tokens in D regardless of the utterance boundaries, allowing the model to compare every token to a broad context; however, it fails to catch unique aspects about individual utterances that can be important in dialogue. To learn an embedding for each utterance, the utterance-level MLM model is trained (Figure 1(b)). Utterance embeddings can be used independently and/or in sequence to match contexts in the question and the dialogue beyond the token-level, showing an advantage in finding utterances with the correct answer spans (§2.2.1). 2n: the maximum number of words in every utterance, m: the maximum number of utterances in every dialogue. 5711 Transformer Encoder (TE) TL1 TL2 ec q ec 1 ec m ⋯ eq 1 eq 2 ⋯ eq n ew 11 es 1 ⋯ es m ew m1 ew 1n ⋯ew mn ⋯ ⋯ ⋯ Softmax Softmax e|t ∈ℝd E ∈ℝn×d ou ∈ℝm+1 ol|r ∈ℝn+1 ⋯ [CLS1] s1 w11 w1n [CLSm] sm wm1 wmn ⋯ ⋯ [CLSq] q1 q2 qn ⋯ ⋯ ⋯ MHA tc q tc 1 tc m ⋯ Softmax Eq 1 Eq 2 Eq m ⋯ or 1 or 2 or m ⋯ oℓ 1 oℓ 2 oℓ m ⋯ ou Figure 2: The overview of our fine-tuning model exploiting multi-task learning (Section 2.2). For every utterance Ui, the masked input sequence Iµ ij = {CLSi} ⊕{(Ui \ {wij}) ∪µij} is generated. Note that CLSi now represents Ui instead of D and Iµ ij is much shorter than the one used for t-MLM. Iµ ij is fed into TE, already trained by t-MLM, and the embedding sequence Ei = {ec i, es i, ew i1, .., ew in} is generated. Finally, ec i, instead of eµ ij, is fed into a softmax layer that generates oµ ij to predict µij. The intuition behind the utterance-level MLM is that once ec i learns enough contents to accurately predict any token in Ui, it consists of most essential features about the utterance; thus, ec i can be used as the embedding of Ui. 2.1.3 Utterance Order Prediction The embedding ec i from the utterance-level MLM (u-MLM) learns contents within Ui, but not across other utterances. In dialogue, it is often the case that a context is completed by multiple utterances; thus, learning attentions among the utterances is necessary. To create embeddings that contain crossutterance features, the utterance order prediction model is trained (Figure 1(c)). Let D = D1 ⊕D2 where D1 and D2 comprise the first and the second halves of the utterances in D, respectively. Also, let D′ = D1 ⊕D′ 2 where D′ 2 contains the same set of utterances as D2 although the ordering may be different. The task is whether or not D′ preserves the same order of utterances as D. For each Ui ∈D′, the input Ii = {CLSi}⊕Ui is created and fed into TE, already trained by u-MLM, to create the embeddings Ei = {ec i, es i, ew i1, .., ew in}. The sequence Ec = {ec 1, . . . , ec n} is fed into two transformer layers, TL1 and TL2, that generate the new utterance embedding list T c = {tc 1, . . . , tc n}. Finally, T c is fed into a softmax layer that generates oν ∈R2 to predict whether or not D′ is in order. 2.2 Fine-tuning for QA on Dialogue Fine-tuning exploits multi-task learning between the utterance ID prediction (§2.2.1) and the token span prediction (§2.2.2), which allows the model to train both the utterance- and token-level attentions. The transformer encoder (TE) trained by the utterance order prediction (UOP) is used for both tasks. Given the question Q = {q1, . . . , qn} (qi is the i’th token in Q) and the dialogue D = {U1, . . . , Um}, Q and all U∗are fed into TE that generates Eq = {ec q, eq 1, .., eq n} and Ei = {ec i, es i, ew i1, .., ew in} for Q and every Ui, respectively. 2.2.1 Utterance ID Prediction The utterance embedding list Ec = {ec q, ec 1, .., ec n} is fed into TL1 and TL2 from UOP that generate T c = {tc q, tc 1, .., tc n}. T c is then fed into a softmax layer that generates ou ∈Rm+1 to predict the ID of the utterance containing the answer span if exists; otherwise, the 0’th label is predicted, implying that the answer span for Q does not exist in D. 2.2.2 Token Span Prediction For every Ei, the pair (E′ q, E′ i) is fed into the multihead attention layer, MHA, where E′ q = Eq \ {ec q} and E′ i = Ei \ {ec i}. MHA (Vaswani et al., 2017) then generates the attended embedding sequences, T a 1 , . . . , T a m, where T a i = {ts i, tw i1, .., tw in}. Finally, each T a i is fed into two softmax layers, SL and SR, that generate oℓ i ∈Rn+1 and or i ∈Rn+1 to predict the leftmost and the rightmost tokens in Ui respectively, that yield the answer span for Q. It is possible that the answer spans are predicted in multiple utterances, in which case, the span from the utterance that has the highest score for the utterance ID prediction is selected, which is more efficient than the typical dynamic programming approach. 5712 3 Experiments 3.1 Corpus Despite of all great work in QA, only two datasets are publicly available for machine comprehension that take dialogues as evidence documents. One is DREAM comprising dialogues for language exams with multiple-choice questions (Sun et al., 2019). The other is FRIENDSQA containing transcripts from the TV show Friends with annotation for spanbased question answering (Yang and Choi, 2019). Since DREAM is for a reading comprehension task that does not need to find the answer contents from the evidence documents, it is not suitable for our approach; thus, FRIENDSQA is chosen. Each scene is treated as an independent dialogue in FRIENDSQA. Yang and Choi (2019) randomly split the corpus to generate training, development, and evaluation sets such that scenes from the same episode can be distributed across those three sets, causing inflated accuracy scores. Thus, we re-split them by episodes to prevent such inflation. For finetuning (§2.2), episodes from the first four seasons are used as described in Table 1. For pre-training (§2.1), all transcripts from Seasons 5-10 are used as an additional training set. Set D Q A E Training 973 9,791 16,352 1 - 20 Development 113 1,189 2,065 21 - 22 Evaluation 136 1,172 1,920 23 - * Table 1: New data split for FriendsQA. D/Q/A: # of dialogues/questions/answers, E: episode IDs. 3.2 Models The weights from the BERTbase and RoBERTabase models (Devlin et al., 2019; Liu et al., 2019) are transferred to all models in our experiments. Four baseline models, BERT, BERTpre, RoBERTa, and RoBERTapre, are built, where all models are finetuned on the datasets in Table 1 and the *pre models are pre-trained on the same datasets with the additional training set from Seasons 5-10 (§3.1). The baseline models are compared to BERTour and RoBERTAour that are trained by our approach.3 3.3 Results Table 2 shows results achieved by all the models. Following Yang and Choi (2019), exact matching (EM), span matching (SM), and utterance matching (UM) are used as the evaluation metrics. Each 3Detailed experimental setup are provided in Appendices. model is developed three times and their average score as well as the standard deviation are reported. The performance of RoBERTa* is generally higher than BERT* although RoBERTabase is pre-trained with larger datasets including CC-NEWS (Nagel, 2016), OPENWEBTEXT (Gokaslan and Cohen, 2019), and STORIES (Trinh and Le, 2018) than BERTbase such that results from those two types of transformers cannot be directly compared. Model EM SM UM BERT 43.3(±0.8) 59.3(±0.6) 70.2(±0.4) BERTpre 45.6(±0.9) 61.2(±0.7) 71.3(±0.6) BERTour 46.8(±1.3) 63.1(±1.1) 73.3(±0.7) RoBERTa 52.6(±0.7) 68.2(±0.3) 80.9(±0.8) RoBERTapre 52.6(±0.7) 68.6(±0.6) 81.7(±0.7) RoBERTaour 53.5(±0.7) 69.6(±0.8) 82.7(±0.5) Table 2: Accuracies (± standard deviations) achieved by the BERT and RoBERTa models. The *pre models show marginal improvement over their base models, implying that pre-training the language models on FRIENDSQA with the original transformers does not make much impact on this QA task. The models using our approach perform noticeably better than the baseline models, showing 3.8% and 1.4% improvements on SM from BERT and RoBERTa, respectively. Type Dist. EM SM UM Where 18.16 66.1(±0.5) 79.9(±0.7) 89.8(±0.7) When 13.57 63.3(±1.3) 76.4(±0.6) 88.9(±1.2) What 18.48 56.4(±1.7) 74.0(±0.5) 87.7(±2.1) Who 18.82 55.9(±0.8) 66.0(±1.7) 79.9(±1.1) How 15.32 43.2(±2.3) 63.2(±2.5) 79.4(±0.7) Why 15.65 33.3(±2.0) 57.3(±0.8) 69.8(±1.8) Table 3: Results from the RoBERTaour model by different question types. Table 3 shows the results achieved by RoBERTaour w.r.t. question types. UM drops significantly for Why that often spans out to longer sequences and also requires deeper inferences to answer correctly than the others. Compared to the baseline models, our models show more well-around performance regardless the question types.4 3.4 Ablation Studies Table 4 shows the results from ablation studies to analyze the impacts of the individual approaches. BERTpre and RoBERTapre are the same as in Table 2, that are the transformer models pre-trained by 4Question type results for all models are in Appendices. 5713 the token-level masked LM (§2.1.1) and fine-tuned by the token span prediction (§2.2.2). BERTuid and RoBERTauid are the models that are pre-trained by the token-level masked LM and jointly fine-tuned by the token span prediction as well as the utterance ID prediction (UID: §2.2.1). Given these two types of transformer models, the utterance-level masked LM (ULM: §2.1.2) and the utterance order prediction (UOP: §2.1.3) are separately evaluated. Model EM SM UM BERTpre 45.6(±0.9) 61.2(±0.7) 71.3(±0.6) ⊕ULM 45.7(±0.9) 61.8(±0.9) 71.8(±0.5) ⊕ULM⊕UOP 45.6(±0.9) 61.7(±0.7) 71.7(±0.6) BERTuid 45.7(±0.8) 61.1(±0.8) 71.5(±0.5) ⊕ULM 46.2(±1.1) 62.4(±1.2) 72.5(±0.8) ⊕ULM⊕UOP 46.8(±1.3) 63.1(±1.1) 73.3(±0.7) RoBERTapre 52.6(±0.7) 68.6(±0.6) 81.7(±0.7) ⊕ULM 52.9(±0.8) 68.7(±1.1) 81.7(±0.6) ⊕ULM⊕UOP 52.5(±0.8) 68.8(±0.5) 81.9(±0.7) RoBERTauid 52.8(±0.9) 68.7(±0.8) 81.9(±0.5) ⊕ULM 53.2(±0.6) 69.2(±0.7) 82.4(±0.5) ⊕ULM⊕UOP 53.5(±0.7) 69.6(±0.8) 82.7(±0.5) Table 4: Results for the ablation studies. Note that the *uid⊕ULM⊕UOP models are equivalent to the *our models in Table 2, respectively. These two dialogue-specific LM approaches, ULM and UOP, give very marginal improvement over the baseline models, that is rather surprising. However, they show good improvement when combined with UID, implying that pre-training language models may not be enough to enhance the performance by itself but can be effective when it is coupled with an appropriate fine-tuning approach. Since both ULM and UOP are designed to improve the quality of utterance embeddings, it is expected to improve the accuracy for UID as well. The improvement on UM is indeed encouraging, giving 2% and 1% boosts to BERTpre and RoBERTapre, respectively and consequently improving the other two metrics. 3.5 Error Analysis As shown in Table 3, the major errors are from the three types of questions, who, how, and why; thus, we select 100 dialogues associated with those question types that our best model, RoBERTaour, incorrectly predicts the answer spans for. Specific examples are provided in Tables 12, 13 and 14 (§A.3). Following Yang et al. (2019), errors are grouped into 6 categories, entity resolution, paraphrase and partial match, cross-utterance reasoning, question bias, noise in annotation, and miscellaneous. Table 5 shows the errors types and their ratios with respect to the question types. Two main error types are entity resolution and cross-utterance reasoning. The entity resolution error happens when many of the same entities are mentioned in multiple utterances. This error also occurs when the QA system is asked about a specific person, but predicts wrong people where there are so many people appearing in multiple utterances. The cross-utterance reasoning error often happens with the why and how questions where the model relies on pattern matching mostly and predicts the next utterance span of the matched pattern. Error Types Who How Why Entity Resolution 34% 23% 20% Paraphrase and Partial Match 14% 14% 13% Cross-Utterance Reasoning 25% 28% 27% Question Bias 11% 13% 17% Noise in Annotation 4% 7% 9% Miscellaneous 12% 15% 14% Table 5: Error types and their ratio with respect to the three most challenging question types. 4 Conclusion This paper introduces a novel transformer approach that effectively interprets hierarchical contexts in multiparty dialogue by learning utterance embeddings. Two language modeling approaches are proposed, utterance-level masked LM and utterance order prediction. Coupled with the joint inference between token span prediction and utterance ID prediction, these two language models significantly outperform two of the state-of-the-art transformer approaches, BERT and RoBERTa, on a span-based QA task called FriendsQA . We will evaluate our approach on other machine comprehension tasks using dialogues as evidence documents to further verify the generalizability of this work. Acknowledgments We gratefully acknowledge the support of the AWS Machine Learning Research Awards (MLRA). Any contents in this material are those of the authors and do not necessarily reflect the views of them. References Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. 5714 Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Alexis CONNEAU and Guillaume Lample. 2019. Cross-lingual language model pretraining. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 7057–7067. Curran Associates, Inc. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL’19, pages 4171–4186. Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText Corpus. Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1821–1831, Vancouver, Canada. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Tom Koisk, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gbor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317328. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv, 1907.11692. Sebastian Nagel. 2016. News Dataset Available. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 colocated with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you dont know: Unanswerable questions for squad. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249266. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A Challenge Data Set and Models for Dialogue-Based Reading Comprehension. Transactions of the Association for Computational Linguistics, 7:217–231. Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Trieu H. Trinh and Quoc V. Le. 2018. A Simple Method for Commonsense Reasoning. arXiv, 1806.02847. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. Newsqa: A machine comprehension dataset. Proceedings of the 2nd Workshop on Representation Learning for NLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, pages 6000–6010, USA. Curran Associates Inc. Zhengzhe Yang and Jinho D. Choi. 2019. FriendsQA: Open-domain question answering on TV show transcripts. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 188–197, Stockholm, Sweden. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5754– 5764. Curran Associates, Inc.
2020
505