{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:38:49.424137Z" }, "title": "A Survey on Recognizing Textual Entailment as an NLP Evaluation", "authors": [ { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": { "postCode": "3009, 10027", "settlement": "Broadway, New York", "region": "NY" } }, "email": "apoliak@barnard.ed" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recognizing Textual Entailment (RTE) was proposed as a unified evaluation framework to compare semantic understanding of different NLP systems. In this survey paper, we provide an overview of different approaches for evaluating and understanding the reasoning capabilities of NLP systems. We then focus our discussion on RTE by highlighting prominent RTE datasets as well as advances in RTE datasets that focus on specific linguistic phenomena and can be used to evaluate NLP systems on a fine-grained level. We conclude by arguing that when evaluating NLP systems, the community should utilize newly introduced RTE datasets that focus on specific linguistic phenomena.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Recognizing Textual Entailment (RTE) was proposed as a unified evaluation framework to compare semantic understanding of different NLP systems. In this survey paper, we provide an overview of different approaches for evaluating and understanding the reasoning capabilities of NLP systems. We then focus our discussion on RTE by highlighting prominent RTE datasets as well as advances in RTE datasets that focus on specific linguistic phenomena and can be used to evaluate NLP systems on a fine-grained level. 
We conclude by arguing that when evaluating NLP systems, the community should utilize newly introduced RTE datasets that focus on specific linguistic phenomena.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As NLP technologies are more widely adopted, how to evaluate NLP systems and how to determine whether one model understands language or generates text better than another is an increasingly important question. Recognizing Textual Entailment (RTE; Cooper et al., 1996), the task of determining whether the meaning of one sentence can likely be inferred from another, was introduced to answer this question.", "cite_spans": [ { "start": 241, "end": 266, "text": "(RTE; Cooper et al., 1996)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We begin this survey by discussing different approaches over the past thirty years for evaluating and comparing NLP systems. Next, we will discuss how RTE was introduced as a specific answer to this broad question of how to best evaluate NLP systems. This will include a broad discussion of efforts in the past three decades to build RTE datasets and use RTE to evaluate NLP models. We will then highlight recent RTE datasets that focus on specific semantic phenomena and conclude by arguing that they should be utilized for evaluating the reasoning capabilities of downstream NLP systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The terms Natural Language Inference (NLI) and RTE are often used interchangeably. Many papers begin by explicitly mentioning that these terms are synonymous (Liu et al., 2016; Gong et al., 2018; Camburu et al., 2018) . 1 The broad phrase \"natural language inference\" is more appropriate for a class of problems that require making inferences from natural language. 
Tasks like sentiment analysis, event factuality, or even question-answering can be viewed as forms of natural language inference without having to convert them into the sentence pair classification format used in RTE. Earlier works used the term natural language inference in this way (Schwarcz et al., 1970; Wilks, 1975; Punyakanok et al., 2004) .", "cite_spans": [ { "start": 158, "end": 176, "text": "(Liu et al., 2016;", "ref_id": "BIBREF72" }, { "start": 177, "end": 195, "text": "Gong et al., 2018;", "ref_id": "BIBREF51" }, { "start": 196, "end": 217, "text": "Camburu et al., 2018)", "ref_id": "BIBREF19" }, { "start": 220, "end": 221, "text": "1", "ref_id": null }, { "start": 651, "end": 674, "text": "(Schwarcz et al., 1970;", "ref_id": "BIBREF113" }, { "start": 675, "end": 687, "text": "Wilks, 1975;", "ref_id": "BIBREF129" }, { "start": 688, "end": 712, "text": "Punyakanok et al., 2004)", "ref_id": "BIBREF102" } ], "ref_spans": [], "eq_spans": [], "section": "Natural Language Inference or Recognizing Textual Entailment?", "sec_num": null }, { "text": "The leading term recognizing in RTE is fitting as the task is to classify or predict whether the truth of one sentence likely follows from the other. The second term textual is similarly appropriate since the domain is limited to textual data. Critics of the name RTE often argue that the term entailment is inappropriate since the definition of the NLP task strays too far from the technical definition of entailment in linguistics (Manning, 2006) . Zaenen et al. (2005) prefer the term textual inference because examples in RTE datasets often require a system to not only identify entailments but also conventional implicatures, conversational implicatures, and world knowledge.", "cite_spans": [ { "start": 430, "end": 445, "text": "(Manning, 2006)", "ref_id": "BIBREF76" }, { "start": 448, "end": 468, "text": "Zaenen et al. 
(2005)", "ref_id": "BIBREF135" } ], "ref_spans": [], "eq_spans": [], "section": "Natural Language Inference or Recognizing Textual Entailment?", "sec_num": null }, { "text": "If starting over, we would advocate for the phrase Recognizing Textual Inference. However, given the choice between RTE and NLI, we prefer RTE since it is more representative of the task at hand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Natural Language Inference or Recognizing Textual Entailment?", "sec_num": null }, { "text": "The question of how best to evaluate NLP systems is an open problem that has intrigued the community for decades. A 1988 workshop on the evaluation of NLP systems explored key questions for evaluation. These included questions related to valid measures of \"black-box\" performance, linguistic theories that are relevant to developing test suites, reasonable expectations for robustness, and measuring progress in the field (Palmer and Finin, 1990) . The large number of ACL workshops focused on evaluations in NLP demonstrates the lack of consensus on how to properly evaluate NLP systems. Some workshops focused on: 1) evaluations in general (Pastra, 2003) ; 2) different NLP tasks, e.g. 
machine translation (ws-, 2001; Goldstein et al., 2005) and summarization (Conroy et al., 2012; Giannakopoulos et al., 2017) ; or 3) contemporary NLP approaches that rely on vector space representations (Levy et al., 2016; Bowman et al., 2017; Rogers et al., 2019) .", "cite_spans": [ { "start": 414, "end": 438, "text": "(Palmer and Finin, 1990)", "ref_id": "BIBREF87" }, { "start": 633, "end": 647, "text": "(Pastra, 2003)", "ref_id": null }, { "start": 699, "end": 710, "text": "(ws-, 2001;", "ref_id": null }, { "start": 711, "end": 734, "text": "Goldstein et al., 2005)", "ref_id": null }, { "start": 753, "end": 774, "text": "(Conroy et al., 2012;", "ref_id": "BIBREF33" }, { "start": 775, "end": 803, "text": "Giannakopoulos et al., 2017)", "ref_id": null }, { "start": 882, "end": 901, "text": "(Levy et al., 2016;", "ref_id": "BIBREF68" }, { "start": 902, "end": 922, "text": "Bowman et al., 2017;", "ref_id": "BIBREF130" }, { "start": 923, "end": 943, "text": "Rogers et al., 2019)", "ref_id": "BIBREF108" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating NLP Systems", "sec_num": "2" }, { "text": "In the quest to develop an ideal evaluation framework for NLP systems, researchers proposed multiple evaluation methods, e.g. EAGLES (King et al., 1995) , TSNLP (Oepen and Netter, 1995; Lehmann et al., 1996) , FraCas (Cooper et al., 1996) , SENSE-VAL (Kilgarriff, 1998) , CLEF (Agosti et al., 2007) , and others. These approaches are often divided along multiple dimensions. Here, we will survey approaches along two dimensions: 1) intrinsic vs. extrinsic evaluations; 2) general purpose vs task specific evaluations. 
2", "cite_spans": [ { "start": 133, "end": 152, "text": "(King et al., 1995)", "ref_id": "BIBREF62" }, { "start": 161, "end": 185, "text": "(Oepen and Netter, 1995;", "ref_id": "BIBREF86" }, { "start": 186, "end": 207, "text": "Lehmann et al., 1996)", "ref_id": "BIBREF67" }, { "start": 217, "end": 238, "text": "(Cooper et al., 1996)", "ref_id": "BIBREF34" }, { "start": 251, "end": 269, "text": "(Kilgarriff, 1998)", "ref_id": "BIBREF60" }, { "start": 277, "end": 298, "text": "(Agosti et al., 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating NLP Systems", "sec_num": "2" }, { "text": "Intrinsic evaluations test the system in and of itself and extrinsic evaluations test the system in relation to some other task (Farzindar and Lapalme, 2004). When reviewing Sparck Jones and Galliers (1996) 's textbook on NLP evaluations, Estival (1997) comments that \"one of the most important distinctions that must be drawn when performing an evaluation of a system is that between intrinsic criteria, i.e. those concerned with the system's own objectives, and extrinsic criteria, i.e. those concerned with the function of the system in relation to its set-up.\" Resnik et al. (2006) similarly noted that \"intrinsic evaluations measure the performance of an NLP component on its defined subtask, usually against a defined standard in a reproducible laboratory setting\" while \"extrinsic evaluations focus on the component's contribution to the performance of a complete application, which often involves the participation of a human in the loop.\" Sparck Jones (1994) refers to the distinction of intrinsic vs extrinsic evaluations as the orientation of an evaluation. 
Under these definitions, for example, \"an intrinsic evaluation of a parser would analyze the accuracy of the results returned by the parser as a stand-alone system, whereas an extrinsic evaluation would analyze the impact of the parser within the context of a broader NLP application\" like answer extraction (Moll\u00e1 and Hutchinson, 2003) . When evaluating a document summarization system, an intrinsic evaluation might ask questions related to the fluency or coverage of key ideas in the summary while an extrinsic evaluation might explore whether a generated summary was useful in a search engine (Resnik and Lin, 2010) . This distinction has also been referred to as application-free versus application-driven evaluations (Kov\u00e1\u017a et al., 2016). 3 Proper extrinsic evaluations are often infeasible in an academic lab setting. Therefore, researchers often rely on intrinsic evaluations to approximate extrinsic evaluations, even though intrinsic and extrinsic evaluations serve different goals and many common intrinsic evaluations for word vectors (Tsvetkov et al., 2015; Chiu et al., 2016; Faruqui et al., 2016) , generating natural language text (Belz and Gatt, 2008; Reiter, 2018) , or text mining (Caporaso et al., 2008) might not correlate with extrinsic evaluations. 4 Developing intrinsic evaluations that correlate with extrinsic evaluations remains an open problem in NLP. 3 As another example, in the case of evaluating different methods for training word vectors, intrinsic evaluations might consider how well similarities between word vectors correlate with human-evaluated word similarities. This is the basis of evaluation benchmarks like SimLex (Hill et al., 2015) , Verb (Baker et al., 2014) , RW (Luong et al., 2013) , MEN (Bruni et al., 2012) , WordSim-353 (Finkelstein et al., 2001) , and others. 
Extrinsic evaluations for word embeddings might consider how well different word vectors help models for tasks like sentiment analysis (Petrolito, 2018; Mishev et al., 2019) , machine translation (Wang et al., 2019b) , or named entity recognition (Wu et al., 2015; Nayak et al., 2016) . 4 Although recent work suggests that some intrinsic evaluations for word vectors do indeed correlate with extrinsic evaluations (Qiu et al., 2018; Thawani et al., 2019) .", "cite_spans": [ { "start": 124, "end": 153, "text": "(Farzindar and Lapalme, 2004)", "ref_id": "BIBREF43" }, { "start": 176, "end": 201, "text": "Jones and Galliers (1996)", "ref_id": "BIBREF116" }, { "start": 560, "end": 580, "text": "Resnik et al. (2006)", "ref_id": "BIBREF106" }, { "start": 950, "end": 962, "text": "Jones (1994)", "ref_id": "BIBREF115" }, { "start": 1372, "end": 1400, "text": "(Moll\u00e1 and Hutchinson, 2003)", "ref_id": "BIBREF81" }, { "start": 1661, "end": 1683, "text": "(Resnik and Lin, 2010)", "ref_id": "BIBREF105" }, { "start": 1786, "end": 1809, "text": "(Kov\u00e1\u017a et al., 2016). 
3", "ref_id": null }, { "start": 2110, "end": 2133, "text": "(Tsvetkov et al., 2015;", "ref_id": "BIBREF121" }, { "start": 2134, "end": 2152, "text": "Chiu et al., 2016;", "ref_id": "BIBREF28" }, { "start": 2153, "end": 2174, "text": "Faruqui et al., 2016)", "ref_id": "BIBREF42" }, { "start": 2210, "end": 2231, "text": "(Belz and Gatt, 2008;", "ref_id": "BIBREF13" }, { "start": 2232, "end": 2245, "text": "Reiter, 2018)", "ref_id": "BIBREF104" }, { "start": 2263, "end": 2286, "text": "(Caporaso et al., 2008)", "ref_id": "BIBREF20" }, { "start": 2335, "end": 2336, "text": "4", "ref_id": null }, { "start": 2444, "end": 2445, "text": "3", "ref_id": null }, { "start": 2722, "end": 2741, "text": "(Hill et al., 2015)", "ref_id": "BIBREF54" }, { "start": 2749, "end": 2769, "text": "(Baker et al., 2014)", "ref_id": "BIBREF5" }, { "start": 2775, "end": 2795, "text": "(Luong et al., 2013)", "ref_id": "BIBREF74" }, { "start": 2802, "end": 2822, "text": "(Bruni et al., 2012)", "ref_id": "BIBREF18" }, { "start": 2837, "end": 2863, "text": "(Finkelstein et al., 2001)", "ref_id": "BIBREF44" }, { "start": 3013, "end": 3030, "text": "(Petrolito, 2018;", "ref_id": "BIBREF95" }, { "start": 3031, "end": 3051, "text": "Mishev et al., 2019)", "ref_id": "BIBREF80" }, { "start": 3074, "end": 3094, "text": "(Wang et al., 2019b)", "ref_id": "BIBREF127" }, { "start": 3125, "end": 3142, "text": "(Wu et al., 2015;", "ref_id": "BIBREF131" }, { "start": 3143, "end": 3162, "text": "Nayak et al., 2016)", "ref_id": "BIBREF84" }, { "start": 3165, "end": 3166, "text": "4", "ref_id": null }, { "start": 3292, "end": 3310, "text": "(Qiu et al., 2018;", "ref_id": "BIBREF103" }, { "start": 3311, "end": 3332, "text": "Thawani et al., 2019)", "ref_id": "BIBREF119" } ], "ref_spans": [], "eq_spans": [], "section": "Intrinsic vs Extrinsic Evaluations", "sec_num": "2.1" }, { "text": "General purpose evaluations determine how well NLP systems capture different linguistic phenomena. 
These evaluations often rely on the development of test cases that systematically cover a wide range of phenomena. Additionally, these evaluations generally do not consider how well a system under investigation performs on held out data for the task that the NLP system was trained on. In general purpose evaluations, specific linguistic phenomena should be isolated such that each test or example evaluates one specific linguistic phenomenon, as tests ideally \"are controlled and exhaustive databases of linguistic utterances classified by linguistic features\" (Lloberes et al., 2015) . In task specific evaluations, the goal is to determine how well a model performs on a held out test corpus. How well systems generalize on text classification problems is determined with a combination of metrics like accuracy, precision, and recall, or metrics like BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) in generation tasks. Task specific evaluations, where \"the majority of benchmark datasets . . . are drawn from text corpora, reflecting a natural frequency distribution of language phenomena\" (Belinkov and Glass, 2019) , is the common paradigm in NLP research today. Researchers often begin their research with provided training and held-out test corpora, as their research agenda is to develop systems that outperform other researchers' systems on a held-out test set based on a wide range of metrics.", "cite_spans": [ { "start": 661, "end": 684, "text": "(Lloberes et al., 2015)", "ref_id": "BIBREF73" }, { "start": 958, "end": 981, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF88" }, { "start": 992, "end": 1003, "text": "(Lin, 2004)", "ref_id": "BIBREF69" }, { "start": 1196, "end": 1222, "text": "(Belinkov and Glass, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "General Purpose vs Task Specific Evaluations", "sec_num": "2.2" }, { "text": "The distinction between general purpose and task specific evaluations is sometimes blurred. 
For example, while general purpose evaluations are ideally task agnostic, researchers develop evaluations that test for a wide range of linguistic phenomena captured by NLP systems trained to perform specific tasks. These include linguistic tests targeted for systems that focus on parsing (Lloberes et al., 2015) , machine translation (King and Falkedal, 1990; Koh et al., 2001; Isabelle et al., 2017; Choshen and Abend, 2019; Popovi\u0107 and Castilho, 2019; Avramidis et al., 2019) , summarization (Pitler et al., 2010) , and others (Chinchor, 1991; Chinchor et al., 1993) .", "cite_spans": [ { "start": 382, "end": 405, "text": "(Lloberes et al., 2015)", "ref_id": "BIBREF73" }, { "start": 428, "end": 453, "text": "(King and Falkedal, 1990;", "ref_id": "BIBREF63" }, { "start": 454, "end": 471, "text": "Koh et al., 2001;", "ref_id": "BIBREF65" }, { "start": 472, "end": 494, "text": "Isabelle et al., 2017;", "ref_id": "BIBREF56" }, { "start": 495, "end": 519, "text": "Choshen and Abend, 2019;", "ref_id": "BIBREF29" }, { "start": 520, "end": 547, "text": "Popovi\u0107 and Castilho, 2019;", "ref_id": "BIBREF101" }, { "start": 548, "end": 571, "text": "Avramidis et al., 2019)", "ref_id": "BIBREF4" }, { "start": 588, "end": 609, "text": "(Pitler et al., 2010)", "ref_id": "BIBREF97" }, { "start": 623, "end": 639, "text": "(Chinchor, 1991;", "ref_id": "BIBREF26" }, { "start": 640, "end": 662, "text": "Chinchor et al., 1993)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "General Purpose vs Task Specific Evaluations", "sec_num": "2.2" }, { "text": "Test Suites vs. Test Corpora This distinction can also be described in terms of the data used to evaluate systems. 
Oepen and Netter (1995) refer to this distinction as test suites versus test corpora.", "cite_spans": [ { "start": 115, "end": 138, "text": "Oepen and Netter (1995)", "ref_id": "BIBREF86" } ], "ref_spans": [], "eq_spans": [], "section": "General Purpose vs Task Specific Evaluations", "sec_num": "2.2" }, { "text": "They define a test suite as a \"systematic collection of linguistic expressions (test items, e.g. sentences or phrases) and often includes associated annotations or descriptions.\" They lament the state of test suites in their time since \"most of the existing test suites have been written for specific systems or simply enumerate a set of 'interesting' examples [but] does not meet the demand for large, systematic, well-documented and annotated collections of linguistic material required by a growing number of NLP applications.\" Oepen and Netter further delineate the difference between test corpora and test suites. Unlike \"test corpora drawn from naturally occurring texts,\" test suites allow for 1) more control over the data, 2) systematic coverage, 3) non-redundant representation, 4) inclusion of negative data, and 5) coherent annotation. Thus, test suites \"allow for a fine-grained diagnosis of system performance\" (Oepen and Netter, 1995) . 
Oepen and Netter argue that both should be used in tandem: \"test suites and corpora should stand in a complementary relation, with the former building on the latter wherever possible and necessary.\" Hence, both test suites and test corpora are important for evaluating how well NLP systems capture linguistic phenomena and perform in practice on real-world data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Purpose vs Task Specific Evaluations", "sec_num": "2.2" }, { "text": "In recent years, interpreting and analysing NLP models has become prominent in many research agendas. Contemporary and successful deep learning NLP methods are not as interpretable as previously popular NLP approaches relying on feature engineering. Approaches for interpreting and analysing how well NLP models capture linguistic phenomena often leverage auxiliary or diagnostic classifiers. Contemporary deep learning NLP systems often leverage pre-trained encoders to represent the meaning of a sentence in a fixed-length vector representation. Adi et al. (2017) introduced the notion of using auxiliary classifiers as a general purpose methodology to diagnose what language information is encoded and captured by contemporary sentence representations. They argued for using \"auxiliary prediction tasks\" where, like in Dai and Le (2015), pre-trained sentence encodings are \"used as input for other prediction tasks.\" The \"auxiliary prediction tasks\" can serve as diagnostics, and Adi et al. (2017)'s auxiliary, diagnostic tasks focused on how word order, word content, and sentence length are captured in pre-trained sentence representations.", "cite_spans": [ { "start": 548, "end": 565, "text": "Adi et al. 
(2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Probing Deep Learning NLP Models", "sec_num": "2.3" }, { "text": "As Adi et al.'s general methodology \"can be applied to any sentence representation model,\" researchers develop other diagnostic tasks that explore different linguistic phenomena (Ettinger et al., 2018; Hupkes et al., 2018) . Belinkov (2018) 's thesis relied on and popularized this methodology when exploring how well speech recognition and machine translation systems capture phenomena related to phonetics (Belinkov and Glass, 2017), morphology (Belinkov et al., 2017a) , and syntax (Belinkov et al., 2017b) .", "cite_spans": [ { "start": 179, "end": 202, "text": "(Ettinger et al., 2018;", "ref_id": "BIBREF40" }, { "start": 203, "end": 223, "text": "Hupkes et al., 2018)", "ref_id": "BIBREF55" }, { "start": 226, "end": 241, "text": "Belinkov (2018)", "ref_id": "BIBREF8" }, { "start": 448, "end": 472, "text": "(Belinkov et al., 2017a)", "ref_id": "BIBREF9" }, { "start": 486, "end": 510, "text": "(Belinkov et al., 2017b)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Probing Deep Learning NLP Models", "sec_num": "2.3" }, { "text": "The general purpose methodology of auxiliary diagnostic classifiers is also used to explore how well different pre-trained sentence representation methods perform on a broad range of NLP tasks. For example, SentEval (Conneau and Kiela, 2018) and GLUE (Wang et al., 2018) are used to evaluate how different sentence representations perform on paraphrase detection, semantic textual similarity, and a wide range of other binary and multiclass classification problems. We categorize these datasets as extrinsic evaluations since they often treat learned sentence representations as features to train a classifier for an external task. However, most of these do not count as test suites, since the data is not tightly controlled to evaluate specific linguistic phenomena. 
Rather, resources like GLUE and SuperGLUE (Wang et al., 2019a) package existing test corpora for different tasks and provide an easy platform for researchers to compete on developing systems that perform well on the suite of pre-existing, re-packaged test corpora.", "cite_spans": [ { "start": 216, "end": 241, "text": "(Conneau and Kiela, 2018)", "ref_id": "BIBREF31" }, { "start": 251, "end": 270, "text": "(Wang et al., 2018)", "ref_id": "BIBREF126" }, { "start": 810, "end": 830, "text": "(Wang et al., 2019a)", "ref_id": "BIBREF125" } ], "ref_spans": [], "eq_spans": [], "section": "Probing Deep Learning NLP Models", "sec_num": "2.3" }, { "text": "NLP systems cannot be held responsible for knowledge of what goes on in the world but no NLP system can claim to \"understand\" language if it can't cope with textual inferences. (Zaenen et al., 2005) Recognizing and coping with inferences is key to understanding human language. While NLP systems might be trained to perform different tasks, such as translating, answering questions, or extracting information from text, most NLP systems require understanding and making inferences from text. Therefore, RTE was introduced as a framework to evaluate NLP systems. Rooted in linguistics, RTE is the task of determining whether the meaning of one sentence can likely be inferred from another. Unlike the strict definition of entailment in linguistics that \"sentence A entails sentence B if in all models in which the interpretation of A is true, also the interpretation of B is true\" (Janssen, 2011), RTE relies on a fuzzier notion of entailment. For example, annotation guidelines for an RTE dataset 5 stated that in principle, the hypothesis must be fully entailed by the text. Judgment would be False if the hypothesis includes parts that cannot be inferred from the text. However, cases in which inference is very probable (but not completely certain) are still judged as True. 
Starting with FraCas, we will discuss influential work that introduced and argued for RTE as an evaluation framework.", "cite_spans": [ { "start": 177, "end": 198, "text": "(Zaenen et al., 2005)", "ref_id": "BIBREF135" } ], "ref_spans": [], "eq_spans": [], "section": "Recognizing Textual Entailment", "sec_num": "3" }, { "text": "FraCas Over a span of two years (December 1993 -January 1996), Cooper et al. (1996) developed FraCas as \"an inference test suite for evaluating the inferential competence of different NLP systems and semantic theories\". Created manually by many linguists and funded by FP3-LRE, 6 FraCas is a \"semantic test suite\" that covers a range of semantic phenomena categorized into 9 classes. These are generalized quantifiers, plurals, anaphora, ellipsis, adjectives, comparatives, temporal reference, verbs, and attitudes. Based on the descriptions in \u00a72, we would classify FraCas as an intrinsic evaluation and a general purpose test suite.", "cite_spans": [ { "start": 63, "end": 83, "text": "Cooper et al. (1996)", "ref_id": "BIBREF34" }, { "start": 278, "end": 279, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Recognizing Textual Entailment", "sec_num": "3" }, { "text": "Examples in FraCas contain a premise paired with a hypothesis. Premises are at least one sentence, though sometimes they contain multiple sentences, and most hypotheses are written in the form of a question and the answers are either Yes, No, or Don't know. MacCartney (2009) (specifically Chapter 7.8.1) converted the hypotheses from questions into declarative statements. 
The small size of FraCas 7 prevents its use as a dataset to train data-hungry deep learning models.", "cite_spans": [ { "start": 374, "end": 375, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Recognizing Textual Entailment", "sec_num": "3" }, { "text": "Pascal RTE Challenges With a broad goal similar to FraCas, the Pascal Recognizing Textual Entailment challenges began as a \"generic evaluation framework\" to compare the inference capabilities of models designed to perform different tasks, based on the intuition \"that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment\". Unlike FraCas's goal of determining whether a model performs distinct types of reasoning, the Pascal RTE Challenges primarily focused on using this framework to evaluate models for distinct, real-world downstream tasks. Thus, the examples in the Pascal RTE datasets were extracted from downstream tasks. The process was referred to as recasting in the thesis by Glickman (2006) . NLU problems were reframed under the RTE framework and candidate sentence pairs were extracted from existing NLP datasets and then labeled under variations of the RTE definition (including the quote above). 8 For example, the RTE1 data came from 7 tasks: comparable documents, reading comprehension, question answering, information extraction, machine translation, information retrieval, and paraphrase acquisition. 9 Starting with Dagan et al. 2006, there have been eight iterations of the RTE challenge, with the most recent being Dzikovska et al. 
(2013)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Recognizing Textual Entailment", "sec_num": "3" }, { "text": "The most popular recent RTE datasets, Stanford Natural Language Inference (SNLI; Bowman et al., 2015) and its successor Multi-NLI (Williams et al., 2017) , each contain over half a million examples and enabled researchers to apply data-hungry deep learning methods to RTE. Unlike the RTE datasets, these two datasets were created by eliciting hypotheses from humans. Crowd-source workers were tasked with writing one sentence each that is entailed, neutral, and contradicted by a caption extracted from the Flickr30k corpus (Young et al., 2014) . Next, the label for each premise-hypothesis pair in the development and test sets was verified by multiple crowd-source workers and the majority-vote label was assigned for each example. Table 2 provides such examples for both datasets. Rudinger et al. (2017) illustrated how eliciting textual data in this fashion creates stereotypical biases in SNLI. Some of the biases are gender-, age-, and race-based. Poliak et al. (2018c) argue that this may cause additional biases enabling a hypothesis-only model to outperform the majority baseline on SNLI by 100 percent (Gururangan et al., 2018; Tsuchiya, 2018).", "cite_spans": [ { "start": 81, "end": 101, "text": "Bowman et al., 2015)", "ref_id": "BIBREF16" }, { "start": 130, "end": 153, "text": "(Williams et al., 2017)", "ref_id": "BIBREF130" }, { "start": 525, "end": 545, "text": "(Young et al., 2014)", "ref_id": "BIBREF134" }, { "start": 786, "end": 808, "text": "Rudinger et al. (2017)", "ref_id": "BIBREF111" }, { "start": 956, "end": 977, "text": "Poliak et al. (2018c)", "ref_id": "BIBREF100" } ], "ref_spans": [ { "start": 736, "end": 743, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "SNLI and MNLI", "sec_num": null }, { "text": "The datasets in the PASCAL RTE Challenges were primarily treated as test corpora. 
Teams participated in those challenges by developing models to achieve increasingly high scores on each challenge's datasets. Since RTE was motivated as a diagnostic, researchers analyzed the RTE challenge datasets. de Marneffe et al. (2008) argued that there exist different levels and types of contradictions. They focus on different types of phenomena, e.g. antonyms, negation, and world knowledge, that can explain why a premise contradicts a hypothesis. MacCartney (2009) used a simple bag-of-words model to evaluate early iterations of Recognizing Textual Entailment (RTE) challenge sets and noted 10 that \"the RTE1 test suite is the hardest, while the RTE2 test suite is roughly 4% easier, and the RTE3 test suite is roughly 9% easier.\" ", "cite_spans": [ { "start": 301, "end": 323, "text": "Marneffe et al. (2008)", "ref_id": "BIBREF78" } ], "ref_spans": [], "eq_spans": [], "section": "Entailment as a Downstream NLP Task", "sec_num": "3.1" }, { "text": "Table 2 examples: P: A woman is talking on the phone while standing next to a dog. H1: A woman is on the phone (entailment). H2: A woman is walking her dog (neutral). H3: A woman is sleeping (contradiction). P: Tax records show Waters earned around $65,000 in 2000. H1: Waters' tax records show clearly that he earned a lovely $65k in 2000 (entailment). H2: Tax records indicate Waters earned about $65K in 2000 (entailment). H3: Waters' tax records show he earned a blue ribbon last year (contradiction). Additionally, Vanderwende and Dolan (2006) and Blake (2007) demonstrate how sentence structure alone can provide a high signal for some RTE datasets. 11 Despite these analyses, researchers primarily built models to perform the task on the PASCAL RTE datasets rather than leveraging these datasets to evaluate models built for other tasks. 
Coinciding with the recent \"deep learning wave\" that has taken over NLP and Machine Learning (Manning, 2015), the introduction of large scale RTE datasets, specifically SNLI and MNLI, led to a resurgence of interest in RTE amongst NLP researchers. Large scale RTE datasets focusing on specific domains, like grade-school scientific knowledge (Khot et al., 2018) or medical information (Romanov and Shivade, 2018) , emerged as well. However, this resurgence did not primarily focus on using RTE as a means to evaluate NLP systems. Rather, researchers primarily used these datasets to compete with one another to achieve the top score on leaderboards for new RTE datasets.", "cite_spans": [ { "start": 468, "end": 496, "text": "Vanderwende and Dolan (2006)", "ref_id": "BIBREF122" }, { "start": 501, "end": 513, "text": "Blake (2007)", "ref_id": "BIBREF14" }, { "start": 1135, "end": 1154, "text": "(Khot et al., 2018)", "ref_id": "BIBREF59" }, { "start": 1178, "end": 1205, "text": "(Romanov and Shivade, 2018)", "ref_id": "BIBREF109" } ], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "There has been little evidence to suggest [that RTE models] capture the type of compositional or world knowledge tested by datasets like the FraCas test suite. (Pavlick, 2017) As large scale RTE datasets, like SNLI and MNLI, rapidly surged in popularity, some researchers critiqued the datasets' ability to test the inferential capabilities of NLP models. A high accuracy on these datasets does not indicate which types of reasoning RTE models perform or capture. As noted by White et al. 
(2017), \"researchers compete on which system achieves the highest score on a test set, but this itself does not lead to an understanding of which linguistic properties are better captured by a quantitatively superior system.\" In other words, the single accuracy metric on these challenges indicates how well a model can recognize whether one sentence likely follows from another, but it does not illuminate how well NLP models capture different semantic phenomena that are important for general NLU.", "cite_spans": [ { "start": 160, "end": 175, "text": "(Pavlick, 2017)", "ref_id": "BIBREF91" } ], "ref_spans": [], "eq_spans": [], "section": "Revisiting RTE as an NLP Evaluation", "sec_num": "4" }, { "text": "This issue was pointed out regarding the earlier PASCAL RTE datasets. In her thesis, which presented \"a test suite for adjectival inference developed as a resource for the evaluation of computational systems handling natural language inference,\" Amoia (2008) blamed \"the difficulty of defining the linguistic phenomena which are responsible for inference\" as the reason why previous RTE resources \"concentrated on the creation of applications coping with textual entailment\" rather than \"resources for the evaluation of such applications.\"", "cite_spans": [ { "start": 244, "end": 256, "text": "Amoia (2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Revisiting RTE as an NLP Evaluation", "sec_num": "4" }, { "text": "As studies began exploring what linguistic phenomena are captured by neural NLP models and auxiliary diagnostic classifiers became a common tool to evaluate sentence representations in NLP systems (\u00a72.3), the community saw growing interest in developing RTE datasets that can provide insight into what types of linguistic phenomena are captured by neural, deep learning models. In turn, the community is answering Chatzikyriakidis et al.
(2017)'s plea to the community to test \"more kinds of inference\" than in previous RTE challenge sets. Here, we will highlight recent efforts in creating datasets that demonstrate how the community has started answering Chatzikyriakidis et al.'s call. We group these different datasets based on how they were created. Table 3 includes additional RTE datasets focused on specific linguistic phenomena.", "cite_spans": [ { "start": 418, "end": 448, "text": "Chatzikyriakidis et al. (2017)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 757, "end": 764, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Revisiting RTE as an NLP Evaluation", "sec_num": "4" }, { "text": "White et al. (2017) advocate for using RTE as a single framework to evaluate different linguistic phenomena. They argue for creating RTE datasets focused on specific phenomena by recasting existing annotations for different semantic phenomena into RTE. Poliak et al. (2018b) introduce the Diverse Natural Language Inference Collection (DNC) of over half a million RTE examples. They create the DNC by converting 7 semantic phenomena from 13 existing datasets into RTE. These phenomena include event factuality, named entity recognition, gendered anaphora resolution, sentiment analysis, relationship extraction, pun detection, and lexicosyntactic inference. Stali\u016bnait\u0117's (2018) master's thesis improved Poliak et al.'s (2018b) method used to recast annotations for factuality into RTE. Other efforts have created recast datasets in Hindi that focus on sentiment and emotion detection. 12 Concurrent to the DNC, Naik et al. (2018) released the \"NLI Stress Tests\" that included RTE datasets focused on negation, word overlap between premises and hypotheses, and numerical reasoning, amongst other phenomena. Naik et al. (2018) similarly create their stress tests automatically, using different methods for each phenomenon.
They then used these datasets to evaluate how well a wide class of RTE models capture these phenomena. Other RTE datasets that target more specific phenomena were created using automatic methods, including Jeretic et al. (2020)'s \"ImpPres\" diagnostic RTE dataset that tests for IMPlicatures and PRESuppositions.", "cite_spans": [ { "start": 253, "end": 274, "text": "Poliak et al. (2018b)", "ref_id": "BIBREF99" }, { "start": 658, "end": 676, "text": "Stali\u016bnait\u0117 (2018)", "ref_id": "BIBREF117" }, { "start": 705, "end": 726, "text": "Poliak et al. (2018b)", "ref_id": "BIBREF99" }, { "start": 888, "end": 890, "text": "12", "ref_id": null }, { "start": 914, "end": 932, "text": "Naik et al. (2018)", "ref_id": "BIBREF83" }, { "start": 1105, "end": 1123, "text": "Naik et al. (2018)", "ref_id": "BIBREF83" } ], "ref_spans": [], "eq_spans": [], "section": "Automatically Created", "sec_num": "4.1" }, { "text": "If not done with thorough testing and care, recasting or other automatic methods for creating these RTE datasets can lead to annotation artifacts unrelated to RTE that limit how well a dataset tests for a specific semantic phenomenon. For example, to create not-entailed hypotheses, White et al. (2017) replaced a single token in a context sentence with a word that crowd-source workers labeled as not being a paraphrase of the token in the given context. In FN+ (Pavlick et al., 2015) , two words might be deemed to be incorrect paraphrases in context based on a difference in the words' part of speech tags.
13 This limits the utility of the recast version of 12 https://github.com/midas-research/hindi-nli-data", "cite_spans": [ { "start": 459, "end": 481, "text": "(Pavlick et al., 2015)", "ref_id": "BIBREF94" }, { "start": 606, "end": 608, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Automatically Created", "sec_num": "4.1" }, { "text": "13 Table 5 (in the appendix) demonstrates such examples; in the last example, the words \"on\" and \"dated\" in the premise and hypothesis have the NN and VBN POS tags, respectively.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Automatically Created", "sec_num": "4.1" }, { "text": "FN+ to be used when evaluating how well models capture paraphrastic inference. Similar to the efforts described here to recast different NLU problems as RTE, others have recast NLU problems into a question-answering format (McCann et al., 2018; Gardner et al., 2019) . Recasting problems into RTE, as opposed to question-answering, has deeper roots in linguistic theory (Seuren, 1998; Chierchia and McConnell-Ginet, 2000; Brinton, 2000) , and continues a rich history within the NLP community.", "cite_spans": [ { "start": 220, "end": 241, "text": "(McCann et al., 2018;", "ref_id": "BIBREF79" }, { "start": 242, "end": 263, "text": "Gardner et al., 2019)", "ref_id": "BIBREF45" }, { "start": 367, "end": 381, "text": "(Seuren, 1998;", "ref_id": "BIBREF114" }, { "start": 382, "end": 418, "text": "Chierchia and McConnell-Ginet, 2000;", "ref_id": "BIBREF25" }, { "start": 419, "end": 433, "text": "Brinton, 2000)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Automatically Created", "sec_num": "4.1" }, { "text": "Other RTE datasets focused on specific phenomena rely on semi-automatic methods. RTE pairs are often generated automatically using well-developed heuristics.
Instead of automatically labeling the RTE example pairs (like in the approaches previously discussed), the automatically created examples are often labeled by crowdsource workers. For example, Kim et al. (2019) use heuristics to create RTE pairs that test for prepositions, comparatives, quantification, spatial reasoning, and negation, and then present these examples to crowdsource workers on Amazon Mechanical Turk. Similarly, others generate two premise-hypothesis pairs for each RTE example in MNLI that satisfy their set of constraints. Next, they rely on crowdsource workers to annotate whether the premise likely entails the hypothesis on a 5-point Likert scale.", "cite_spans": [ { "start": 351, "end": 368, "text": "Kim et al. (2019)", "ref_id": "BIBREF61" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-Automatically Created", "sec_num": "4.2" }, { "text": "Some methods instead first manually annotate their data and then rely on automatic methods to construct hypotheses and label RTE pairs. When generating RTE examples testing for monotonicity, Richardson et al. (2020) first manually encode the \"monotonicity information of each token in the lexicon and built sentences via a controlled set of grammar rules.\" They then \"substitute upward entailing tokens or constituents with something 'greater than or equal to' them, or downward entailing ones with something 'less than or equal to' them.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Automatically Created", "sec_num": "4.2" }, { "text": "While most of these datasets rely on varying degrees of automation, some RTE datasets focused on evaluating how well models capture specific phenomena rely on manual annotations. The GLUE and SuperGLUE datasets include diagnostic sets where annotators manually labeled samples of examples as requiring a broad range of linguistic phenomena.
Table 3 lists these datasets by phenomenon: Proto-Roles (White et al., 2017); Paraphrastic Inference (White et al., 2017); Event Factuality (Poliak et al., 2018b; Stali\u016bnait\u0117, 2018); Anaphora Resolution (White et al., 2017; Poliak et al., 2018b); Lexicosyntactic Inference (Pavlick and Callison-Burch, 2016; Poliak et al., 2018b; Glockner et al., 2018); Compositionality (Dasgupta et al., 2018); Prepositions (Kim et al., 2019); Comparatives (Kim et al., 2019; Richardson et al., 2020); Quantification/Numerical Reasoning (Naik et al., 2018; Kim et al., 2019; Richardson et al., 2020); Spatial Expressions (Kim et al., 2019); Negation (Naik et al., 2018; Kim et al., 2019; Richardson et al., 2020); Tense & Aspect (Kober et al., 2019); Veridicality (Poliak et al., 2018b); Monotonicity (Yanaka et al., 2019, 2020; Richardson et al., 2020); Presupposition (Jeretic et al., 2020); Implicatures (Jeretic et al., 2020); Temporal Reasoning (Vashishtha et al., 2020). The types of phenomena manually labeled include lexical semantics, predicate-argument structure, logic, and common sense or world knowledge.
14", "cite_spans": [ { "start": 465, "end": 487, "text": "(Poliak et al., 2018b;", "ref_id": "BIBREF99" }, { "start": 488, "end": 505, "text": "Stali\u016bnait\u0117, 2018", "ref_id": "BIBREF117" }, { "start": 506, "end": 548, "text": "), Anaphora Resolution (White et al., 2017", "ref_id": null }, { "start": 549, "end": 570, "text": "Poliak et al., 2018b)", "ref_id": "BIBREF99" }, { "start": 599, "end": 633, "text": "(Pavlick and Callison-Burch, 2016;", "ref_id": "BIBREF92" }, { "start": 634, "end": 655, "text": "Poliak et al., 2018b;", "ref_id": "BIBREF99" }, { "start": 656, "end": 678, "text": "Glockner et al., 2018)", "ref_id": "BIBREF49" }, { "start": 698, "end": 721, "text": "(Dasgupta et al., 2018)", "ref_id": "BIBREF37" }, { "start": 737, "end": 755, "text": "(Kim et al., 2019)", "ref_id": "BIBREF61" }, { "start": 771, "end": 789, "text": "(Kim et al., 2019;", "ref_id": "BIBREF61" }, { "start": 790, "end": 814, "text": "Richardson et al., 2020)", "ref_id": null }, { "start": 852, "end": 871, "text": "(Naik et al., 2018;", "ref_id": "BIBREF83" }, { "start": 872, "end": 889, "text": "Kim et al., 2019;", "ref_id": "BIBREF61" }, { "start": 890, "end": 914, "text": "Richardson et al., 2020)", "ref_id": null }, { "start": 937, "end": 955, "text": "(Kim et al., 2019)", "ref_id": "BIBREF61" }, { "start": 967, "end": 986, "text": "(Naik et al., 2018;", "ref_id": "BIBREF83" }, { "start": 987, "end": 1004, "text": "Kim et al., 2019;", "ref_id": "BIBREF61" }, { "start": 1005, "end": 1029, "text": "Richardson et al., 2020)", "ref_id": null }, { "start": 1032, "end": 1067, "text": "Tense & Aspect (Kober et al., 2019)", "ref_id": null }, { "start": 1083, "end": 1105, "text": "(Poliak et al., 2018b;", "ref_id": "BIBREF99" }, { "start": 1121, "end": 1141, "text": "(Yanaka et al., 2019", "ref_id": "BIBREF133" }, { "start": 1142, "end": 1164, "text": "(Yanaka et al., , 2020", "ref_id": "BIBREF132" }, { "start": 1165, "end": 1189, "text": "Richardson et al., 2020)", "ref_id": 
null }, { "start": 1207, "end": 1229, "text": "(Jeretic et al., 2020)", "ref_id": "BIBREF58" }, { "start": 1245, "end": 1267, "text": "(Jeretic et al., 2020)", "ref_id": "BIBREF58" }, { "start": 1289, "end": 1314, "text": "(Vashishtha et al., 2020)", "ref_id": "BIBREF123" } ], "ref_spans": [], "eq_spans": [], "section": "Manually Created", "sec_num": "4.3" }, { "text": "These efforts resulted in a consistent format and framework for testing how well contemporary, deep learning NLP systems capture a wide range of linguistic phenomena. However, so far, most of these datasets that target specific linguistic phenomena have been used solely to evaluate how well RTE models capture a wide range of phenomena, as opposed to evaluating how well systems trained for more applied NLP tasks capture these phenomena. Since RTE was introduced as a framework to evaluate how well NLP models cope with inferences, these newly created datasets have not been used to their full potential.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recommendations", "sec_num": "5" }, { "text": "A limited number of studies used some of these datasets to evaluate how well models trained for other tasks capture these phenomena. Poliak et al. (2018a) evaluated how well a BiLSTM encoder trained as part of a neural machine translation system captures phenomena like semantic proto-roles, paraphrastic inference, and anaphora resolution. Kim et al. (2019) used their RTE datasets focused on function words to evaluate different encoders trained for tasks like CCG parsing, image-caption matching, predicting discourse markers, and others. Those studies relied on the use of auxiliary classifiers as a common probing technique to evaluate sentence representations.
As the community's interest in analyzing deep learning systems increases, demonstrated by the recent work relying on (Linzen et al., 2018) and improving upon (Hewitt and Liang, 2019; Voita and Titov, 2020; Pimentel et al., 2020; Mu and Andreas, 2020) the popular auxiliary classifier-based diagnostic technique, we call on the community to leverage the increasing number of RTE datasets focused on different semantic phenomena (Table 3). Another recent line of work uses RTE to evaluate the output of text generation systems. For example, Falke et al. (2019) explore \"whether textual entailment predictions can be used to detect errors\" in abstractive summarization systems and if errors \"can be reduced by reranking alternative predicted summaries\" with a textual entailment system trained on SNLI. While Falke et al.'s (2019) results demonstrated that current models might not be accurate enough to rank generated summaries, Barrantes et al. (2020) demonstrate that contemporary transformer models trained on the Adversarial NLI dataset (Nie et al., 2020) \"achieve significantly higher accuracy and have the potential of selecting a coherent summary.\" Therefore, we are encouraged that researchers might be able to use many of these new RTE datasets focused on specific phenomena to evaluate the coherency of machine-generated text based on multiple linguistic phenomena that are integral to entailment and NLU. This approach can help researchers use the RTE datasets to evaluate a wider class of models, specifically non-neural models, unlike the auxiliary classifier or probing methods previously discussed. 14 https://gluebenchmark.com/diagnostics", "cite_spans": [ { "start": 133, "end": 154, "text": "Poliak et al. (2018a)", "ref_id": "BIBREF98" }, { "start": 340, "end": 357, "text": "Kim et al.
(2019)", "ref_id": "BIBREF61" }, { "start": 783, "end": 803, "text": "(Linzen et al., 2018", "ref_id": "BIBREF70" }, { "start": 823, "end": 847, "text": "(Hewitt and Liang, 2019;", "ref_id": "BIBREF53" }, { "start": 848, "end": 870, "text": "Voita and Titov, 2020;", "ref_id": "BIBREF124" }, { "start": 871, "end": 893, "text": "Pimentel et al., 2020;", "ref_id": "BIBREF96" }, { "start": 894, "end": 915, "text": "Mu and Andreas, 2020)", "ref_id": "BIBREF82" }, { "start": 1244, "end": 1263, "text": "Falke et al. (2019)", "ref_id": "BIBREF41" }, { "start": 1511, "end": 1530, "text": "Falke et al. (2019)", "ref_id": "BIBREF41" }, { "start": 1630, "end": 1653, "text": "Barrantes et al. (2020)", "ref_id": "BIBREF7" }, { "start": 1742, "end": 1759, "text": "(Nie et al., 2020", "ref_id": "BIBREF85" } ], "ref_spans": [ { "start": 1133, "end": 1142, "text": "(Table 3)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Recommendations", "sec_num": "5" }, { "text": "The overwhelming majority, if not all, of these RTE datasets targeting specific phenomena rely on categorical RTE labels, following the common format of the task. However, as Chen et al. (2020) recently illustrated, categorical RTE labels do not capture the subjective nature of the task. Instead, they argue for scalar RTE labels that indicate how likely a hypothesis could be inferred from a premise. Pavlick and Kwiatkowski (2019) similarly lament how labels are currently used in RTE datasets.
Pavlick and Kwiatkowski demonstrate that a single label aggregated from multiple annotations for one RTE example minimizes the \"type of uncertainty present in [valid] human disagreements.\" Instead, they argue that a \"representation should be evaluated in terms of its ability to predict the full distribution of human inferences (e.g., by reporting cross-entropy against a distribution of human ratings), rather than to predict a single aggregate score (e.g., by reporting accuracy against a discrete majority label or correlation with a mean score).\" Future RTE datasets targeting specific phenomena that contain scalar RTE labels from multiple annotators (following Chen et al.'s and Pavlick and Kwiatkowski's recommendations) can provide more insight into contemporary NLP models.", "cite_spans": [ { "start": 176, "end": 194, "text": "Chen et al. (2020)", "ref_id": "BIBREF24" }, { "start": 402, "end": 432, "text": "Pavlick and Kwiatkowski (2019)", "ref_id": "BIBREF93" }, { "start": 656, "end": 663, "text": "[valid]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Recommendations", "sec_num": "5" }, { "text": "With the current zeitgeist of NLP research, where researchers are interested in analyzing state-of-the-art deep learning models, now is a prime time to revisit RTE as a method to evaluate the inference capabilities of NLP models. In this survey, we discussed recent advances in RTE datasets that focus on specific linguistic phenomena that are integral for determining whether one sentence can likely be inferred from another. Since RTE was primarily motivated as an evaluation framework, we began this survey with a broad overview of prior approaches for evaluating NLP systems.
This included the distinctions between intrinsic vs. extrinsic evaluations and general-purpose vs. task-specific evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We discussed foundational RTE datasets that greatly impacted the NLP community and included critiques of why they do not fulfill the promise of RTE as an evaluation framework. We highlighted recent efforts to create RTE datasets that focus on specific linguistic phenomena. By using these datasets to evaluate sentence representations from neural models or rank generated text from NLP systems, researchers can help fulfill the promise of RTE as a unified evaluation framework. Ultimately, this will help us determine how well models understand language on a fine-grained level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "includes parts that cannot be inferred from the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "\u2022 Cases in which inference is very probable (but not completely certain) are judged as YES. For instance, in pair #387 one could claim that although Shapiro's office is in Century City, he actually never arrives to his office, and works elsewhere. However, this interpretation of t is very unlikely, and so the entailment holds with high probability. On the other hand, annotators were guided to avoid vague examples for which inference has some positive probability which is not clearly very high.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "\u2022 Our definition of entailment allows presupposition of common knowledge, such as: a company has a CEO, a CEO is an employee of the company, an employee is a person, etc.
For instance, in pair #294, the entailment depends on knowing that the president of a country is also a citizen of that country.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "(Bar-Haim et al., 2006) A.3 RTE3 Guidelines", "cite_spans": [ { "start": 2, "end": 24, "text": "Bar-Haim et al., 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "As entailment is a directional relation, the hypothesis must be entailed by the given text, but the text need not be entailed by the hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "\u2022 The hypothesis must be fully entailed by the text. Judgment must be NO if the hypothesis includes parts that cannot be inferred from the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "\u2022 Cases in which inference is very probable (but not completely certain) were judged as YES.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "\u2022 Common world knowledge was assumed, e.g. the capital of a country is situated in that country, the prime minister of a state is also a citizen of that state, and so on. (Giampiccolo et al., 2007) QUANTIFIERS (14): P: Neither leading tenor comes cheap. One of the leading tenors is Pavarotti. Q: Is Pavarotti a leading tenor who comes cheap? H: Pavarotti is a leading tenor who comes cheap.
P: unemployment is at an all-time low / H: unemployment is at an all-time poor. P: aeoi 's activities and facility have been tied to several universities / H: aeoi 's activities and local have been tied to several universities. P: jerusalem fell to the ottomans in 1517 , remaining under their control for 400 years / H: jerusalem fell to the ottomans in 1517 , remaining under their regulate for 400 years. P: usually such parking spots are on the side of the lot / H: usually such parking spots are dated the side of the lot. Table 5 : Not-entailed examples from FN+'s dev set where the hypotheses are ungrammatical. In each pair, the first sentence (P) is the premise and the second (H) is the corresponding hypothesis. Underlined words represent the swapped paraphrases.", "cite_spans": [ { "start": 171, "end": 197, "text": "(Giampiccolo et al., 2007)", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 880, "end": 887, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In fact, variants of the phrase \"natural language inference, also known as recognizing textual entailment\" appear in many papers (Chen et al., 2017; Williams et al., 2017; Naik et al., 2018; Chen et al., 2018; Tay et al., 2018, i.a.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Resnik and Lin (2010) summarize other evaluation approaches and Paroubek et al. (2007) present a history and evolution of NLP evaluation methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These were the guidelines in RTE-1. 6 https://cordis.europa.eu/programme/id/FP3-LRE 7 https://nlp.stanford.edu/~wcmac/downloads/fracas.xml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Appendix A for the annotation guidelines for RTE1, RTE2, and RTE3. 
9 Chapter 3.2 of Glickman's thesis discusses how examples from these datasets were converted into RTE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In Chapter 2.2 of his thesis", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Vanderwende and Dolan (2006) explored RTE-1 and Blake (2007) analyzed RTE-2 and RTE-3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The author would like to thank the anonymous reviewers for their very helpful comments, Benjamin Van Durme, Aaron Steven White, and Jo\u00e3o Sedoc for discussions that shaped this survey, Patrick Xia and Elias Stengel-Eskin for feedback on this draft, and Yonatan Belinkov and Sasha Rush for the encouragement to write a survey on RTE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "In the first iteration of the PASCAL RTE challenges, the task organizers were frank in their view that they expected the task definition to change over time. They wrote that \"finally, the task definition and evaluation methodologies are clearly not mature yet. We expect them to change over time and hope that participants' contributions, observations and comments will help shaping this evolving research direction.\" Here, we include snippets from the annotation guidelines for the first three PASCAL RTE challenges: Given that the text and hypothesis might originate from documents at different points in time, tense aspects are ignored. In principle, the hypothesis must be fully entailed by the text. Judgment would be False if the hypothesis includes parts that cannot be inferred from the text. However, cases in which inference is very probable (but not completely certain) are still judged at True. . . .
To reduce the risk of unclear cases, annotators were guided to avoid vague examples for which inference has some positive probability that is not clearly very high. To keep the contexts in T and H self contained annotators replaced anaphors with the appropriate reference from preceding sentences where applicable. They also often shortened the hypotheses, and sometimes the texts, to reduce complexity.(Dagan et al., 2006)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Pascal RTE Annotation Guidelines", "sec_num": null }, { "text": "The data collection and annotation guidelines were revised and expanded . . . We say that t entails h if, typically, a human reading t would infer that h is most likely true. This somewhat informal definition is based on (and assumes) common human understanding of language as well as common background knowledge. Textual entailment recognition is the task of deciding, given t and h, whether t entails h. Some additional judgment criteria and guidelines are listed below:\u2022 Entailment is a directional relation. The hypothesis must be entailed from the given text, but the text need not be entailed from the hypothesis.\u2022 The hypothesis must be fully entailed by the text. 
Judgment would be NO if the hypothesis", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 RTE2 Guidelines", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "on MT Evaluation: Hands-On Evaluation", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "on MT Evaluation: Hands-On Evalu- ation.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", "authors": [ { "first": "Yossi", "middle": [], "last": "Adi", "suffix": "" }, { "first": "Einat", "middle": [], "last": "Kermany", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Lavi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2017, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. 
In ICLR.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The future of large-scale evaluation campaigns for information retrieval in europe", "authors": [ { "first": "Maristella", "middle": [], "last": "Agosti", "suffix": "" }, { "first": "Giorgio", "middle": [ "Maria" ], "last": "", "suffix": "" }, { "first": "Di", "middle": [], "last": "Nunzio", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Ferro", "suffix": "" }, { "first": "Donna", "middle": [], "last": "Harman", "suffix": "" }, { "first": "Carol", "middle": [], "last": "Peters", "suffix": "" } ], "year": 2007, "venue": "International Conference on Theory and Practice of Digital Libraries", "volume": "", "issue": "", "pages": "509--512", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maristella Agosti, Giorgio Maria Di Nunzio, Nicola Ferro, Donna Harman, and Carol Peters. 2007. The future of large-scale evaluation campaigns for infor- mation retrieval in europe. In International Confer- ence on Theory and Practice of Digital Libraries, pages 509-512. Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Linguistic-Based Computational Treatment of Textual Entailment Recognition. Theses", "authors": [ { "first": "Marilisa", "middle": [], "last": "Amoia", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marilisa Amoia. 2008. Linguistic-Based Computa- tional Treatment of Textual Entailment Recognition. 
Theses, Universit\u00e9 Henri Poincar\u00e9 -Nancy 1.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Linguistic evaluation of German-English machine translation using a test suite", "authors": [ { "first": "Eleftherios", "middle": [], "last": "Avramidis", "suffix": "" }, { "first": "Vivien", "middle": [], "last": "Macketanz", "suffix": "" }, { "first": "Ursula", "middle": [], "last": "Strohriegel", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "445--454", "other_ids": { "DOI": [ "10.18653/v1/W19-5351" ] }, "num": null, "urls": [], "raw_text": "Eleftherios Avramidis, Vivien Macketanz, Ursula Strohriegel, and Hans Uszkoreit. 2019. Linguistic evaluation of German-English machine translation using a test suite. In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 445-454, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An unsupervised model for instance level subcategorization acquisition", "authors": [ { "first": "Simon", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "278--289", "other_ids": { "DOI": [ "10.3115/v1/D14-1034" ] }, "num": null, "urls": [], "raw_text": "Simon Baker, Roi Reichart, and Anna Korhonen. 2014. An unsupervised model for instance level subcate- gorization acquisition. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 278-289, Doha, Qatar. 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The second pascal recognising textual entailment challenge", "authors": [ { "first": "Roy", "middle": [], "last": "Bar-Haim", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Ferro", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, and Bernardo Magnini. 2006. The second pascal recognising textual entailment challenge.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Adversarial nli for factual correctness in text summarisation models", "authors": [ { "first": "Mario", "middle": [], "last": "Barrantes", "suffix": "" }, { "first": "Benedikt", "middle": [], "last": "Herudek", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mario Barrantes, Benedikt Herudek, and Richard Wang. 2020. Adversarial nli for factual correctness in text summarisation models.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On internal language representations in deep learning: An analysis of machine translation and speech recognition", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Belinkov. 2018. On internal language repre- sentations in deep learning: An analysis of machine translation and speech recognition. 
Ph.D. thesis, Massachusetts Institute of Technology.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "What do neural machine translation models learn about morphology?", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "861--872", "other_ids": { "DOI": [ "10.18653/v1/P17-1080" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Has- san Sajjad, and James Glass. 2017a. What do neu- ral machine translation models learn about morphol- ogy? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 861-872. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Analyzing hidden representations in end-to-end automatic speech recognition systems", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "2441--2451", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and James Glass. 2017. Analyz- ing hidden representations in end-to-end automatic speech recognition systems. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vish- wanathan, and R. Garnett, editors, Advances in Neu- ral Information Processing Systems 30, pages 2441- 2451. 
Curran Associates, Inc.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Analysis methods in neural language processing: A survey", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "49--72", "other_ids": { "DOI": [ "10.1162/tacl_a_00254" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Belinkov, Llu\u00eds M\u00e0rquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017b. Evaluating layers of representation in neu- ral machine translation on part-of-speech and seman- tic tagging tasks. In Proceedings of the Eighth In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1-10, Taipei, Taiwan. 
Asian Federation of Natural Lan- guage Processing.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Intrinsic vs. extrinsic evaluation measures for referring expression generation", "authors": [ { "first": "Anja", "middle": [], "last": "Belz", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT, Short Papers", "volume": "", "issue": "", "pages": "197--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anja Belz and Albert Gatt. 2008. Intrinsic vs. extrinsic evaluation measures for referring expression gener- ation. In Proceedings of ACL-08: HLT, Short Pa- pers, pages 197-200, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The role of sentence structure in recognizing textual entailment", "authors": [ { "first": "Catherine", "middle": [], "last": "Blake", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, RTE '07", "volume": "", "issue": "", "pages": "101--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Catherine Blake. 2007. The role of sentence structure in recognizing textual entailment. In Proceedings of the ACL-PASCAL Workshop on Textual Entail- ment and Paraphrasing, RTE '07, pages 101-106, Stroudsburg, PA, USA. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Roi Reichart, and Anders S\u00f8gaard", "authors": [ { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/W17-53" ] }, "num": null, "urls": [], "raw_text": "Samuel Bowman, Yoav Goldberg, Felix Hill, Ange- liki Lazaridou, Omer Levy, Roi Reichart, and An- ders S\u00f8gaard, editors. 2017. Proceedings of the 2nd Workshop on Evaluating Vector Space Representa- tions for NLP. Association for Computational Lin- guistics, Copenhagen, Denmark.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The Structure of Modern English: A linguistic introduction", "authors": [ { "first": "L", "middle": [], "last": "Brinton", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Brinton. 2000. The Structure of Modern English: A linguistic introduction.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distributional semantics in technicolor", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Gemma", "middle": [], "last": "Boleda", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "136--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. 
In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136-145.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "e-snli: Natural language inference with natural language explanations", "authors": [ { "first": "Oana-Maria", "middle": [], "last": "Camburu", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lukasiewicz", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems 31", "volume": "", "issue": "", "pages": "9539--9549", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oana-Maria Camburu, Tim Rockt\u00e4schel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Nat- ural language inference with natural language expla- nations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, ed- itors, Advances in Neural Information Processing Systems 31, pages 9539-9549. Curran Associates, Inc.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Intrinsic evaluation of text mining tools may not predict performance on realistic tasks", "authors": [ { "first": "Nita", "middle": [], "last": "Gregory Caporaso", "suffix": "" }, { "first": "Lynn", "middle": [], "last": "Deshpande", "suffix": "" }, { "first": "Philip", "middle": [ "E" ], "last": "Fink", "suffix": "" }, { "first": "", "middle": [], "last": "Bourne", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Bretonnel Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Hunter", "suffix": "" } ], "year": 2008, "venue": "Biocomputing", "volume": "", "issue": "", "pages": "640--651", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Gregory Caporaso, Nita Deshpande, J Lynn Fink, Philip E Bourne, K Bretonnel Cohen, and Lawrence Hunter. 2008. 
Intrinsic evaluation of text mining tools may not predict performance on realistic tasks. In Biocomputing 2008, pages 640-651. World Sci- entific.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "An overview of natural language inference data collection: The way forward?", "authors": [ { "first": "Stergios", "middle": [], "last": "Chatzikyriakidis", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Cooper", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Computing Natural Language Inference Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stergios Chatzikyriakidis, Robin Cooper, Simon Dob- nik, and Staffan Larsson. 2017. An overview of nat- ural language inference data collection: The way for- ward? In Proceedings of the Computing Natural Language Inference Workshop.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural natural language inference models enhanced with external knowledge", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2406--2417", "other_ids": { "DOI": [ "10.18653/v1/P18-1224" ] }, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2406-2417, Melbourne, Aus- tralia. 
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Recurrent neural network-based sentence encoder with gated attention for natural language inference", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP", "volume": "", "issue": "", "pages": "36--40", "other_ids": { "DOI": [ "10.18653/v1/W17-5307" ] }, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Recurrent neural network-based sentence encoder with gated atten- tion for natural language inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, pages 36-40, Copenhagen, Denmark. Association for Computational Linguis- tics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Uncertain natural language inference", "authors": [ { "first": "Tongfei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhengping", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Keisuke", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2020, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, and Benjamin Van Durme. 2020. Uncer- tain natural language inference. 
In ACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Meaning and grammar: An introduction to semantics", "authors": [ { "first": "Gennaro", "middle": [], "last": "Chierchia", "suffix": "" }, { "first": "Sally", "middle": [], "last": "McConnell-Ginet", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gennaro Chierchia and Sally McConnell-Ginet. 2000. Meaning and grammar: An introduction to semantics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "MUC-3 linguistic phenomena test experiment", "authors": [ { "first": "Nancy", "middle": [], "last": "Chinchor", "suffix": "" } ], "year": 1991, "venue": "Third Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nancy Chinchor. 1991. MUC-3 linguistic phenomena test experiment. In Third Message Understanding Conference (MUC-3): Proceedings of a Conference Held in San Diego, California, May 21-23, 1991.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Evaluating message understanding systems: An analysis of the third message understanding conference (MUC-3)", "authors": [ { "first": "Nancy", "middle": [], "last": "Chinchor", "suffix": "" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" }, { "first": "David", "middle": [ "D" ], "last": "Lewis", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "3", "pages": "409--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nancy Chinchor, Lynette Hirschman, and David D. Lewis. 1993. Evaluating message understanding systems: An analysis of the third message understanding conference (MUC-3).
Computational Linguistics, 19(3):409-450.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Intrinsic evaluation of word vectors fails to predict extrinsic performance", "authors": [ { "first": "Billy", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP", "volume": "", "issue": "", "pages": "1--6", "other_ids": { "DOI": [ "10.18653/v1/W16-2501" ] }, "num": null, "urls": [], "raw_text": "Billy Chiu, Anna Korhonen, and Sampo Pyysalo. 2016. Intrinsic evaluation of word vectors fails to predict extrinsic performance. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 1-6, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Automatically extracting challenge sets for non-local phenomena in neural machine translation", "authors": [ { "first": "Leshem", "middle": [], "last": "Choshen", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "291--303", "other_ids": { "DOI": [ "10.18653/v1/K19-1028" ] }, "num": null, "urls": [], "raw_text": "Leshem Choshen and Omri Abend. 2019. Automatically extracting challenge sets for non-local phenomena in neural machine translation.
In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 291-303,", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "SentEval: An evaluation toolkit for universal sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Germ\u00e1n", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2126--2136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Germ\u00e1n Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties.
In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Proceedings of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization", "authors": [ { "first": "John", "middle": [ "M" ], "last": "Conroy", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "Karolina", "middle": [], "last": "Owczarzak", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John M. Conroy, Hoa Trang Dang, Ani Nenkova, and Karolina Owczarzak, editors. 2012. Proceedings of Workshop on Evaluation Metrics and System Com- parison for Automatic Summarization. Association for Computational Linguistics, Montr\u00e9al, Canada.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Using the framework", "authors": [ { "first": "Robin", "middle": [], "last": "Cooper", "suffix": "" }, { "first": "Dick", "middle": [], "last": "Crouch", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Van Eijck", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Fox", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Van Genabith", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Jaspars", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Kamp", "suffix": "" }, { "first": "David", "middle": [], "last": "Milward", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van 
Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al. 1996. Using the framework.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "The pascal recognising textual entailment challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment", "volume": "", "issue": "", "pages": "177--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine learning challenges. evalu- ating predictive uncertainty, visual object classifica- tion, and recognising tectual entailment, pages 177- 190. Springer.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Semi-supervised sequence learning", "authors": [ { "first": "M", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Dai", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3079--3087", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. 
In Advances in neural informa- tion processing systems, pages 3079-3087.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Evaluating compositionality in sentence embeddings", "authors": [ { "first": "Ishita", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Demi", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Stuhlm\u00fcller", "suffix": "" }, { "first": "J", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Noah D", "middle": [], "last": "Gershman", "suffix": "" }, { "first": "", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.04302" ] }, "num": null, "urls": [], "raw_text": "Ishita Dasgupta, Demi Guo, Andreas Stuhlm\u00fcller, Samuel J Gershman, and Noah D Goodman. 2018. Evaluating compositionality in sentence embed- dings. arXiv preprint arXiv:1802.04302.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Semeval-2013 task 7: The joint student response analysis and 8th recognizing textual entailment challenge", "authors": [ { "first": "Myroslava", "middle": [], "last": "Dzikovska", "suffix": "" }, { "first": "Rodney", "middle": [], "last": "Nielsen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brew", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", "volume": "2", "issue": "", "pages": "263--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myroslava Dzikovska, 
Rodney Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Ben- tivogli, Peter Clark, Ido Dagan, and Hoa Trang Dang. 2013. Semeval-2013 task 7: The joint student re- sponse analysis and 8th recognizing textual entail- ment challenge. In Second Joint Conference on Lex- ical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Work- shop on Semantic Evaluation (SemEval 2013), pages 263-274, Atlanta, Georgia, USA. Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Karen sparck jones & julia r. galliers, evaluating natural language processing systems: An analysis and review. lecture notes in artificial intelligence 1083", "authors": [ { "first": "Dominique", "middle": [ "Estival" ], "last": "", "suffix": "" } ], "year": 1997, "venue": "Machine Translation", "volume": "12", "issue": "4", "pages": "375--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominique Estival. 1997. Karen sparck jones & ju- lia r. galliers, evaluating natural language process- ing systems: An analysis and review. lecture notes in artificial intelligence 1083. Machine Translation, 12(4):375-379.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Assessing composition in sentence vector representations", "authors": [ { "first": "Allyson", "middle": [], "last": "Ettinger", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Elgohary", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Phillips", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1790--1801", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sen- tence vector representations. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 1790-1801, Santa Fe, New Mex- ico, USA. Association for Computational Linguis- tics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Ranking generated summaries by correctness: An interesting but challenging application for natural language inference", "authors": [ { "first": "Tobias", "middle": [], "last": "Falke", "suffix": "" }, { "first": "Leonardo", "middle": [ "F R" ], "last": "Ribeiro", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Prasetya Ajie Utama", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2214--2220", "other_ids": { "DOI": [ "10.18653/v1/P19-1213" ] }, "num": null, "urls": [], "raw_text": "Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An in- teresting but challenging application for natural lan- guage inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2214-2220, Florence, Italy. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Problems with evaluation of word embeddings using word similarity tasks", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP", "volume": "", "issue": "", "pages": "30--35", "other_ids": { "DOI": [ "10.18653/v1/W16-2506" ] }, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems with evaluation of word embeddings using word similarity tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 30- 35, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Letsum, an automatic legal text summarizing system", "authors": [ { "first": "Atefeh", "middle": [], "last": "Farzindar", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Lapalme", "suffix": "" } ], "year": 2004, "venue": "Legal knowledge and information systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atefeh Farzindar and Guy Lapalme. 2004. Letsum, an automatic legal text summarizing system. 
In Legal knowledge and information systems, JURIX.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Placing search in context: The concept revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Zach", "middle": [], "last": "Solan", "suffix": "" }, { "first": "Gadi", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "Eytan", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 10th international conference on World Wide Web", "volume": "", "issue": "", "pages": "406--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th inter- national conference on World Wide Web, pages 406- 414.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Question answering is a format", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11291" ] }, "num": null, "urls": [], "raw_text": "Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, and Sewon Min. 2019. Question an- swering is a format; when is it useful? 
arXiv preprint arXiv:1909.11291.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "The third PASCAL recognizing textual entailment challenge", "authors": [ { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague. Association for Computational Linguistics.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Proceedings of the MultiLing 2017 Workshop on Summarization and Summary Evaluation Across Source Types and Genres", "authors": [ { "first": "George", "middle": [], "last": "Giannakopoulos", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Lloret", "suffix": "" }, { "first": "John", "middle": [ "M" ], "last": "Conroy", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Steinberger", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/W17-10" ] }, "num": null, "urls": [], "raw_text": "George Giannakopoulos, Elena Lloret, John M. Conroy, Josef Steinberger, Marina Litvak, Peter Rankel, and Benoit Favre, editors. 2017. Proceedings of the MultiLing 2017 Workshop on Summarization and Summary Evaluation Across Source Types and Genres.
Association for Computational Linguistics, Valencia, Spain.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Applied textual entailment", "authors": [ { "first": "Oren", "middle": [], "last": "Glickman", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Glickman. 2006. Applied textual entailment. Ph.D. thesis, Bar Ilan University.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Breaking NLI systems with sentences that require simple lexical inferences", "authors": [ { "first": "Max", "middle": [], "last": "Glockner", "suffix": "" }, { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "650--655", "other_ids": {}, "num": null, "urls": [], "raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Association for Computational Linguistics", "authors": [], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jade Goldstein, Alon Lavie, Chin-Yew Lin, and Clare Voss, editors. 2005. Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization.
Association for Computational Linguistics, Ann Arbor, Michigan.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Natural language inference over interaction space", "authors": [ { "first": "Yichen", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yichen Gong, Heng Luo, and Jian Zhang. 2018. Natural language inference over interaction space. In International Conference on Learning Representations.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Annotation artifacts in natural language inference data", "authors": [ { "first": "Suchin", "middle": [], "last": "Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "107--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Designing and interpreting probes with control tasks", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2733--2743", "other_ids": { "DOI": [ "10.18653/v1/D19-1275" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2015, "venue": "Computational Linguistics", "volume": "41", "issue": "4", "pages": "665--695", "other_ids": { "DOI": [ "10.1162/COLI_a_00237" ] }, "num": null, "urls": [], "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation.
Computational Linguistics, 41(4):665-695.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure", "authors": [ { "first": "Dieuwke", "middle": [], "last": "Hupkes", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Veldhoen", "suffix": "" }, { "first": "Willem", "middle": [], "last": "Zuidema", "suffix": "" } ], "year": 2018, "venue": "Journal of Artificial Intelligence Research", "volume": "61", "issue": "", "pages": "907--926", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "A challenge set approach to evaluating machine translation", "authors": [ { "first": "Pierre", "middle": [], "last": "Isabelle", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2486--2496", "other_ids": { "DOI": [ "10.18653/v1/D17-1263" ] }, "num": null, "urls": [], "raw_text": "Pierre Isabelle, Colin Cherry, and George Foster. 2017. A challenge set approach to evaluating machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2486-2496, Copenhagen, Denmark.
Association for Computational Linguistics.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Montague semantics", "authors": [ { "first": "Theo", "middle": [ "M", "V" ], "last": "Janssen", "suffix": "" } ], "year": 2011, "venue": "The Stanford Encyclopedia of Philosophy", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theo M. V. Janssen. 2011. Montague semantics. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, winter 2011 edition. Metaphysics Research Lab, Stanford University.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Are natural language inference models IMPPRESsive? Learning IMPlicature and PRESupposition", "authors": [ { "first": "Paloma", "middle": [], "last": "Jeretic", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Suvrat", "middle": [], "last": "Bhooshan", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8690--8705", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.768" ] }, "num": null, "urls": [], "raw_text": "Paloma Jeretic, Alex Warstadt, Suvrat Bhooshan, and Adina Williams. 2020. Are natural language inference models IMPPRESsive? Learning IMPlicature and PRESupposition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8690-8705, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "SciTail: A textual entailment dataset from science question answering", "authors": [ { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In AAAI.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Senseval: an exercise in evaluating word sense disambiguation programs", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 1998, "venue": "First International Conference on language resources & evaluation: Granada, Spain", "volume": "", "issue": "", "pages": "581--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff. 1998. Senseval: an exercise in evaluating word sense disambiguation programs. In First International Conference on language resources & evaluation: Granada, Spain, 28-30 May 1998, pages 581-588.
European Language Resources Association.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Probing what different NLP tasks teach machines about function word comprehension", "authors": [ { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "McCoy", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)", "volume": "", "issue": "", "pages": "235--249", "other_ids": { "DOI": [ "10.18653/v1/S19-1026" ] }, "num": null, "urls": [], "raw_text": "Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 235-249, Minneapolis, Minnesota.
Association for Computational Linguistics.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Eagles: Evaluation of natural language processing systems", "authors": [ { "first": "Maghi", "middle": [], "last": "King", "suffix": "" }, { "first": "Maegaard", "middle": [], "last": "Bente", "suffix": "" }, { "first": "Sch\u00fctz", "middle": [], "last": "Jorg", "suffix": "" }, { "first": "Tombe", "middle": [], "last": "Louis Des", "suffix": "" }, { "first": "Annelise", "middle": [], "last": "Bech", "suffix": "" }, { "first": "Neville", "middle": [], "last": "Ann", "suffix": "" }, { "first": "Arppe", "middle": [], "last": "Antti", "suffix": "" }, { "first": "Balkan", "middle": [], "last": "Lorna", "suffix": "" }, { "first": "Brace", "middle": [], "last": "Colin", "suffix": "" }, { "first": "Bunt", "middle": [], "last": "Harry", "suffix": "" }, { "first": "Carlson", "middle": [], "last": "Lauri", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Shona", "suffix": "" }, { "first": "H\u00f6ge", "middle": [], "last": "Monika", "suffix": "" }, { "first": "Krauwer", "middle": [], "last": "Steven", "suffix": "" }, { "first": "Manzi", "middle": [], "last": "Sandra", "suffix": "" }, { "first": "Mazzi", "middle": [], "last": "Cristina", "suffix": "" }, { "first": "Ann", "middle": [], "last": "June", "suffix": "" }, { "first": "Siele-Mann", "middle": [], "last": "Ragna", "suffix": "" }, { "first": "Steenbakkers", "middle": [], "last": "", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maghi King, Bente MAEGAARD, Jorg SCH\u00dcTZ, Louis des TOMBE, Annelise BECH, Ann NEVILLE, Antti ARPPE, Lorna BALKAN, Colin BRACE, Harry BUNT, Lauri CARLSON, Shona DOUGLAS, Monika H\u00d6GE, Steven KRAUWER, Sandra MANZI, Cristina MAZZI, Ann June SIELEMANN, and Ragna STEENBAKKERS. 1995. Eagles: Evaluation of natural language processing systems. Final report.
Technical report.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Using test suites in evaluation of machine translation systems", "authors": [ { "first": "Margaret", "middle": [], "last": "King", "suffix": "" }, { "first": "Kirsten", "middle": [], "last": "Falkedal", "suffix": "" } ], "year": 1990, "venue": "Papers presented to the 13th International Conference on Computational Linguistics", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret King and Kirsten Falkedal. 1990. Using test suites in evaluation of machine translation systems. In COLING 1990 Volume 2: Papers presented to the 13th International Conference on Computational Linguistics.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Temporal and aspectual entailment", "authors": [ { "first": "Thomas", "middle": [], "last": "Kober", "suffix": "" }, { "first": "Sander", "middle": [], "last": "Bijl de Vroe", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Conference on Computational Semantics - Long Papers", "volume": "", "issue": "", "pages": "103--119", "other_ids": { "DOI": [ "10.18653/v1/W19-0409" ] }, "num": null, "urls": [], "raw_text": "Thomas Kober, Sander Bijl de Vroe, and Mark Steedman. 2019. Temporal and aspectual entailment. In Proceedings of the 13th International Conference on Computational Semantics - Long Papers, pages 103-119, Gothenburg, Sweden.
Association for Computational Linguistics.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "A test suite for evaluation of English-to-Korean machine translation systems", "authors": [ { "first": "Sungryong", "middle": [], "last": "Koh", "suffix": "" }, { "first": "Jinee", "middle": [], "last": "Maeng", "suffix": "" }, { "first": "Ji-Young", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Young-Sook", "middle": [], "last": "Chae", "suffix": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2001, "venue": "MT Summit conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sungryong Koh, Jinee Maeng, Ji-Young Lee, Young-Sook Chae, and Key-Sun Choi. 2001. A test suite for evaluation of English-to-Korean machine translation systems. In MT Summit conference, Santiago de Compostela.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "On evaluation of natural language processing tasks", "authors": [ { "first": "Vojt\u011bch", "middle": [], "last": "Kov\u00e1\u0159", "suffix": "" }, { "first": "Milo\u0161", "middle": [], "last": "Jakub\u00ed\u010dek", "suffix": "" }, { "first": "Ale\u0161", "middle": [], "last": "Hor\u00e1k", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 8th International Conference on Agents and Artificial Intelligence", "volume": "", "issue": "", "pages": "540--545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vojt\u011bch Kov\u00e1\u0159, Milo\u0161 Jakub\u00ed\u010dek, and Ale\u0161 Hor\u00e1k. 2016. On evaluation of natural language processing tasks. In Proceedings of the 8th International Conference on Agents and Artificial Intelligence, pages 540-545.
SCITEPRESS-Science and Technology Publications, Lda.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "TSNLP - test suites for natural language processing", "authors": [ { "first": "Sabine", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Sylvie", "middle": [], "last": "Regnier-Prost", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Netter", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Lux", "suffix": "" }, { "first": "Judith", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Kirsten", "middle": [], "last": "Falkedal", "suffix": "" }, { "first": "Frederik", "middle": [], "last": "Fouvry", "suffix": "" }, { "first": "Dominique", "middle": [], "last": "Estival", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Dauphin", "suffix": "" }, { "first": "Herve", "middle": [], "last": "Compagnion", "suffix": "" }, { "first": "Judith", "middle": [], "last": "Baur", "suffix": "" }, { "first": "Lorna", "middle": [], "last": "Balkan", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Arnold", "suffix": "" } ], "year": 1996, "venue": "The 16th International Conference on Computational Linguistics", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Lehmann, Stephan Oepen, Sylvie Regnier-Prost, Klaus Netter, Veronika Lux, Judith Klein, Kirsten Falkedal, Frederik Fouvry, Dominique Estival, Eva Dauphin, Herve Compagnion, Judith Baur, Lorna Balkan, and Doug Arnold. 1996. TSNLP - test suites for natural language processing.
In COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/W16-25" ] }, "num": null, "urls": [], "raw_text": "Omer Levy, Felix Hill, Anna Korhonen, Kyunghyun Cho, Roi Reichart, Yoav Goldberg, and Antoine Bordes, editors. 2016. Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP. Association for Computational Linguistics, Berlin, Germany.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. 
Association for Computational Linguistics", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Chrupa\u0142a", "suffix": "" }, { "first": "Afra", "middle": [], "last": "Alishahi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Linzen, Grzegorz Chrupa\u0142a, and Afra Alishahi, editors. 2018. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, Brussels, Belgium.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Chrupa\u0142a", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Dieuwke", "middle": [], "last": "Hupkes", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Linzen, Grzegorz Chrupa\u0142a, Yonatan Belinkov, and Dieuwke Hupkes, editors. 2019. Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. 
Association for Computational Linguistics, Florence, Italy.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Learning natural language inference using bidirectional LSTM model and inner-attention", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chengjie", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.09090" ] }, "num": null, "urls": [], "raw_text": "Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016. Learning natural language inference using bidirectional LSTM model and inner-attention. arXiv preprint arXiv:1605.09090.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Suitability of ParTes test suite for parsing evaluation", "authors": [ { "first": "Marina", "middle": [], "last": "Lloberes", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Castell\u00f3n", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "Padr\u00f3", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 14th International Conference on Parsing Technologies", "volume": "", "issue": "", "pages": "61--65", "other_ids": { "DOI": [ "10.18653/v1/W15-2207" ] }, "num": null, "urls": [], "raw_text": "Marina Lloberes, Irene Castell\u00f3n, and Llu\u00eds Padr\u00f3. 2015. Suitability of ParTes test suite for parsing evaluation. In Proceedings of the 14th International Conference on Parsing Technologies, pages 61-65, Bilbao, Spain.
Association for Computational Linguistics.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Better word representations with recursive neural networks for morphology", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "104--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104-113, Sofia, Bulgaria. Association for Computational Linguistics.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Natural language inference", "authors": [ { "first": "Bill", "middle": [], "last": "MacCartney", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill MacCartney. 2009. Natural language inference. Ph.D. thesis, Stanford University.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Local textual inference: it's hard to circumscribe, but you know it when you see it - and NLP needs it", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D Manning. 2006. 
Local textual inference: it's hard to circumscribe, but you know it when you see it - and NLP needs it.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Computational linguistics and deep learning", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Computational Linguistics", "volume": "41", "issue": "4", "pages": "701--707", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D Manning. 2015. Computational linguistics and deep learning. Computational Linguistics, 41(4):701-707.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Finding contradictions in text", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Anna", "middle": [ "N" ], "last": "Rafferty", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "1039--1047", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Anna N. Rafferty, and Christopher D. Manning. 2008. Finding contradictions in text. In Proceedings of ACL-08: HLT, pages 1039-1047, Columbus, Ohio. 
Association for Computational Linguistics.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "The natural language decathlon", "authors": [ { "first": "Bryan", "middle": [], "last": "McCann", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Shirish Keskar", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "Multitask learning as question answering", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.08730" ] }, "num": null, "urls": [], "raw_text": "Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "Performance evaluation of word and sentence embeddings for finance headlines sentiment analysis", "authors": [ { "first": "Kostadin", "middle": [], "last": "Mishev", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Gjorgjevikj", "suffix": "" }, { "first": "Riste", "middle": [], "last": "Stojanov", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Mishkovski", "suffix": "" }, { "first": "Irena", "middle": [], "last": "Vodenska", "suffix": "" }, { "first": "Ljubomir", "middle": [], "last": "Chitkushev", "suffix": "" }, { "first": "Dimitar", "middle": [], "last": "Trajanov", "suffix": "" } ], "year": 2019, "venue": "ICT Innovations 2019. Big Data Processing and Mining", "volume": "", "issue": "", "pages": "161--172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kostadin Mishev, Ana Gjorgjevikj, Riste Stojanov, Igor Mishkovski, Irena Vodenska, Ljubomir Chitkushev, and Dimitar Trajanov. 2019. Performance evaluation of word and sentence embeddings for finance headlines sentiment analysis. In ICT Innovations 2019. Big Data Processing and Mining, pages 161-172, Cham. 
Springer International Publishing.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Intrinsic versus extrinsic evaluations of parsing systems", "authors": [ { "first": "Diego", "middle": [], "last": "Moll\u00e1", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing: are evaluation methods, metrics and resources reusable?", "volume": "", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Moll\u00e1 and Ben Hutchinson. 2003. Intrinsic versus extrinsic evaluations of parsing systems. In Proceedings of the EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing: are evaluation methods, metrics and resources reusable?, pages 43-50, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "Compositional explanations of neurons", "authors": [ { "first": "Jesse", "middle": [], "last": "Mu", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" } ], "year": 2020, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesse Mu and Jacob Andreas. 2020. Compositional explanations of neurons. 
In Advances in Neural Information Processing Systems 33 (NeurIPS).", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Stress test evaluation for natural language inference", "authors": [ { "first": "Aakanksha", "middle": [], "last": "Naik", "suffix": "" }, { "first": "Abhilasha", "middle": [], "last": "Ravichander", "suffix": "" }, { "first": "Norman", "middle": [], "last": "Sadeh", "suffix": "" }, { "first": "Carolyn", "middle": [], "last": "Rose", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2340--2353", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Evaluating word embeddings using a representative suite of practical tasks", "authors": [ { "first": "Neha", "middle": [], "last": "Nayak", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 1st workshop on evaluating vector-space representations for nlp", "volume": "", "issue": "", "pages": "19--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neha Nayak, Gabor Angeli, and Christopher D Manning. 2016. Evaluating word embeddings using a representative suite of practical tasks. 
In Proceedings of the 1st workshop on evaluating vector-space representations for nlp, pages 19-23.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Adversarial NLI: A new benchmark for natural language understanding", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4885--4901", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.441" ] }, "num": null, "urls": [], "raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Tsnlp -test suites for natural language processing", "authors": [ { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Netter", "suffix": "" } ], "year": 1995, "venue": "Linguistic Databases", "volume": "", "issue": "", "pages": "711--716", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Oepen and Klaus Netter. 1995. Tsnlp - test suites for natural language processing. In J. Nerbonne (Ed.), Linguistic Databases (pp. 13-36), pages 711-716.
CSLI Publications.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Workshop on the evaluation of natural language processing systems", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Finin", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "3", "pages": "175--181", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer and Tim Finin. 1990. Workshop on the evaluation of natural language processing systems. Computational Linguistics, 16(3):175-181.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "Principles of evaluation in natural language processing", "authors": [ { "first": "Patrick", "middle": [], "last": "Paroubek", "suffix": "" }, { "first": "St\u00e9phane", "middle": [], "last": "Chaudiron", "suffix": "" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 2007, "venue": "Traitement Automatique des Langues", "volume": "48", "issue": "1", "pages": "7--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Paroubek, St\u00e9phane Chaudiron, and Lynette Hirschman. 2007. Principles of evaluation in natural language processing. Traitement Automatique des Langues, 48(1):7-31.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Proceedings of the EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing: are evaluation methods, metrics and resources reusable? Association for Computational Linguistics", "authors": [], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katerina Pastra, editor. 2003. Proceedings of the EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing: are evaluation methods, metrics and resources reusable? Association for Computational Linguistics, Columbus, Ohio.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "Compositional Lexical Entailment for Natural Language Inference", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellie Pavlick. 2017. Compositional Lexical Entailment for Natural Language Inference. Ph.D.
thesis, University of Pennsylvania.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Most \"babies\" are \"little\" and most \"problems\" are \"huge\": Compositional entailment in adjective-nouns", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2164--2173", "other_ids": { "DOI": [ "10.18653/v1/P16-1204" ] }, "num": null, "urls": [], "raw_text": "Ellie Pavlick and Chris Callison-Burch. 2016. Most \"babies\" are \"little\" and most \"problems\" are \"huge\": Compositional entailment in adjective-nouns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2164-2173. Association for Computational Linguistics.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Inherent disagreements in human textual inferences", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "677--694", "other_ids": { "DOI": [ "10.1162/tacl_a_00293" ] }, "num": null, "urls": [], "raw_text": "Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences.
Transactions of the Association for Computational Linguistics, 7:677-694.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "Framenet+: Fast paraphrastic tripling of framenet", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Travis", "middle": [], "last": "Wolfe", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "408--413", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellie Pavlick, Travis Wolfe, Pushpendre Rastogi, Chris Callison-Burch, Mark Dredze, and Benjamin Van Durme. 2015. Framenet+: Fast paraphrastic tripling of framenet. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 408-413, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Word embeddings in sentiment analysis", "authors": [ { "first": "Ruggero", "middle": [], "last": "Petrolito", "suffix": "" } ], "year": 2018, "venue": "Italian Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruggero Petrolito. 2018. Word embeddings in sentiment analysis.
In Italian Conference on Computational Linguistics.", "links": null }, "BIBREF96": { "ref_id": "b96", "title": "Information-theoretic probing for linguistic structure", "authors": [ { "first": "Tiago", "middle": [], "last": "Pimentel", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Valvoda", "suffix": "" }, { "first": "Rowan", "middle": [], "last": "Hall Maudslay", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Zmigrod", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4609--4622", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.420" ] }, "num": null, "urls": [], "raw_text": "Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. Information-theoretic probing for linguistic structure. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4609-4622, Online. Association for Computational Linguistics.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Automatic evaluation of linguistic quality in multidocument summarization", "authors": [ { "first": "Emily", "middle": [], "last": "Pitler", "suffix": "" }, { "first": "Annie", "middle": [], "last": "Louis", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "544--554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2010. Automatic evaluation of linguistic quality in multidocument summarization.
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 544-554, Uppsala, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "On the evaluation of semantic phenomena in neural machine translation using natural language inference", "authors": [ { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "513--523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Poliak, Yonatan Belinkov, James Glass, and Benjamin Van Durme. 2018a. On the evaluation of semantic phenomena in neural machine translation using natural language inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 513-523, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "Collecting diverse natural language inference problems for sentence representation evaluation", "authors": [ { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Aparajita", "middle": [], "last": "Haldar", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "J", "middle": [ "Edward" ], "last": "Hu", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Aaron", "middle": [ "Steven" ], "last": "White", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "67--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018b. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 67-81.
Association for Computational Linguistics.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Hypothesis only baselines in natural language inference", "authors": [ { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Naradowsky", "suffix": "" }, { "first": "Aparajita", "middle": [], "last": "Haldar", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "180--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018c. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180-191. Association for Computational Linguistics.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "Challenge test sets for MT evaluation", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" }, { "first": "Sheila", "middle": [], "last": "Castilho", "suffix": "" } ], "year": 2019, "venue": "In Proceedings of Machine Translation Summit XVII", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107 and Sheila Castilho. 2019. Challenge test sets for MT evaluation. In Proceedings of Machine Translation Summit XVII Volume 3: Tutorial Abstracts, Dublin, Ireland.
European Association for Machine Translation.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "Natural language inference via dependency tree mapping: An application to question answering", "authors": [ { "first": "Vasin", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2004. Natural language inference via dependency tree mapping: An application to question answering. Technical report.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "Revisiting correlations between intrinsic and extrinsic evaluations of word embeddings", "authors": [ { "first": "Yuanyuan", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Hongzheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yingdi", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Renfen", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Lijiao", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data", "volume": "", "issue": "", "pages": "209--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuanyuan Qiu, Hongzheng Li, Shen Li, Yingdi Jiang, Renfen Hu, and Lijiao Yang. 2018. Revisiting correlations between intrinsic and extrinsic evaluations of word embeddings. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pages 209-221.
Springer.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "A structured review of the validity of bleu", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 2018, "venue": "Computational Linguistics", "volume": "44", "issue": "3", "pages": "393--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter. 2018. A structured review of the validity of bleu. Computational Linguistics, 44(3):393-401.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "11 evaluation of nlp systems. The handbook of computational linguistics and natural language processing", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and Jimmy Lin. 2010. 11 evaluation of nlp systems. The handbook of computational linguistics and natural language processing, 57.", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "Using intrinsic and extrinsic metrics to evaluate accuracy and facilitation in computer-assisted coding", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Niv", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Nossal", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Schnitzer", "suffix": "" } ], "year": 2006, "venue": "Perspectives in Health Information Management Computer Assisted Coding Conference Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik, Michael Niv, Michael Nossal, and Gregory Schnitzer. 2006. Using intrinsic and extrinsic metrics to evaluate accuracy and facilitation in computer-assisted coding.
In Perspectives in Health Information Management Computer Assisted Coding Conference Proceedings.", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "Probing natural language inference models through semantic fragments", "authors": [], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyle Richardson, Hai Na Hu, Lawrence S. Moss, and Ashish Sabharwal. 2020. Probing natural language inference models through semantic fragments. In AAAI, volume abs/1909.07521.", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP. Association for Computational Linguistics", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Aleksandr", "middle": [], "last": "Drozd", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Rogers, Aleksandr Drozd, Anna Rumshisky, and Yoav Goldberg, editors. 2019. Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP. Association for Computational Linguistics, Minneapolis, USA.", "links": null }, "BIBREF109": { "ref_id": "b109", "title": "Lessons from natural language inference in the clinical domain", "authors": [ { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Shivade", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1586--1596", "other_ids": { "DOI": [ "10.18653/v1/D18-1187" ] }, "num": null, "urls": [], "raw_text": "Alexey Romanov and Chaitanya Shivade. 2018.
Lessons from natural language inference in the clinical domain. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1586-1596, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF110": { "ref_id": "b110", "title": "How well do NLI models capture verb veridicality?", "authors": [ { "first": "Alexis", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2230--2240", "other_ids": { "DOI": [ "10.18653/v1/D19-1228" ] }, "num": null, "urls": [], "raw_text": "Alexis Ross and Ellie Pavlick. 2019. How well do NLI models capture verb veridicality? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2230-2240, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF111": { "ref_id": "b111", "title": "Social bias in elicited natural language inferences", "authors": [ { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Chandler", "middle": [], "last": "May", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing", "volume": "", "issue": "", "pages": "74--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74-79, Valencia, Spain.
Association for Computational Linguistics.", "links": null }, "BIBREF112": { "ref_id": "b112", "title": "ask not what textual entailment can do for you", "authors": [ { "first": "Mark", "middle": [], "last": "Sammons", "suffix": "" }, { "first": "V", "middle": [ "G" ], "last": "Vinod Vydiswaran", "suffix": "" }, { "first": "Dan", "middle": [ "Roth" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1199--1208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Sammons, V.G.Vinod Vydiswaran, and Dan Roth. 2010. \"ask not what textual entailment can do for you...\". In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1199-1208, Uppsala, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF113": { "ref_id": "b113", "title": "A deductive question-answerer for natural language inference", "authors": [ { "first": "M", "middle": [], "last": "Robert", "suffix": "" }, { "first": "John", "middle": [ "F" ], "last": "Schwarcz", "suffix": "" }, { "first": "Robert F", "middle": [], "last": "Burger", "suffix": "" }, { "first": "", "middle": [], "last": "Simmons", "suffix": "" } ], "year": 1970, "venue": "Communications of the ACM", "volume": "13", "issue": "3", "pages": "167--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert M Schwarcz, John F Burger, and Robert F Simmons. 1970. A deductive question-answerer for natural language inference. Communications of the ACM, 13(3):167-183.", "links": null }, "BIBREF114": { "ref_id": "b114", "title": "Western Linguistics: An Historical Introduction", "authors": [ { "first": "P", "middle": [ "A M" ], "last": "Seuren", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1002/9781444307467" ] }, "num": null, "urls": [], "raw_text": "P.A.M. Seuren.
1998. Western Linguistics: An Historical Introduction.", "links": null }, "BIBREF115": { "ref_id": "b115", "title": "Towards better NLP system evaluation", "authors": [ { "first": "Karen Sparck", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1994, "venue": "Human Language Technology: Proceedings of a Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Sparck Jones. 1994. Towards better NLP system evaluation. In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994.", "links": null }, "BIBREF116": { "ref_id": "b116", "title": "Evaluating Natural Language Processing Systems: An Analysis and Review", "authors": [ { "first": "Karen Sparck Jones", "middle": [], "last": "", "suffix": "" }, { "first": "Julia", "middle": [ "R" ], "last": "Galliers", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Sparck Jones and Julia R. Galliers. 1996. Evaluating Natural Language Processing Systems: An Analysis and Review. Springer-Verlag, Berlin, Heidelberg.", "links": null }, "BIBREF117": { "ref_id": "b117", "title": "Learning about non-veridicality in textual entailment", "authors": [ { "first": "Ieva", "middle": [], "last": "Stali\u016bnait\u0117", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ieva Stali\u016bnait\u0117. 2018. Learning about non-veridicality in textual entailment.
Master's thesis, Utrecht University.", "links": null }, "BIBREF118": { "ref_id": "b118", "title": "Compare, compress and propagate: Enhancing neural architectures with alignment factorization for natural language inference", "authors": [ { "first": "Yi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Anh", "middle": [ "Tuan" ], "last": "Luu", "suffix": "" }, { "first": "Siu Cheung", "middle": [], "last": "Hui", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1565--1575", "other_ids": { "DOI": [ "10.18653/v1/D18-1185" ] }, "num": null, "urls": [], "raw_text": "Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018. Compare, compress and propagate: Enhancing neural architectures with alignment factorization for natural language inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1565-1575, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF119": { "ref_id": "b119", "title": "SWOW-8500: Word association task for intrinsic evaluation of word embeddings", "authors": [ { "first": "Avijit", "middle": [], "last": "Thawani", "suffix": "" }, { "first": "Biplav", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Anil", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP", "volume": "", "issue": "", "pages": "43--51", "other_ids": { "DOI": [ "10.18653/v1/W19-2006" ] }, "num": null, "urls": [], "raw_text": "Avijit Thawani, Biplav Srivastava, and Anil Singh. 2019. SWOW-8500: Word association task for intrinsic evaluation of word embeddings. In Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP, pages 43-51, Minneapolis, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF120": { "ref_id": "b120", "title": "Performance impact caused by hidden bias of training data for recognizing textual entailment", "authors": [ { "first": "Masatoshi", "middle": [], "last": "Tsuchiya", "suffix": "" } ], "year": 2018, "venue": "11th International Conference on Language Resources and Evaluation (LREC2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recognizing textual entailment. In 11th International Conference on Language Resources and Evaluation (LREC2018).", "links": null }, "BIBREF121": { "ref_id": "b121", "title": "Evaluation of word vector representations by subspace alignment", "authors": [ { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2049--2054", "other_ids": { "DOI": [ "10.18653/v1/D15-1243" ] }, "num": null, "urls": [], "raw_text": "Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2049-2054, Lisbon, Portugal.
Association for Computational Linguistics.", "links": null }, "BIBREF122": { "ref_id": "b122", "title": "What syntax can contribute in the entailment task", "authors": [ { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "", "middle": [], "last": "William B Dolan", "suffix": "" } ], "year": 2006, "venue": "Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment", "volume": "", "issue": "", "pages": "205--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucy Vanderwende and William B Dolan. 2006. What syntax can contribute in the entailment task. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pages 205-216. Springer.", "links": null }, "BIBREF123": { "ref_id": "b123", "title": "Temporal reasoning in natural language inference", "authors": [ { "first": "Siddharth", "middle": [], "last": "Vashishtha", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Yash", "middle": [], "last": "Kumar Lal", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Aaron", "middle": [ "Steven" ], "last": "White", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Findings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharth Vashishtha, Adam Poliak, Yash Kumar Lal, Benjamin Van Durme, and Aaron Steven White. 2020. Temporal reasoning in natural language inference.
In Proceedings of the Findings of EMNLP.", "links": null }, "BIBREF124": { "ref_id": "b124", "title": "Information-theoretic probing with minimum description length", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Voita and Ivan Titov. 2020. Information-theoretic probing with minimum description length.", "links": null }, "BIBREF125": { "ref_id": "b125", "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3261--3275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems.
In Advances in Neural Information Processing Systems, pages 3261-3275.", "links": null }, "BIBREF126": { "ref_id": "b126", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.07461" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.", "links": null }, "BIBREF127": { "ref_id": "b127", "title": "Evaluating word embedding models: Methods and experimental results. APSIPA transactions on signal and information processing", "authors": [ { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Fenxiao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yuncheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "C-C Jay", "middle": [], "last": "Kuo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bin Wang, Angela Wang, Fenxiao Chen, Yuncheng Wang, and C-C Jay Kuo. 2019b. Evaluating word embedding models: Methods and experimental results.
APSIPA transactions on signal and information processing, 8.", "links": null }, "BIBREF128": { "ref_id": "b128", "title": "Inference is everything: Recasting semantic resources into a unified evaluation framework", "authors": [ { "first": "Aaron", "middle": [], "last": "Steven White", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "996--1005", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is everything: Recasting semantic resources into a unified evaluation framework. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 996-1005, Taipei, Taiwan. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF129": { "ref_id": "b129", "title": "A preferential, pattern-seeking, semantics for natural language inference. Artificial intelligence", "authors": [ { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1975, "venue": "", "volume": "6", "issue": "", "pages": "53--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yorick Wilks. 1975. A preferential, pattern-seeking, semantics for natural language inference.
Artificial intelligence, 6(1):53-74.", "links": null }, "BIBREF130": { "ref_id": "b130", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.05426" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.", "links": null }, "BIBREF131": { "ref_id": "b131", "title": "A study of neural word embeddings for named entity recognition in clinical text", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Min", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Yaoyun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "AMIA Annual Symposium Proceedings", "volume": "2015", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonghui Wu, Jun Xu, Min Jiang, Yaoyun Zhang, and Hua Xu. 2015. A study of neural word embeddings for named entity recognition in clinical text. In AMIA Annual Symposium Proceedings, volume 2015, page 1326.
American Medical Informatics Association.", "links": null }, "BIBREF132": { "ref_id": "b132", "title": "Do neural models learn systematicity of monotonicity inference in natural language", "authors": [ { "first": "Hitomi", "middle": [], "last": "Yanaka", "suffix": "" }, { "first": "Koji", "middle": [], "last": "Mineshima", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Bekki", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2020, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, and Kentaro Inui. 2020. Do neural models learn systematicity of monotonicity inference in natural language? In ACL.", "links": null }, "BIBREF133": { "ref_id": "b133", "title": "Can neural networks understand monotonicity reasoning?", "authors": [ { "first": "Hitomi", "middle": [], "last": "Yanaka", "suffix": "" }, { "first": "Koji", "middle": [], "last": "Mineshima", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Bekki", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Lasha", "middle": [], "last": "Abzianidze", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "31--40", "other_ids": { "DOI": [ "10.18653/v1/W19-4804" ] }, "num": null, "urls": [], "raw_text": "Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019. Can neural networks understand monotonicity reasoning? In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 31-40, Florence, Italy.
Association for Computational Linguistics.", "links": null }, "BIBREF134": { "ref_id": "b134", "title": "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions", "authors": [ { "first": "Peter", "middle": [], "last": "Young", "suffix": "" }, { "first": "Alice", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Micah", "middle": [], "last": "Hodosh", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "67--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.", "links": null }, "BIBREF135": { "ref_id": "b135", "title": "Local textual inference: Can it be defined or circumscribed?", "authors": [ { "first": "Annie", "middle": [], "last": "Zaenen", "suffix": "" }, { "first": "Lauri", "middle": [], "last": "Karttunen", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Crouch", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment", "volume": "", "issue": "", "pages": "31--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annie Zaenen, Lauri Karttunen, and Richard Crouch. 2005. Local textual inference: Can it be defined or circumscribed? In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, pages 31-36, Ann Arbor, Michigan.
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "to thoroughly study the representations learned by downstream, applied NLP systems. The increasing number of RTE datasets focused on different phenomena can help researchers use one standard format to analyze how well models capture different phenomena, and in turn answer Sammons et al. (2010)'s challenge to make RTE \"a central component of evaluation for relevant NLP tasks.\"", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "The inhabitants of Cambridge voted for a Labour MP. Q Did every inhabitant of Cambridge vote for a Labour MP? H Every inhabitant of Cambridge voted for a Labour MP. A Unknown COMPARATIVES (243) P ITEL sold 3000 more computers than APCOM. APCOM sold exactly 2500 computers. Q Did ITEL sell 5500 computers? H ITEL sold 5500 computers. A Yes Table 4: Examples from FraCas: P represents the premise(s), Q represents the question from FraCas, H represents the declarative statement MacCartney (2009) created, and A represents the label. The number in the parenthesis indicates the example ID from FraCas.", "num": null }, "TABREF0": { "html": null, "text": "(in the appendix) contains examples from FraCas. In total, FraCas only contains about 350 labeled examples, potentially limiting the ability to generalize how well models capture these phenomena. Additionally, the limited number of examples in FraCas Kessler's team conducted 60,643 interviews with adults in 14 countries Kessler's team interviewed more than 60,000 adults in 14 countries entailed Capital punishment is a catalyst for more crime Capital punishment is a deterrent to crime not-entailed Boris Becker is a former professional tennis player for Germany
Boris Becker is a Wimbledon champion | not-entailed |