{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:22:23.359166Z"
},
"title": "Improving Distantly Supervised Document-Level Relation Extraction Through Natural Language Inference",
"authors": [
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Grace",
"middle": [
"E"
],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Andrea",
"middle": [],
"last": "Pierleoni",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The distant supervision (DS) paradigm has been widely used for relation extraction (RE) to alleviate the need for expensive annotations. However, it suffers from noisy labels, which leads to worse performance than models trained on human-annotated data, even when trained using hundreds of times more data. We present a systematic study on the use of natural language inference (NLI) to improve distantly supervised document-level RE. We apply NLI in three scenarios: (i) as a filter for denoising DS labels, (ii) as a filter for model prediction, and (iii) as a standalone RE model. Our results show that NLI filtering consistently improves performance, reducing the performance gap with a model trained on human-annotated data by 2.3 F1. * Work completed at Amazon Alexa. The author now works at Thomson Reuters. 1 According to Yao et al. (2019), at least 40.7% of facts in Wikipedia can only be extracted from multiple sentences.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "The distant supervision (DS) paradigm has been widely used for relation extraction (RE) to alleviate the need for expensive annotations. However, it suffers from noisy labels, which leads to worse performance than models trained on human-annotated data, even when trained using hundreds of times more data. We present a systematic study on the use of natural language inference (NLI) to improve distantly supervised document-level RE. We apply NLI in three scenarios: (i) as a filter for denoising DS labels, (ii) as a filter for model prediction, and (iii) as a standalone RE model. Our results show that NLI filtering consistently improves performance, reducing the performance gap with a model trained on human-annotated data by 2.3 F1. * Work completed at Amazon Alexa. The author now works at Thomson Reuters. 1 According to Yao et al. (2019), at least 40.7% of facts in Wikipedia can only be extracted from multiple sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation extraction (RE) is the task of identifying relations between two entities in natural language text. It has an important role in many NLP applications, such as knowledge base population and question answering. Existing work on RE has focused mostly on extraction within a sentence (Mintz et al., 2009; Zhang et al., 2017; Han et al., 2018). However, sentence-level RE has one major limitation: it is not designed to extract relational facts expressed in multiple sentences. 1 To address this, recent work has explored models which use document-level context to extract both intra- and inter-sentence relations from text (Li et al., 2020; Xu et al., 2021; Eberts and Ulges, 2021). Currently, high-performance RE models require large-scale human-annotated data, which is expensive and does not scale to a large number of relations or new domains. To reduce the reliance on human-annotated data, Mintz et al. (2009) introduce the distant supervision (DS) approach, which assumes that if two entities are connected through a relation in a knowledge base, sentences that mention the two entities express that relation. While this assumption allows the creation of large-scale training data without expensive human annotation, it also produces many noisy labels (Riedel et al., 2010). 2 As a result, the performance of models trained on DS datasets is considerably lower (\u223c5%) than that of models trained on human-annotated datasets.",
"cite_spans": [
{
"start": 294,
"end": 314,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF6"
},
{
"start": 315,
"end": 334,
"text": "Zhang et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 335,
"end": 351,
"text": "Han et al., 2018",
"ref_id": "BIBREF3"
},
{
"start": 488,
"end": 489,
"text": "1",
"ref_id": null
},
{
"start": 633,
"end": 650,
"text": "(Li et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 651,
"end": 667,
"text": "Xu et al., 2021;",
"ref_id": "BIBREF16"
},
{
"start": 668,
"end": 691,
"text": "Eberts and Ulges, 2021)",
"ref_id": "BIBREF2"
},
{
"start": 905,
"end": 924,
"text": "Mintz et al. (2009)",
"ref_id": "BIBREF6"
},
{
"start": 1268,
"end": 1288,
"text": "(Riedel et al., 2010",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper aims to reduce the performance gap between models trained on DS versus annotated data through natural language inference (NLI). NLI, also known as textual entailment, is the task of determining whether a premise entails a hypothesis. Recently, Sainz et al. (2021) used an NLI model as a standalone RE model and demonstrated its effectiveness for zero-shot and few-shot sentence-level RE. In line with their work, we investigate whether NLI can also benefit document-level RE. Specifically, we apply NLI to document-level RE in three scenarios: (i) as a filter for denoising DS labels, (ii) as a filter for model predictions, and (iii) as a standalone RE model. We experiment with DocRED (Yao et al., 2019), the largest document-level RE dataset to date. It consists of both DS and human-annotated datasets, which is ideal for our study. Across all scenarios, we find that NLI is especially effective when it is used as a filter; we observe improvements of up to 2.3 F1, reducing the gap with a model trained on annotated data from 5.3 to 3.0 F1. However, the gains are less significant when the model has access to human-annotated data. Finally, we highlight the importance of having high-quality entity type information when using NLI as a standalone RE model.",
"cite_spans": [
{
"start": 132,
"end": 137,
"text": "(NLI)",
"ref_id": null
},
{
"start": 256,
"end": 275,
"text": "Sainz et al. (2021)",
"ref_id": "BIBREF13"
},
{
"start": 708,
"end": 725,
"text": "(Yao et al., 2019",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first describe the approach of Sainz et al. (2021), which uses an NLI model as a standalone model for sentence-level RE.",
"cite_spans": [
{
"start": 34,
"end": 53,
"text": "Sainz et al. (2021)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for RE",
"sec_num": "2"
},
{
"text": "Let p be an input text containing two entity mentions m_1 and m_2. We take p as the premise and generate the hypothesis h by verbalizing each relation r using a template t, m_1, and m_2. For example, the relation \"capital of\" can be verbalized using the template \"{m_1} is the capital of {m_2}\". One relation can be verbalized using multiple templates, leading to multiple hypotheses. To avoid mismatches between the entity types and the relation, a set of allowed types for the first and the second entities is created for each relation, e.g., the relation \"date of birth\" should involve a PERSON and a DATE entity. We use a function f_r to determine whether a relation r \u2208 R matches the given entity types, e_1 and e_2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for RE",
"sec_num": "2"
},
{
"text": "f_r(e_1, e_2) = 1 if e_1 \u2208 E_{r1} \u2227 e_2 \u2208 E_{r2}, and 0 otherwise (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for RE",
"sec_num": "2"
},
{
"text": "where E_{r1} and E_{r2} are the sets of allowed types for the first and the second entities in r. We then compute the probability of each relation r as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for RE",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_r(p, m_1, m_2) = f_r(e_1, e_2) max_{t \u2208 T_r} P_{NLI}(p, h | t, m_1, m_2)",
"eq_num": "(2)"
}
],
"section": "NLI for RE",
"sec_num": "2"
},
{
"text": "where P_{NLI} is the entailment probability of (p, h) given by the NLI model, T_r is the set of templates for relation r, and h is the hypothesis generated using a template t and the two entity mentions, m_1 and m_2. In practice, we only need to run NLI inference for relations with f_r(e_1, e_2) = 1. To identify cases where no relation exists between m_1 and m_2, we apply a threshold T to Eq. 2. If none of the relations surpasses T, we assume there is no relation between the two mentions; otherwise, we return the relation with the highest entailment probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for RE",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r = arg max_{r \u2208 R} P_r(p, m_1, m_2).",
"eq_num": "(3)"
}
],
"section": "NLI for RE",
"sec_num": "2"
},
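A minimal sketch of the scoring in Eqs. (1)-(3). The relation inventory, templates, and the toy NLI scorer below are illustrative stand-ins, not the paper's actual resources:

```python
# Allowed (head, tail) entity types per relation (Eq. 1); hypothetical inventory.
ALLOWED_TYPES = {
    "capital_of": ({"LOC"}, {"LOC"}),
    "date_of_birth": ({"PER"}, {"TIME"}),
}

# One relation may have several verbalization templates.
TEMPLATES = {
    "capital_of": ["{m1} is the capital of {m2}"],
    "date_of_birth": ["{m1} was born on {m2}", "{m1}'s date of birth is {m2}"],
}

def type_match(relation, e1, e2):
    """f_r(e1, e2) from Eq. (1): 1 iff both entity types are allowed."""
    heads, tails = ALLOWED_TYPES[relation]
    return 1 if e1 in heads and e2 in tails else 0

def relation_prob(nli_entail_prob, premise, m1, m2, e1, e2, relation):
    """P_r from Eq. (2): max entailment probability over the relation's
    templates, gated by the type-match function."""
    if not type_match(relation, e1, e2):
        return 0.0
    hypotheses = [t.format(m1=m1, m2=m2) for t in TEMPLATES[relation]]
    return max(nli_entail_prob(premise, h) for h in hypotheses)

def predict(nli_entail_prob, premise, m1, m2, e1, e2, threshold=0.5):
    """Eq. (3), plus the no-relation threshold T."""
    scores = {r: relation_prob(nli_entail_prob, premise, m1, m2, e1, e2, r)
              for r in ALLOWED_TYPES}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Toy NLI scorer: high entailment score only when the hypothesis text
# appears verbatim in the premise (a real model would be a classifier).
def toy_nli(premise, hypothesis):
    return 0.9 if hypothesis in premise else 0.1

doc = "Paris is the capital of France."
print(predict(toy_nli, doc, "Paris", "France", "LOC", "LOC"))   # capital_of
print(predict(toy_nli, doc, "Paris", "France", "PER", "TIME"))  # None
```

The second call returns no relation because the type gate zeroes out "capital_of" and the only type-compatible relation ("date of birth") scores below T.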
{
"text": "Adapting to Document-Level RE For our experiments with document-level RE, we adopt the same setup as Sainz et al. (2021) by treating the whole document context as the premise. We apply NLI in three scenarios: (i) as a filter for denoising DS labels (pre-filter), (ii) as a filter for model predictions (post-filter), and (iii) as a standalone RE model. In the pre-filtering scenario, we verbalize the labels (relations) identified using the DS assumption and remove from the DS dataset all labels that do not surpass the threshold T. Similarly, in the post-filtering scenario, we verbalize the relations predicted by an RE model and remove those which do not surpass T. In both scenarios, we do not need to generate candidate relations (Eq. 1) since they are provided by the DS labels or the RE model predictions. Unlike Sainz et al. (2021), who choose the one relation label that maximizes the probability of the hypothesis (Eq. 3), we use all relation labels that have entailment probability above T. 3 In our experiments, we set T = 0.5, i.e., taking all relations that the NLI model predicts as entailment. Additionally, since the DS dataset is known to be noisy, for the pre-filtering scenario we also experiment with higher thresholds to study the effect of stricter filters on the RE performance.",
"cite_spans": [
{
"start": 101,
"end": 120,
"text": "Sainz et al. (2021)",
"ref_id": "BIBREF13"
},
{
"start": 1003,
"end": 1004,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for RE",
"sec_num": "2"
},
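The pre-filter and post-filter share one mechanism: verbalize each candidate triple and keep it only if its best hypothesis clears the threshold T. A sketch under assumed names (the scorer and templates are stand-ins):

```python
def nli_filter(triples, premise, entail_prob, templates, T=0.5):
    """triples: (head_mention, relation, tail_mention) tuples coming either
    from DS labels (pre-filter) or from RE-model predictions (post-filter).
    Returns the subset whose best verbalized hypothesis scores >= T."""
    kept = []
    for m1, rel, m2 in triples:
        score = max(entail_prob(premise, t.format(m1=m1, m2=m2))
                    for t in templates[rel])
        if score >= T:
            kept.append((m1, rel, m2))
    return kept

# Stub scorer: entailed iff the hypothesis appears verbatim in the document.
def stub_entail(premise, hypothesis):
    return 0.8 if hypothesis in premise else 0.2

templates = {"capital_of": ["{m1} is the capital of {m2}"]}
doc = "Paris is the capital of France."
triples = [("Paris", "capital_of", "France"),
           ("Lyon", "capital_of", "France")]   # a noisy DS label
print(nli_filter(triples, doc, stub_entail, templates))
# [('Paris', 'capital_of', 'France')]
```

Raising T makes the filter stricter: fewer triples survive, trading recall for precision, which matches the paper's observation for the "high" pre-filter.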
{
"text": "We experiment with two types of NLI models: a model that is not trained specifically for RE (zero-shot NLI) and a model that is fine-tuned using a small number of human-annotated RE examples (few-shot NLI). The zero-shot NLI model simulates a case where we do not have any annotations, while the few-shot NLI model simulates a case where we have a small budget for annotations. We fine-tune the NLI model for a binary entailment task (entail or not entail). Since DocRED annotations do not contain negative examples (no-relation label), we generate the non-entail examples for NLI as follows. First, we train a model using the DS dataset and generate predictions for the human-annotated training data. We then use the model's incorrect predictions as the non-entail examples. We use a maximum of N = {10, 100} examples per relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for RE",
"sec_num": "2"
},
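The non-entail construction above can be sketched as follows; the triple representation and helper name are hypothetical, not the paper's code:

```python
from collections import defaultdict

def build_non_entail(predicted, gold, n_per_relation=10):
    """predicted / gold: sets of (doc_id, head, relation, tail) triples.
    Predictions absent from the gold annotations are treated as incorrect
    and become non-entail fine-tuning examples, capped per relation."""
    per_rel = defaultdict(list)
    for doc_id, head, rel, tail in sorted(predicted - gold):
        if len(per_rel[rel]) < n_per_relation:
            per_rel[rel].append((doc_id, head, tail, "not_entail"))
    return dict(per_rel)

gold = {("d1", "Paris", "capital_of", "France")}
pred = {("d1", "Paris", "capital_of", "France"),    # correct
        ("d1", "Lyon", "capital_of", "France")}     # incorrect -> non-entail
print(build_non_entail(pred, gold))
# {'capital_of': [('d1', 'Lyon', 'France', 'not_entail')]}
```

Each kept tuple would then be verbalized with the relation's template and paired with its document as a (premise, hypothesis, not_entail) training instance.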
{
"text": "Dataset We experiment with DocRED (Yao et al., 2019), a document-level RE dataset created from Wikipedia articles aligned with Wikidata. It covers six entity types (ORG, LOC, PER, TIME, NUM, MISC) and 96 relation types. DocRED contains 101,873 DS training documents and 5,051 human-annotated documents, split into training (3,053), development (998), and testing (1,000). 4 RE Model For our document-level RE model, we use JEREX (Eberts and Ulges, 2021), which obtains performance comparable to the state-of-the-art SSAN (Xu et al., 2021) model when using the bert-base-cased encoder. The model has four main components (entity mention localization, coreference resolution, entity classification, and relation classification), which share the same encoder and mention representations and are trained jointly. For the relation classifier module, we use the multi-instance version, which predicts relations at the mention level. JEREX is originally designed for end-to-end RE without the need for entity information. However, since our main focus is on the RE side, we use its standard RE pipeline, which assumes that entity clusters are given.",
"cite_spans": [
{
"start": 34,
"end": 52,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 433,
"end": 457,
"text": "(Eberts and Ulges, 2021)",
"ref_id": "BIBREF2"
},
{
"start": 525,
"end": 542,
"text": "(Xu et al., 2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We use a pretrained document-level NLI model based on DeBERTaV3 (He et al., 2021) 5 , which was trained on 1.3M premise-hypothesis pairs from 8 datasets: MNLI (Williams et al., 2018), FEVER-NLI (Nie et al., 2019), the NLI dataset from Parrish et al. (2021), and DocNLI (Yin et al., 2021) (which is curated from ANLI (Nie et al., 2020), SQuAD (Rajpurkar et al., 2016), DUC2001 6 , CNN/DailyMail (Nallapati et al., 2016), and Curation (Curation, 2020)). The model was trained for a binary entailment task.",
"cite_spans": [
{
"start": 158,
"end": 181,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 194,
"end": 212,
"text": "(Nie et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 265,
"end": 282,
"text": "(Yin et al., 2021",
"ref_id": "BIBREF18"
},
{
"start": 313,
"end": 331,
"text": "(Nie et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 340,
"end": 364,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 393,
"end": 417,
"text": "(Nallapati et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLI Model",
"sec_num": null
},
{
"text": "Training and Optimization For training JEREX models, we use the default hyperparameters of Eberts and Ulges (2021). We use a maximum of 10 epochs for training with the DS dataset and 40 epochs for training with the human-annotated dataset. For NLI fine-tuning, we use a maximum of 10 epochs for the few-shot setting and one epoch when using the full annotated data. We tune the learning rate \u2208 {1e\u22125, 2e\u22125, 3e\u22125}, with a batch size of 8 and gradient accumulation steps of 4. Each model is trained using a single V100 GPU with 16GB memory. We train each model with three random restarts and report the average performance. 4 We use the revised version of the DocRED development set with 998 documents, after two documents were removed because they overlap with the annotated training data.",
"cite_spans": [
{
"start": 91,
"end": 114,
"text": "Eberts and Ulges (2021)",
"ref_id": "BIBREF2"
},
{
"start": 623,
"end": 624,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLI Model",
"sec_num": null
},
{
"text": "5 https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI Model",
"sec_num": null
},
{
"text": "6 https://www-nlpir.nist.gov/projects/duc/guidelines/2001.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI Model",
"sec_num": null
},
{
"text": "Zero-shot NLI Table 1 shows the percentages of triples left in the DS dataset (out of \u223c1.5M instances) after pre-filtering with different thresholds T (for other thresholds, see Appendix A). For the zero-shot NLI, setting T to the lowest value (0.5) leaves us with 73.4% of the original DS triples, while setting it to the maximum value (0.99) leaves us with 59.0%. Table 2 reports our main RE results. Our baseline is a JEREX model trained with the DS dataset. To understand how far NLI can help in reducing the gap between models trained using the DS (weakly supervised) vs. human-annotated (supervised) datasets, we also provide results of the supervised JEREX and SSAN (Xu et al., 2021) models. All of the models use the same BERT base encoder (Devlin et al., 2019).",
"cite_spans": [
{
"start": 717,
"end": 734,
"text": "(Xu et al., 2021)",
"ref_id": "BIBREF16"
},
{
"start": 786,
"end": 807,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 393,
"end": 400,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "We find that NLI improves RE performance in both pre-filter and post-filter scenarios. Post-filtering with NLI achieves the best performance with 56.2 F1, reducing the gap with the supervised model by 2.3 F1. Looking into the other metrics, it is evident that NLI filtering yields RE models with higher precision but lower recall. We observe that our most aggressive pre-filtering (high) outperforms the precision of the supervised model. This result suggests that pre-filtering is especially useful for applications where high precision is preferable to high recall. We also experiment with the double-filter scenario, where we apply both our best pre-filter (low) and the post-filter. We find it has minimal effect on the model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "Few-shot NLI This scenario assumes that a small human-annotated dataset is available, so in the next set of experiments, all RE models are trained using the DS dataset and then fine-tuned using the small annotated dataset. 7 Unlike NLI fine-tuning, where we limit the maximum number of examples per relation, when fine-tuning the RE models we use all annotations in the document, since we want the model to learn all of the correct triples and not just a subset. We fine-tune the RE models using 427 and 1,761 annotated documents for the 10-shot and the 100-shot NLI settings, respectively. As shown in Table 3, in the few-shot settings we can still see improvements from using NLI as a pre-filter. However, the improvements are not as large as in the DS-only training. 8 We also see 1.2",
"cite_spans": [
{
"start": 766,
"end": 767,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 600,
"end": 607,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "F1 improvements when using the full annotated data (\u223c3k documents) for fine-tuning the NLI and RE models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI Model Precision Recall F1 IgnF1",
"sec_num": null
},
{
"text": "Coarse-grained types We utilize the entity type information in the DocRED annotated training data to create the list of allowed entity types for each relation. However, we find that this strategy still leads to type mismatches between relations and entities, which might be due to several reasons. First, DocRED entities are annotated with coarse-grained types (Section 3), which might confuse the model when learning about relations that exist between entities. For instance, frequent location relations such as P17 (country) require the tail entity to be a country. However, with the generic LOC type and sometimes similar NLI templates (e.g. \"{m_1} is located in {m_2}\"), other types of locations, such as cities, can also fit the slot for m_2 and be inferred as correct by the NLI model. We also find that the MISC type is especially ambiguous since it is allowed in almost all relations. Second, DocRED relations are annotated at the entity level, where one entity can have multiple mentions with different types, e.g., the entity Finland has the mentions Finland (LOC) as well as Finnish (MISC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI as a standalone RE model",
"sec_num": null
},
{
"text": "To alleviate this, we only add entity types to a relation if they co-occur more than 100 times in the data. In addition, we also experiment with \u223c500 fine-grained entity types using ReFinED (Ayoola et al., 2022), which currently obtains state-of-the-art performance on several entity linking datasets. Table 4 presents our results. We observe that using coarse-grained entity type information leads to poor model performance. In particular, we find that the model over-predicts relations, as shown by the high recall. Using finer-grained types improves performance up to 23.5 F1, but it is still far below the performance of a model specifically trained for RE. This result suggests that when the NLI model is provided with a set of noisy candidate relations, it predicts many of them as correct. On the other hand, when the set of candidate relations is less noisy (given by the DS labels or RE model predictions), the NLI model performs well and can improve RE performance.",
"cite_spans": [
{
"start": 191,
"end": 212,
"text": "(Ayoola et al., 2022)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "NLI as a standalone RE model",
"sec_num": null
},
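The co-occurrence heuristic above can be sketched as follows; the counts, relation IDs, and function name are illustrative, assuming annotations are available as (relation, head_type, tail_type) triples:

```python
from collections import Counter

def allowed_types(annotations, min_count=100):
    """Build the per-relation allowed type sets: a type is allowed for a
    slot only if the (relation, type) pair co-occurs more than min_count
    times in the annotated data."""
    head_counts, tail_counts = Counter(), Counter()
    for rel, head_type, tail_type in annotations:
        head_counts[(rel, head_type)] += 1
        tail_counts[(rel, tail_type)] += 1
    allowed = {}
    for rel in {r for r, _, _ in annotations}:
        heads = {t for (r, t), c in head_counts.items()
                 if r == rel and c > min_count}
        tails = {t for (r, t), c in tail_counts.items()
                 if r == rel and c > min_count}
        allowed[rel] = (heads, tails)
    return allowed

# Made-up counts: 150 clean (LOC, LOC) pairs for P17 (country) and a few
# noisy MISC heads, which the threshold filters out.
data = [("P17", "LOC", "LOC")] * 150 + [("P17", "MISC", "LOC")] * 5
print(allowed_types(data))  # {'P17': ({'LOC'}, {'LOC'})}
```

Rare (relation, type) pairings, such as MISC heads for a location relation, drop out, which is exactly the effect the paper relies on to reduce type mismatches.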
{
"text": "We validate our results by running our overall best strategy, pre-filtering with NLI (T = 0.5), on the test set. Table 5 shows a similar pattern to that observed on the development data: NLI filtering consistently improves performance in all settings. We only report F1 and IgnF1 since the DocRED CodaLab output does not provide precision and recall numbers.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results on Test Set",
"sec_num": null
},
{
"text": "In this paper, we presented a systematic study on the use of NLI for distantly supervised document-level RE, focusing on the case where human-annotated data is not available. Our results demonstrate that NLI is most effective when used as a pre-filter to denoise DS labels. In the absence of human annotations, we show that NLI filtering reduces the gap with a model trained on human-annotated data by 2.3 F1. We also show that NLI filtering still benefits the RE model (+1.1 F1) when we have a small amount of human-annotated data. Our experiment on using NLI as a standalone model for document-level RE leads to worse performance than using it as a pre-filter, suggesting that using NLI directly as an RE model at the document level is more challenging than at the sentence level. For future work, we plan to explore other strategies to better leverage entity type information for RE with NLI and to investigate whether document-level NLI is also more challenging than sentence-level NLI. Another potential direction is to experiment with other DS techniques, such as integrating a denoising module into the RE model (Xiao et al., 2020) or using DS-trained models as a DS filter (Zhou and Chen, 2021).",
"cite_spans": [
{
"start": 1089,
"end": 1108,
"text": "(Xiao et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 1151,
"end": 1172,
"text": "(Zhou and Chen, 2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "A Pre-filtering with NLI ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For document-level RE, Yao et al. (2019) report 41% and 61% incorrect labels for intra- and inter-sentence relations in DS, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The setup of Sainz et al. (2021) is most likely influenced by their experimental dataset, TACRED (Zhang et al., 2017), which only allows one relation per mention pair. In contrast, DocRED annotations may have multiple relations per entity pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The DS training followed by fine-tuning setup yields the best model performance on DocRED (Xu et al., 2021). 8 We only experiment with low and high for the 10-shot experiments since the medium filtering yields a very similar training data distribution (Table 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Tom Ayoola, Shubhi Tyagi, Siffi Singh, Marco Damonte, and the anonymous reviewers for helpful discussion of this work and comments on previous drafts of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Re-FinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Ayoola",
"suffix": ""
},
{
"first": "Shubhi",
"middle": [],
"last": "Tyagi",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Pierleoni",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Ayoola, Shubhi Tyagi, Joseph Fisher, Christos Christodoulopoulos, and Andrea Pierleoni. 2022. ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers, Seattle, Washington. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An end-to-end model for entity-level relation extraction using multiinstance learning",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Eberts",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Ulges",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "3650--3660",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.319"
]
},
"num": null,
"urls": [],
"raw_text": "Markus Eberts and Adrian Ulges. 2021. An end-to-end model for entity-level relation extraction using multi-instance learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3650-3660, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ziyun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4803--4809",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1514"
]
},
"num": null,
"urls": [],
"raw_text": "Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803-4809, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/ARXIV.2111.09543"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Graph enhanced dual attention network for document-level relation extraction",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Zhonghao",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1551--1560",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.136"
]
},
"num": null,
"urls": [],
"raw_text": "Bo Li, Wei Ye, Zhonghao Sheng, Rui Xie, Xiangyu Xi, and Shikun Zhang. 2020. Graph enhanced dual attention network for document-level relation extraction. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1551-1560, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation ex- traction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Cicero",
"middle": [],
"last": "dos Santos",
"suffix": ""
},
{
"first": "\u00c7aglar",
"middle": [],
"last": "Gul\u00e7ehre",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1028"
]
},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, \u00c7aglar Gul\u00e7ehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Lan- guage Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Combining fact extraction and verification with neural semantic matching networks",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Haonan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Association for the Advancement of Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neu- ral semantic matching networks. In Association for the Advancement of Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adversarial NLI: A new benchmark for natural language understanding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4885--4901",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.441"
]
},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language under- standing. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Does putting a linguist in the loop improve NLU data collection?",
"authors": [
{
"first": "Alicia",
"middle": [],
"last": "Parrish",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Omar",
"middle": [],
"last": "Agha",
"suffix": ""
},
{
"first": "Soo-Hwan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Alexia",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Karmanya",
"middle": [],
"last": "Aggarwal",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2021",
"volume": "",
"issue": "",
"pages": "4886--4901",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-emnlp.421"
]
},
"num": null,
"urls": [],
"raw_text": "Alicia Parrish, William Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alexia Warstadt, Karmanya Ag- garwal, Emily Allaway, Tal Linzen, and Samuel R. Bowman. 2021. Does putting a linguist in the loop improve NLU data collection? In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4886-4901, Punta Cana, Dominican Re- public. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, pages 148-163, Berlin, Hei- delberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Label verbalization and entailment for effective zero and fewshot relation extraction",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "Sainz",
"suffix": ""
},
{
"first": "Oier",
"middle": [],
"last": "Lopez De Lacalle",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Ander",
"middle": [],
"last": "Barrena",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1199--1212",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.92"
]
},
"num": null,
"urls": [],
"raw_text": "Oscar Sainz, Oier Lopez de Lacalle, Gorka Labaka, Ander Barrena, and Eneko Agirre. 2021. Label ver- balization and entailment for effective zero and few- shot relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1199-1212, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Denoising relation extraction from documentlevel distant supervision",
"authors": [
{
"first": "Chaojun",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Fen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Leyu",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "3683--3688",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.300"
]
},
"num": null,
"urls": [],
"raw_text": "Chaojun Xiao, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Maosong Sun, Fen Lin, and Leyu Lin. 2020. Denoising relation extraction from document- level distant supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 3683-3688, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Entity structure within and throughout: Modeling mention dependencies for document-level relation extraction",
"authors": [
{
"first": "Benfeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhendong",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2021,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benfeng Xu, Quan Wang, Yajuan Lyu, Yong Zhu, and Zhendong Mao. 2021. Entity structure within and throughout: Modeling mention dependencies for document-level relation extraction. In AAAI.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "DocRED: A large-scale document-level relation extraction dataset",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Deming",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lixin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "764--777",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1074"
]
},
"num": null,
"urls": [],
"raw_text": "Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics, pages 764-777, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "DocNLI: A large-scale dataset for documentlevel natural language inference",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "4913--4922",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.435"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin, Dragomir Radev, and Caiming Xiong. 2021. DocNLI: A large-scale dataset for document- level natural language inference. In Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 4913-4922, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Position-aware attention and supervised data improve slot filling",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "35--45",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1004"
]
},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 35-45, Copenhagen, Denmark. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning from noisy labels for entity-centric information extraction",
"authors": [
{
"first": "Wenxuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5381--5392",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.437"
]
},
"num": null,
"urls": [],
"raw_text": "Wenxuan Zhou and Muhao Chen. 2021. Learning from noisy labels for entity-centric information extraction. In Proceedings of the 2021 Conference on Empiri- cal Methods in Natural Language Processing, pages 5381-5392, Online and Punta Cana, Dominican Re- public. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>low (0.5) med (0.95) high (0.99)</td><td>73.4 68.6 59.0</td><td>71.1 70.1 69.1</td><td>66.0 56.4 38.8</td><td>65.1 48.4 12.3</td></tr></table>",
"num": null,
"text": "Threshold zero-shot 10-shot 100-shot full"
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\">Precision Recall</td><td>F1</td><td>IgnF1</td></tr><tr><td colspan=\"4\">Training with annotated data only (supervised)</td><td/></tr><tr><td>BERT Base \u2020 SSAN Biaffine \u2020 JEREX</td><td>--64.5</td><td>--54.8</td><td>58.6 59.2 59.2</td><td>56.3 57.0 57.4</td></tr><tr><td colspan=\"4\">Training with DS data only (weakly supervised)</td><td/></tr><tr><td>JEREX</td><td>51.5</td><td>56.5</td><td>53.9</td><td>51.0</td></tr><tr><td>+ pre-filter (low) + pre-filter (med) + pre-filter (high) + post-filter + double-filter</td><td>61.3 62.4 65.7 60.8 64.0</td><td>51.8 50.3 46.2 52.3 50.0</td><td>56.1 55.7 54.3 56.2 56.1</td><td>54.0 53.7 52.6 54.1 54.2</td></tr></table>",
"num": null,
"text": "Percentages of triples left in the DS data after pre-filtering with NLI."
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": ""
},
"TABREF4": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": ""
},
"TABREF6": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Results on DocRED development set when using NLI as a standalone RE model."
},
"TABREF8": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Results on DocRED test set."
},
"TABREF9": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>0.5 0.7 0.9 0.95 0.97 0.99</td><td>73.4 72.6 70.8 68.6 66.1 59.0</td><td>71.1 70.8 70.4 70.1 69.9 69.1</td><td>66.0 64.9 60.9 56.4 52.4 38.8</td><td>65.1 63.7 56.2 48.4 40.0 12.3</td></tr></table>",
"num": null,
"text": "Threshold zero-shot 10-shot 100-shot full"
},
"TABREF10": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">B DocRED NLI Templates</td></tr><tr><td>Relation</td><td>Templates</td></tr><tr><td colspan=\"2\">applies to jurisdiction {head} rules {tail}. {head} represents {tail}. {head} works for the {tail} government. author {head} is written by {tail}. {head} is a story by {tail}. {tail} is the author of {head}. {tail} wrote {head}. award received {head} received {tail}. {head} won {tail}. {head} was a recipient of {tail}. {head} was awarded {tail}. basin country {head} is located near {tail}. {tail} is located in {head}. capital of {head} is the capital of {tail}. {tail}'s capital is {head}. capital {head}'s capital is {tail}. {tail} is the capital of {head}. cast member {head}'s cast includes {tail}. {tail} starred in {head}. {tail} appeared in {head}. continent {head} is located in {tail}. country of citizenship {head} country of citizenship is {tail}. {head} is from {tail}. country {head} is located in {tail}. creator {head} is created by {tail}. {tail} is the creator of {tail}. date of birth {head} was born {tail}. date of death {head} died {tail}. director {head} is a movie directed by {tail}. {head} is a game directed by {tail}. {tail} is the director of {head}.</td></tr></table>",
"num": null,
"text": "Percentages of triples left in the DS data after pre-filtering with NLI with different threshold values."
},
"TABREF11": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Examples of DocRED NLI Templates. Full templates can be found in the supplementary materials."
}
}
}
}