|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:45:55.432631Z" |
|
}, |
|
"title": "Automatic Data Acquisition for Event Coreference Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Prafulla", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Texas A&M University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ruihong", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Texas A&M University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose to leverage lexical paraphrases and high precision rules informed by news discourse structure to automatically collect coreferential and non-coreferential event pairs from unlabeled English news articles. We perform both manual validation and empirical evaluation on multiple evaluation datasets with different event domains and text genres to assess the quality of our acquired event pairs. We found that a model trained on our acquired event pairs performs comparably as the supervised model when applied to new data out of the training data domains. Further, augmenting human-annotated data with the acquired event pairs provides empirical performance gains on both in-domain and out-of-domain evaluation datasets.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose to leverage lexical paraphrases and high precision rules informed by news discourse structure to automatically collect coreferential and non-coreferential event pairs from unlabeled English news articles. We perform both manual validation and empirical evaluation on multiple evaluation datasets with different event domains and text genres to assess the quality of our acquired event pairs. We found that a model trained on our acquired event pairs performs comparably as the supervised model when applied to new data out of the training data domains. Further, augmenting human-annotated data with the acquired event pairs provides empirical performance gains on both in-domain and out-of-domain evaluation datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Event coreference resolution aims to determine and cluster event mentions that refer to the same realworld event. It is a relatively less studied NLP task despite being crucial for various NLP applications such as topic detection and tracking, question answering, and summarization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A typical event coreference resolution system relies on scoring similarity between two event mentions in a document followed by clustering. However, event coreference chains are very sparsely distributed and only certain key events are repeated in a document, which makes manually labeling many event coreference relations very time-consuming. Furthermore, event mentions tend to appear in extremely diverse contexts and few are accompanied by a full set of their arguments. The two challenges, the absence of abundant human-annotated event coreference data and the high diversity of contexts containing coreferential event mentions, make it hard to build effective event coreference resolution systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We aim to improve the effectiveness of event coreference resolution systems by automatically acquiring coreferential event pairs from many documents requiring minimal supervision. Specifically, coreferential event mentions are associated with discourse function of sentences in a news document (Choubey et al., 2020) 1 . We propose to use them to identify sentence pairs that are likely to contain coreferential event mentions as well as sentence pairs that are likely to contain noncoreferential event pairs. Consider the two example sentence pairs below, each pair having an event pair with synonymous trigger words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1): [People living in absolute poverty in rural areas of the eight regions and provinces reduced to 14.52 million from 30.76 million over the last decade.] [Yang admitted , however , that ethnic minority regions still lagged far behind the developed eastern regions and the government still faced serious challenges to reduce poverty.]", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 181, |
|
"text": "[Yang admitted , however", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2): [At least 30,000 war-displaced people camped in Angola's central province of Kwanza-sul are being resettled in productive areas, the official news agency angop reported here on Friday.] [The resettlement is being carried out jointly by the local municipal authorities of Seles, located in southern Kwanza-sul, and the charity organization German Agro Action, the news agency said.]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In example (1), the first sentence describes a historical event about the reduction in poverty during the last decade, while the second sentence projects the challenges of further reducing poverty in the coming years. Here, the two reduce events are non-overlapping in the temporal space and are noncoreferential. On the contrary, in example (2), both mentions for the event resettle refer to the same real-world event and can be so ascertained by knowing that both sentences describe the same main event in a news article. In general, we can recognize pairs of sentences in news articles that are likely to contain coreferential or non-coreferential event mention pairs by knowing the sentence's discourse function following Van Dijk's theory.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To ascertain our hypothesis, we first use the discourse profiling system and dataset introduced by Choubey et al. (2020) to identify the discourse role for each sentence in a news article. Then, we use multiple rules to capture the distributional correlation between event coreference chains and discourse roles of sentences and collect a diverse set of 9,210 coreferential and 232,135 non-coreferential event pairs 2 . To assess the reliability of the proposed data augmentation strategy, we perform manual validation on subsets of both coreferential and non-coreferential event pairs. Then, we train event coreference resolution systems using the acquired data alone or using the acquired data to augment a human-annotated training dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluate trained systems on two datasets, the news portion 3 of the widely used benchmark evaluation corpus KBP 2017 as well as the news portion 4 of the Richer Event Description (RED) corpus (O'Gorman et al., 2016) . Unlike the KBP corpora that only consider eight event types for event coreference annotations, the RED corpus comprehensively annotates all the event types that appear in a document, and is arguably the only comprehensively annotated corpus of event coreference relations. Assuming the automatically acquired event coreference data is not available, we also train a supervised event coreference resolution system using the KBP 2015 corpus 5 . On the KBP 2017 corpus, the event coreference resolution system trained on the acquired data performs slightly worse than the system trained using the KBP 2015 corpus, the human-annotated in-domain training data. But, on the RED corpus, both the systems trained on either the annotated KBP 2015 corpus or the acquired data obtain roughly the same evaluation results. Further, the system trained on combined annotated KBP 2015 and automatically acquired data yields the best results on both the KBP 2017 dataset and the RED dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 218, |
|
"text": "(O'Gorman et al., 2016)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Lastly, we evaluate all the trained systems on a different text genre, discussion forum articles from the KBP 2017 corpus, and found that all the systems obtain comparable results. Overall, the performance gain of all the trained systems on discussion forum documents is marginal compared to a simple trigger word match baseline. Thus, increasing training data size does not improve the performance of an event coreference resolution system on a new text genre. We suspect that, for generalization across different text genres, we may require specialized learning algorithms, e.g., text style adaptation, which is not in the scope of this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The existing literature on supervised event coreference resolution primarily focuses on designing pairwise classifier based on the surface linguistic features such as lexical features comprising of lemma and part-of-speech tag similarity of event words (Bejan and Harabagiu, 2010; Lee et al., 2012; Liu et al., 2014; Yang et al., 2015; Cremisini and Finlayson, 2020) , argument overlap McConky et al., 2012; Sangeetha and Arock, 2012; Bejan and Harabagiu, 2014; Yang et al., 2015; Choubey and Huang, 2017) , semantic similarity based on lexical resources such as wordnet (Bejan and Harabagiu, 2010; Liu et al., 2014; Yu et al., 2016) and word embeddings (Yang et al., 2015; Choubey and Huang, 2017; Kenyon-Dean et al., 2018; Barhom et al., 2019; Zuo et al., 2019; Pandian et al., 2020; Sahlani et al., 2020; , and discourse features such as token and sentence distance (Liu et al., 2014; Cybulska and Vossen, 2015) . The resulting classifier is used to cluster event mentions. The commonly used strategies include agglomerative clustering that selects the antecedent closest in mention distance that is classified as coreferent or the antecedent with the highest coreference likelihood Chen and Ng, 2014) , hierarchical bayesian (Yang et al., 2015) or spectral clustering algorithms . In this work, we use the pre-trained BERT model to extract both event and context features and use agglomerative clustering to form event coreference chains.", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 280, |
|
"text": "(Bejan and Harabagiu, 2010;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 298, |
|
"text": "Lee et al., 2012;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 316, |
|
"text": "Liu et al., 2014;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 335, |
|
"text": "Yang et al., 2015;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 366, |
|
"text": "Cremisini and Finlayson, 2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 407, |
|
"text": "McConky et al., 2012;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 434, |
|
"text": "Sangeetha and Arock, 2012;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 461, |
|
"text": "Bejan and Harabagiu, 2014;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 480, |
|
"text": "Yang et al., 2015;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 481, |
|
"end": 505, |
|
"text": "Choubey and Huang, 2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 598, |
|
"text": "(Bejan and Harabagiu, 2010;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 599, |
|
"end": 616, |
|
"text": "Liu et al., 2014;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 633, |
|
"text": "Yu et al., 2016)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 673, |
|
"text": "(Yang et al., 2015;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 698, |
|
"text": "Choubey and Huang, 2017;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 724, |
|
"text": "Kenyon-Dean et al., 2018;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 725, |
|
"end": 745, |
|
"text": "Barhom et al., 2019;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 746, |
|
"end": 763, |
|
"text": "Zuo et al., 2019;", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 785, |
|
"text": "Pandian et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 786, |
|
"end": 807, |
|
"text": "Sahlani et al., 2020;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 869, |
|
"end": 887, |
|
"text": "(Liu et al., 2014;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 888, |
|
"end": 914, |
|
"text": "Cybulska and Vossen, 2015)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1186, |
|
"end": 1204, |
|
"text": "Chen and Ng, 2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1229, |
|
"end": 1248, |
|
"text": "(Yang et al., 2015)", |
|
"ref_id": "BIBREF48" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Supervised models suffer from a lack of human-annotated event coreference data. To address the annotation scarcity problem, Peng et al. (2016) proposed to learn structured event representations on large amounts of text and use the similarity score between two event representations to form event coreference chains. Their model uses a small human-annotated event coreference dataset to find the appropriate similarity score threshold for linking two events. Unsupervised models based on probabilistic generative modeling have also been successfully used for event coreference resolution (Bejan and Harabagiu, 2010; Chen and Ng, 2015) . However, both semi-supervised and unsupervised approaches have been found empirically lagging behind the supervised models (Lu and Ng, 2018) . The closest to our work are weakly-supervised and self-training methods that have been shown useful for many information extraction and classification tasks (Riloff, 1996; Riloff and Wiebe, 2003; Xie et al., 2019) . But, to the best of our knowledge, we are the first to explore discourse-aware strategies to automatically label event coreference relations and use them exclusively or use them to augment existing human-annotated data for training event coreference resolution systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 142, |
|
"text": "Peng et al. (2016)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 614, |
|
"text": "(Bejan and Harabagiu, 2010;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 633, |
|
"text": "Chen and Ng, 2015)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 759, |
|
"end": 776, |
|
"text": "(Lu and Ng, 2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 936, |
|
"end": 950, |
|
"text": "(Riloff, 1996;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 951, |
|
"end": 974, |
|
"text": "Riloff and Wiebe, 2003;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 975, |
|
"end": 992, |
|
"text": "Xie et al., 2019)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To acquire coreferential event-pairs without direct supervision, we first collect event trigger words along with their potential set of coreferential event mentions using The Paraphrase Database (PPDB 2.0) (Ganitkevitch et al., 2013; Pavlick et al., 2015) 6 . Then, we use high precision rules informed by the functional news discourse structures (Teun A, 1986; Choubey et al., 2020) to identify seed coreferential and non-coreferential event pairs followed by a single bootstrapping iteration to collect additional non-coreferential event pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 233, |
|
"text": "(Ganitkevitch et al., 2013;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 255, |
|
"text": "Pavlick et al., 2015)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 361, |
|
"text": "(Teun A, 1986;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 383, |
|
"text": "Choubey et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Coreference Data Acquisition", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Candidates using The PPDB Database", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Identifying Coreferential Event Trigger", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We collect lexically diverse candidate coreferential event pairs using the paraphrases from PPDB-2.0-slexical (Pavlick et al., 2015) database. The corpus 7 contains 213,716 highest scoring lexical paraphrase pairs, each annotated with one of the equivalence, forward or reverse entailment, and contradiction relation classes. First, we extract all the verb paraphrase pairs as the potential event trigger words. While event mentions can take other part of speech types, we limit our paraphrase pairs to verbs to ensure high precision among the collected event trigger words. Additionally, many of the verb paraphrase pairs include nominalization (e.g., investing and investment), which adds to the syntactic diversity in the event pairs without compromising their quality. Then, among all verb paraphrase pairs, we filter out only three relation classes, namely equivalence, reverse entailment and forward entailment, as the potential coreferential event pairs. The forward and reverse entailment relations characterize hyponym and hypernym relations, which are not semantically equivalent but can often be coreferential and thus, add diversity to the pairs. Finally, we manually remove noisy event trigger words and cluster the remaining event pairs through pivoting, based on a common event trigger word shared between two paraphrase pairs 8 . Overall, we obtain 1023 clusters with an average of 3.375 event trigger words per cluster.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 132, |
|
"text": "(Pavlick et al., 2015)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Identifying Coreferential Event Trigger", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To generate the news discourse structure proposed by Van Dijk (Teun A, 1986; Van Dijk, 1988a,b) and specify the discourse role of a sentence with respect to events in the document, we use the discourse profiling system proposed by Choubey et al. (2020) . Note that the above discourse structure is functional (Webber and Joshi, 2012) and does not specify relations between two discourse units. Instead, it classifies each sentence in a document into one of the eight content types. Each content type describes the specific role of a sentence in describing the main event, context informing events, and other historical or future projected events. The eight content types include main event (M1) sentences that describe the most newsworthy event of a news article. Sentences describing events that happen recently and act as triggers for the main event and events that are triggered by the main event constitute the previous event (C1) and consequence (M2) sentences respectively. The remaining context-informing events and states with temporal co-occurrence with the main event are covered in current context (C2) sentences. In addition to the above four content types, a news article may contain sentences describing lesser relevant events such as historical events (D1) that temporally precedes the main event by months and years, anecdotal events (D2) that are unverifiable personal account of incidents, evaluation (D3) containing reactions from immediate participants, experts or known personalities and expectation (D4) that projects the possible consequences of the main event.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 76, |
|
"text": "(Teun A, 1986;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 77, |
|
"end": 95, |
|
"text": "Van Dijk, 1988a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 252, |
|
"text": "Choubey et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 333, |
|
"text": "(Webber and Joshi, 2012)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Filtering Paraphrase-based Event Pairs using Functional News Discourse Structure", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Among the eight content types, events described in main event sentences are central to the main news topic. They routinely appear in headline and sentences of other content types and consequently are more likely to form event coreference chains. On the contrary, events in the historical event content type are restricted to describing certain historical background and might only be mentioned once in the document. Additionally, events mentioned in previous event sentences tend to happen before those in main event and consequence sentences, and are unlikely to be coreferential with the events from the later two content types. Overall, content types provide cues for determining whether the events from a certain sentence possess coreferential event mentions and we leverage them to locate both coreferential and non-coreferential event pairs in a news article. Our event coreference data acquisition method works in two phases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Filtering Paraphrase-based Event Pairs using Functional News Discourse Structure", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Rule-based Filtering to extract Coreferential and Non-coreferential Event Pairs: In the first phase, we extract both coreferential and noncoreferential event mention pairs based on their respective rules. Specifically, two event mentions from the headline or main event sentences with synonymous event trigger words are identified as coreferential event pairs. Considering that coreferential event mentions are very sparsely distributed, simple trigger-word matching is extremely noisy and damaging when used to train an event coreference classifier. However, narrowing coreferential event mention pairs to synonymous event trigger words from main event sentences or headline significantly eliminates false coreferential event pairs. To get non-coreferential event pairs, we require both trigger words to be non-synonymous and belong to either the same sentence or two sentences of different non-main content types. Further, considering that events in historical event sentences tend to precede the main event by months and years, we identify non-synonymous event pairs with one mention in a historical event sentence and another mention in a main event sentence as non-coreferential. The latter rule allows us to also acquire non-coreferential event pairs with one event from main event sentences, adding to the overall diversity of the acquired dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Filtering Paraphrase-based Event Pairs using Functional News Discourse Structure", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Distilling Non-coreferential Event Pairs with Synonymous Trigger Words: All the noncoreferential event pairs acquired in phase one have non-synonymous trigger words. However, we know that many of the synonymous words are noncoreferential. Therefore, to further diversify the acquired event coreference data, we use the secondphase bootstrapping to extract non-coreferential pairs with synonymous trigger words. We once again leverage the temporal separation between historical and other content types. We first identify synonymous event pairs that have one mention in a historical sentence and another mention in any nonhistorical sentence as candidate non-coreferential pairs. Then, we use an event coreference classifier trained on the dataset extracted in phase one to filter out high scoring non-coreferential event pairs (likelihood \u2265 0.9) from the candidate pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Filtering Paraphrase-based Event Pairs using Functional News Discourse Structure", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We use Xinhua news articles 9 from the English Gigaword (Napoles et al., 2012) corpus to acquire coreferential and non-coreferential event pairs using the proposed methodology. We limit the number of coreferential and non-coreferential event pairs for each trigger word to 20 and 200, respectively, to ensure diversity and reduce repetitions of common event trigger words. We compare our acquired event pairs with the KBP 2015 corpus, which has 179 news documents annotated with eight event types and 38 event subtypes. It is the most widely used corpus for training a withindocument event coreference resolution system. Table 1 shows the number of event pairs obtained in the first and second phases of our data acquisition strategy and the human-annotated KBP 2015 corpus. Overall, the total number of extracted coreferential event pairs is more than twice the number of pairs in news documents from the KBP 2015 corpus. Note that we can increase the number of acquired pairs by expanding the synonymous event trigger word list or the unlabeled news article collection.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 78, |
|
"text": "(Napoles et al., 2012)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics of Acquired Coreference Data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We randomly selected 300 event pairs from each of the coreferential and non-coreferential samples extracted in the first phase, 100 event pairs from non-coreferential samples distilled in the second phase, and 300 event pairs having synonymous event trigger words to evaluate the proposed data acquisition methodology. Then, we asked a human annotator to validate all the 1000 samples manually. Table 2 shows the precision and bootstrapped 80% confidence interval of precision for event pairs from each category. Rows 1 and 2 show that only 49% of synonymous event pairs are coreferential while the remaining are non-coreferential. By comparing rows 1 and 3, we can see that limiting coreferential event pairs to the synonymous event trigger words from the headline and main event sentences improves the precision from 49% to 83%. As shown in rows 4 and 5, our rules achieve high precision in identifying non-coreferential event pairs as well, achieving 99.3% for event pairs with nonsynonymous trigger words acquired in the first phase and even 93% for event pairs with synonymous trigger words acquired in the second phase. Note that the high precision of non-coreferential event pair identification in both phases is partly due to the distributional sparsity of event coreference chains.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 402, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Evaluation of Acquired Event Pairs", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We design a neural network-based mention-pair classifier for event coreference resolution. We represent each event pair using 50 context words to the left and right of the first and second event trigger words respectively, and with the maximum of 200 words in between the two event words 10 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Coreference Resolution System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Given the event context (w 1 , ., e 1 , ., e 2 , ., w n ), we first transform the context words sequence to word embeddings sequence (b w1 , ., b e1 , ., b e2 , ., b wn ) using the pre-trained Bert-Large-uncased model (Devlin et al., 2019) . Then, we model the semantic associations between two event mentions by measuring similarity between their event embeddings (b e1 , b e2 ) through element-wise product and difference. Further, we obtain context embedding (C) through maxpool operation over the word embeddings sequence to model contextual cues. While the context provides important cues for identifying coreferential event mentions, it may not always be relevant for resolving coreference links. For instance, many event trigger word pairs such as (\"injuries\", \"recommended\") are extremely unlikely to exhibit coreferential relations irrespective of their context. Therefore, we use the similarity between event embeddings to control the context input and use them only in the scenarios where event trigger words are likely to possess coreferential link. To achieve so, we apply linear neural layer over element-wise product and differences of two event mention embeddings followed by the sigmoid activation, and multiply them with context embedding C. Finally, we concatenate the resulting set of embeddings and then use a three-layer feed-forward neural network classifier to score the coreference likelihood. The exact formulation of the coreference classifier is described in Eq. 1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 239, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Coreference Resolution System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "(bw1.be1.be2.bwn) = BERT [(w1.e1.e2.wn)] \u2208 R n\u00d71024 C = maxpool(bw1, ., be1, ., be2, ., bwn) \u2208 R 1024", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Coreference Resolution System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s1 = sigmoid(W s 1 (bw1 bw2) + b s 1 ) \u2208 R 1024 s2 = sigmoid(W s 2 (bw1 \u2212 bw2) + b s 2 ) \u2208 R 1024 R = [bw1 bw2; bw1 \u2212 bw2; s1 C; s2 C] \u2208 R 4096 yi = W3(gelu(W2(gelu(W3R + b3)) + b2)) + b3 \u2208 R", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Event Coreference Resolution System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We train the model using binary cross-entropy 10 We take 100 context words to the right and left of the first and second event trigger words respectively when the number of context words in between them exceeds 200. loss. During inference, we use the best-first clustering approach, where we select the antecedent having the highest pairwise coreference score based on the coreference classifier, to build event chains.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 48, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Coreference Resolution System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use the news documents from the KBP 2016 for validation, and use news documents from KBP 2017 and RED corpora as well as discussion forum documents from the KBP 2017 corpus to evaluate the usefulness of our acquired data 11 . KBP 2016, KBP 2017 and RED corpora contain 85, 83 and 30 news documents respectively, and KBP 2017 has 84 discussion forum documents. KBP corpora have been widely used for evaluating in-document event coreference resolution systems. We further evaluate our models on the RED corpus to examine systems' performance across different event types. KBP 2016 and 2017 corpora are annotated using a subset of 20 subtypes from 38 subtypes used in KBP 2015. On the contrary, RED documents are comprehensively annotated with event coreference relations with no restriction on event types or subtypes, thus, allowing us to evaluate coreference resolution performance on a broad range of events. Besides, we evaluate the performance of models across text genres by evaluating our models trained with news articles on KBP 2017 discussion forum documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Following previous work on event coreference resolution, we evaluate all the event coreference resolution systems using the official KBP 2017 scorer v1.8. The scorer employs four coreference scoring measures, namely B 3 (Bagga and Baldwin, 1998) , CEAFe (Luo, 2005) , MUC (Vilain et al., 1995) and BLANC (Recasens and Hovy, 2011) and the unweighted average of their F1 scores AV G F 1 . In addition, since MUC directly evaluates pairwise coreference links, we also report MUC precision and recall scores.", |
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 245, |
|
"text": "(Bagga and Baldwin, 1998)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 265, |
|
"text": "(Luo, 2005)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 293, |
|
"text": "(Vilain et al., 1995)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 329, |
|
"text": "(Recasens and Hovy, 2011)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We use an ensemble of multi-layer feed-forward neural network classifiers to identify event men-tions (Choubey and Huang, 2018) for both news and discussion forum documents in KBP 2017 corpus. For the RED corpus, we use gold event mentions as that event extraction system can identify events from only eight event types annotated in KBP 2015 corpus. The coreference classifier uses a three-layer feed-forward neural network with 1024-512-1 units for scoring coreference likelihood. Two single-neural layers, used to transform elementwise dot product and difference between two event embeddings used for controlling context input, use 1024 units each. All hidden activations are followed by dropout with the rate of 0.1 for regularization (Srivastava et al., 2014) . All models are trained using AdamW optimizer (Loshchilov and Hutter, 2017; Kingma and Ba, 2014) with four different learning rates (1e-4, 5e-5, 1e-5, 5e-6) and for maximum of 100,000 updates. We use the batch size of 16 and evaluate the model after every 5,000 steps. The epoch and learning rate yielding the best validation performance, average F1 score on KBP 2016 news documents, are used to obtain the final model. Bert model is kept fixed during the training. All experiments are performed on NVIDIA GTX 2080 Ti 11GB using PyTorch 1.2.0+cu92 (Paszke et al., 2019) and HuggingFace Transformer libraries (Wolf et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 738, |
|
"end": 763, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 811, |
|
"end": 840, |
|
"text": "(Loshchilov and Hutter, 2017;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 841, |
|
"end": 861, |
|
"text": "Kingma and Ba, 2014)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1313, |
|
"end": 1334, |
|
"text": "(Paszke et al., 2019)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1373, |
|
"end": 1392, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Trigger Match (+Paraphrase): It links event mentions with the same trigger word (or are lexical paraphrases) as coreferential. Trigger match is a strong baseline for event coreference resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Systems", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Feature-based Classifier: The neural network classifier that uses GloVe (Pennington et al., 2014) based event trigger word embeddings and binary features indicating argument overlaps.", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 97, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Systems", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Choubey and Huang (2018): It models correlations between event coreference chains and document topic structures through a heuristics-based ILP formulation and has achieved the best event coreference resolution performance to date on both KBP 2016 and KBP 2017 datasets. Student Training: The mention pair model trained using the recently proposed self-training approach with a student network (Xie et al., 2019) . We first train a teacher mention pair model on the KBP 2015 corpus, then use the teacher model to annotate samples from unannotated news articles. We use the same set of event pairs from Xinhua articles in the Gigaword corpus, set the same upper bound of 20 coreferential and 200 non-coreferential pairs per event trigger word. Also, to allow fair comparisons, we selected only high scoring event pairs (likelihood \u2265 0.9) and collected 11,390 coreferential and 272,083 non-coreferential pairs. Finally, we train a new student network with the combined KBP 2015 and teacher-annotated event pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 411, |
|
"text": "(Xie et al., 2019)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Systems", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Masked Training: The mention pair model trained on all annotated and automatically acquired (or teacher annotated in case of student training model) event pairs. However, to limit the overdependence on lexical features 12 , we replace both the event trigger words with the [MASK] token for all acquired event pairs. Annotated event pairs from KBP 2015 are left unchanged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Systems", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "The first segment in Table 3 shows the results for all models on KBP 2017 news articles corpus. The mention-pair model trained on KBP 2015 corpus using pre-trained language model and larger event context outperforms both local feature-based as well as the discourse-structure aware previous best model (Choubey and Huang, 2018), outperforming Choubey and Huang (2018) by 2.26 points in average F1 score. The improvement is consistent across all metrics. Specifically, the used mention pair model gains MUC F1 score by 9.76 and 3.33 points over feature-based and discourse aware systems, indicating that BERT-based embedding is more effective in resolving coreference links without exclusively modeling event-arguments or discourserelated features. The model trained on event pairs acquired following the proposed automatic strategy also outperforms Choubey and Huang (2018) by 1.24 and 0.56 points on MUC F1 and average F1 scores respectively. However, this model does worse than the equivalent model trained on KBP 2015 data, which can be explained by the related distribution of KBP 2015 and KBP 2017 datasets. Overall, training the model on KBP 2015 data combined with the acquired event pairs performs the best, outperforming both models trained on KBP 2015 only and the one trained with student training by 1.04 and 0.14 points respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "As shown in the second segment of Table 3 , the improvement in the average F1 of the model trained on KBP 2015 over the trigger match baseline reduces to 2.3 points on the RED news articles corpus, compared to 5.69 points on KBP 2017 news articles. Mainly, RED annotates all event types while KBP has only 8 event types, and the change in event domains affects the overall performance gain of model. The model trained on our Post-Filtering Paraphrase event pairs performs similarly to the one trained on KBP 2015, implying that the former generalizes similarly to the model trained on humanannotated data when applied to new data out of the training data distribution. Similar to the performance gain on KBP 2017 news articles, combining both KBP 2015 and acquired event pairs improves the average F1 on RED news articles, achieving the highest average F1 gain of 3.98 points against the trigger match baseline. Note that student training also improves performance on RED news articles. However, it is 1.26 points lower on average F1 score than the KBP 2015+Post-Filtering Paraphrase pairs model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 41, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "In the third segment of Table 3 , we compare the performance of all models on a different text genre by evaluating them on the discussion forum documents from the KBP 2017 corpus. With shared event types, the model trained on KBP 2015 achieves the best result with 1.76 points improvement in the average F1 score over the lemma match baseline. The model trained using acquired event pairs, Post-Filtering Paraphrase pairs, achieves performance comparable to the model trained on KBP 2015. However, combining the KBP 2015 data with acquired event pairs (the model KBP 2015+Post-Filtering Paraphrase pairs) does not further improve the performance. Overall, we observe that none of the models obtain substantial performance improvement. The smaller improvements for all models on discussion forum documents, with the increased data size, also indicate the need for specialized learning algorithms to build a model that can generalize to a new text genre.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 31, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "Masked Training: The model trained on Post-Filtering Paraphrase event pairs outperforms the one trained on paraphrase-based pairs by 2.69 and 5.04 average F1 points on KBP 2017 and RED news articles test sets respectively. Using news discourse structure-based rules to first constrain coreferential event paraphrase pairs within main sentences or headline and then add non-coreferential event paraphrase pairs from historical sentences inhibits the model from exclussively relying on lexical features. Further, masked training helps to completely circumvent any bias induced in a model by limiting coreferential event pairs to lexical paraphrases, which slightly improved the average F1 score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Filtering Paraphrase Filtering and", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Distributional Analysis of Predicted Coreferential Event Pairs across different Discourse Content Type Pairs: Finally, we analyze the distribution of predicted coreferential event pairs across sentence pairs with different discourse content types on the validation dataset. We use the gold coreferential event pairs to identify the top 10 content type pairs of sentences that most frequently contain coreferential event mention pairs. Then, for the models trained on KBP 2015, Post-Filtering Paraphrase pairs and their combination with masked training, we report true-positive, falsepositive, and false-negative predictions, shown in Figure 1 . To ensure uniformity with rules used in Figure 1 : Distributions of Predicted Coreferential Event Pairs across different Discourse Content Type Pairs. \u00a73.2, we merge the headline with main sentences. Contrary to the rule that exclusively acquires coreferential event pairs from main sentences or headline, the classifier trained on acquired event pairs predicts coreferential event pairs across all discourse content type pairs. Notably, the Post-Filtering Pairs model predicted a comparable number of coreferential event pairs, 248, 244 and 240, in the (M1, M1), (M1, D3) and (D3, D3) content type pairs respectively. However, the number of true positives in (M1, M1) content pair is more than twice the number in either of the (M1, D3) or (D3, D3). This is expected given that the distribution of gold coreferential event pairs is normally skewed towards (M1, M1).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 634, |
|
"end": 642, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 685, |
|
"end": 693, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Post-Filtering Paraphrase Filtering and", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In comparison, models trained on KBP 2015 or combined KBP 2015 and Post-Filtering pairs have lower false-positives while exhibiting similar distributions for true-positive predictions. Intuitively, despite second phase bootstrapping to include noncoreferential paraphrase pairs, the model trained solely on acquired event pairs focuses on lexical features more than the model trained on humanannotated corpus. On the other hand, masked training effectively overcomes excessive reliance on lexical cues and helps achieve a higher true positive rate without increasing false positives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Filtering Paraphrase Filtering and", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We presented an automatic data acquisition strategy for event coreference resolution by mining the func-tional news discourse structure. We performed both qualitative and empirical studies to determine the effectiveness of our proposed strategy. We found that the model trained on automatically acquired event pairs performs similarly to the model trained on human-annotated corpus when evaluated on the test set covering general event domains. Further, augmenting acquired event pairs to existing humanannotated data improves the performance of the model on both training-domain and broader domain test sets. For future work, we intend to develop new training algorithms to improve the generalization capability of models on a new text genre. Further, we plan to evaluate a similar event coreference data acquisition strategy for new languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The discourse roles are roughly based on the Van Dijk's theory of news discourse(Teun A, 1986). It assigns discourse function to sentences in a news article, where the function is characterized by the operative role of sentence's content in describing the main event, context informing events, and other historical or future projected events", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The acquired coreferential and non-coreferential event pairs can be found at https://github.com/ prafulla77/Event-Coref-EACL-20213 All the KBP corpora include news articles as well as documents from discussion forums.4 In addition to news articles, the RED corpus contains several other types of documents, including news summaries, discussion forum posts, and web posts.5 We only use the news articles from KBP 2015 to train the supervised system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A contemporary work byMeged et al. (2020) has also studied the potential correlation between coreferential event trigger words and predicate paraphrases.7 http://nlpgrid.seas.upenn.edu/PPDB/ eng/ppdb-2.0-tldr.gz", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The processed event clusters are available at https: //git.io/JtnMf", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The discourse profiling system (Choubey et al., 2020) obtains the best performance on Xinhua news articles compared to NYT and Reuters", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ECB+(Cybulska and Vossen, 2014) is another popular dataset for evaluating event coreference resolution. However, documents in ECB+ are selectively annotated, comprising only of event mentions and within-document coreference chains that are relevant to cross-document event coreference chains. Since our data acquisition methodology is designed for collecting within-document event pairs, we decided to exclude evaluations on the ECB+ corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "All acquired event pairs are either synonyms or exhibit hypernym or hyponym relations", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank our anonymous reviewers for providing insightful review comments. We gratefully acknowledge support from National Science Foundation via the awards IIS-1942918 and IIS-1755943. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Algorithms for scoring coreference chains", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Bagga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Breck", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "The first international conference on language resources and evaluation workshop on linguistics coreference", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "563--566", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first interna- tional conference on language resources and evalua- tion workshop on linguistics coreference, volume 1, pages 563-566. Granada.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Revisiting joint modeling of cross-document entity and event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Shany", |
|
"middle": [], |
|
"last": "Barhom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vered", |
|
"middle": [], |
|
"last": "Shwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Eirew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Bugert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4179--4189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Re- visiting joint modeling of cross-document entity and event coreference resolution. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 4179-4189.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unsupervised event coreference resolution with rich linguistic features", |
|
"authors": [ |
|
{ |
|
"first": "Cosmin", |
|
"middle": [], |
|
"last": "Bejan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1412--1422", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cosmin Bejan and Sanda Harabagiu. 2010. Unsuper- vised event coreference resolution with rich linguis- tic features. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguistics, pages 1412-1422.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Adrian", |
|
"middle": [], |
|
"last": "Cosmin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Bejan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computational Linguistics", |
|
"volume": "40", |
|
"issue": "2", |
|
"pages": "311--347", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cosmin Adrian Bejan and Sanda Harabagiu. 2014. Un- supervised event coreference resolution. Computa- tional Linguistics, 40(2):311-347.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Sinocoreferencer: An end-to-end chinese event coreference resolver", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "LREC", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Chen and Vincent Ng. 2014. Sinocoreferencer: An end-to-end chinese event coreference resolver. In LREC, volume 2, page 3. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Chinese zero pronoun resolution: A joint unsupervised discourseaware model rivaling state-of-the-art resolvers", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "320--326", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-2053" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Chen and Vincent Ng. 2015. Chinese zero pro- noun resolution: A joint unsupervised discourse- aware model rivaling state-of-the-art resolvers. In Proceedings of the 53rd Annual Meeting of the Asso- ciation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 320- 326, Beijing, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Graph-based event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheng Chen and Heng Ji. 2009. Graph-based event coreference resolution. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Lan- guage Processing, pages 54-57. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A pairwise event coreference model, feature impact and evaluation for event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji", |
|
"middle": [], |
|
"last": "Heng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Haralick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the workshop on events in emerging text types", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheng Chen, Heng Ji, and Robert Haralick. 2009. A pairwise event coreference model, feature impact and evaluation for event coreference resolution. In Proceedings of the workshop on events in emerging text types, pages 17-22. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Event coreference resolution by iteratively unfolding inter-dependencies among events", |
|
"authors": [ |
|
{ |
|
"first": "Prafulla", |
|
"middle": [], |
|
"last": "Kumar Choubey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruihong", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2124--2133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prafulla Kumar Choubey and Ruihong Huang. 2017. Event coreference resolution by iteratively unfold- ing inter-dependencies among events. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2124-2133.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Improving event coreference resolution by modeling correlations between event coreference chains and document topic structures", |
|
"authors": [ |
|
{ |
|
"first": "Prafulla", |
|
"middle": [], |
|
"last": "Kumar Choubey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruihong", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "485--495", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prafulla Kumar Choubey and Ruihong Huang. 2018. Improving event coreference resolution by modeling correlations between event coreference chains and document topic structures. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 485-495.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Discourse as a function of event: Profiling discourse structure in news articles around the main event", |
|
"authors": [ |
|
{ |
|
"first": "Prafulla", |
|
"middle": [], |
|
"last": "Kumar Choubey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruihong", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5374--5386", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prafulla Kumar Choubey, Aaron Lee, Ruihong Huang, and Lu Wang. 2020. Discourse as a function of event: Profiling discourse structure in news arti- cles around the main event. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 5374-5386, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "New insights into cross-document event coreference: Systematic comparison and a simplified approach", |
|
"authors": [ |
|
{ |
|
"first": "Andres", |
|
"middle": [], |
|
"last": "Cremisini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Finlayson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.nuse-1.1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andres Cremisini and Mark Finlayson. 2020. New in- sights into cross-document event coreference: Sys- tematic comparison and a simplified approach. In Proceedings of the First Joint Workshop on Narra- tive Understanding, Storylines, and Events, pages 1- 10, Online. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Agata", |
|
"middle": [], |
|
"last": "Cybulska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piek", |
|
"middle": [], |
|
"last": "Vossen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4545--4552", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agata Cybulska and Piek Vossen. 2014. Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. In Proceedings of the Ninth International Conference on Language Re- sources and Evaluation (LREC-2014), pages 4545- 4552, Reykjavik, Iceland. European Languages Re- sources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Translating granularity of event slots into features for event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Agata", |
|
"middle": [], |
|
"last": "Cybulska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piek", |
|
"middle": [], |
|
"last": "Vossen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agata Cybulska and Piek Vossen. 2015. Translating granularity of event slots into features for event coreference resolution. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 1-10.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The hitchhiker's guide to testing statistical significance in natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Rotem", |
|
"middle": [], |
|
"last": "Dror", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gili", |
|
"middle": [], |
|
"last": "Baumer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Segev", |
|
"middle": [], |
|
"last": "Shlomov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1383--1392", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1128" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Re- ichart. 2018. The hitchhiker's guide to testing statis- tical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Aus- tralia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Ppdb: The paraphrase database", |
|
"authors": [ |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "758--764", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 758-764.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Resolving event coreference with supervised representation learning and clusteringoriented regularization", |
|
"authors": [ |
|
{ |
|
"first": "Kian", |
|
"middle": [], |
|
"last": "Kenyon-Dean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jackie Chi Kit", |
|
"middle": [], |
|
"last": "Cheung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Doina", |
|
"middle": [], |
|
"last": "Precup", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kian Kenyon-Dean, Jackie Chi Kit Cheung, and Doina Precup. 2018. Resolving event coreference with supervised representation learning and clustering- oriented regularization. In Proceedings of the Sev- enth Joint Conference on Lexical and Computa- tional Semantics, pages 1-10.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Joint entity and event coreference resolution across documents", |
|
"authors": [ |
|
{ |
|
"first": "Heeyoung", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Recasens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angel", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "489--500", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mi- hai Surdeanu, and Dan Jurafsky. 2012. Joint en- tity and event coreference resolution across docu- ments. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, pages 489-500. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Supervised withindocument event coreference using information propagation", |
|
"authors": [ |
|
{ |
|
"first": "Zhengzhong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Araki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teruko", |
|
"middle": [], |
|
"last": "Mitamura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4539--4544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengzhong Liu, Jun Araki, Eduard H Hovy, and Teruko Mitamura. 2014. Supervised within- document event coreference using information prop- agation. In LREC, pages 4539-4544.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Event coreference resolution: A survey of two decades of research", |
|
"authors": [ |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5479--5486", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.24963/ijcai.2018/773" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jing Lu and Vincent Ng. 2018. Event coreference resolution: A survey of two decades of research. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI- 18, pages 5479-5486. International Joint Confer- ences on Artificial Intelligence Organization.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Joint inference for event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deepak", |
|
"middle": [], |
|
"last": "Venugopal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vibhav", |
|
"middle": [], |
|
"last": "Gogate", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3264--3275", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jing Lu, Deepak Venugopal, Vibhav Gogate, and Vin- cent Ng. 2016. Joint inference for event corefer- ence resolution. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3264-3275.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "End-to-end neural event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Yaojie", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongyu", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jialong", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xianpei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaojie Lu, Hongyu Lin, Jialong Tang, Xianpei Han, and Le Sun. 2020. End-to-end neural event coref- erence resolution.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "On coreference resolution performance metrics", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoqiang", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of the conference on human language technology and empirical meth- ods in natural language processing, pages 25-32. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Improving event coreference by context extraction and dynamic feature weighting", |
|
"authors": [ |
|
{ |
|
"first": "Katie", |
|
"middle": [], |
|
"last": "Mcconky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rakesh", |
|
"middle": [], |
|
"last": "Nagi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Moises", |
|
"middle": [], |
|
"last": "Sudit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katie McConky, Rakesh Nagi, Moises Sudit, and William Hughes. 2012. Improving event co- reference by context extraction and dynamic fea- ture weighting. In 2012 IEEE International Multi- Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, pages 38-43. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Vered Shwartz, and Ido Dagan. 2020. Paraphrasing vs coreferring: Two sides of the same coin", |
|
"authors": [ |
|
{ |
|
"first": "Yehudit", |
|
"middle": [], |
|
"last": "Meged", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Avi", |
|
"middle": [], |
|
"last": "Caciularu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vered", |
|
"middle": [], |
|
"last": "Shwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yehudit Meged, Avi Caciularu, Vered Shwartz, and Ido Dagan. 2020. Paraphrasing vs coreferring: Two sides of the same coin.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Annotated Gigaword", |
|
"authors": [ |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In Pro- ceedings of the Joint Workshop on Automatic Knowl- edge Base Construction and Web-scale Knowl- edge Extraction (AKBC-WEKEX), pages 95-100, Montr\u00e9al, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Richer event description: Integrating event coreference with temporal, causal and bridging annotation", |
|
"authors": [ |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "O'Gorman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristin", |
|
"middle": [], |
|
"last": "Wright-Bettner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "47--56", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-5706" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tim O'Gorman, Kristin Wright-Bettner, and Martha Palmer. 2016. Richer event description: Integrating event coreference with temporal, causal and bridg- ing annotation. In Proceedings of the 2nd Work- shop on Computing News Storylines (CNS 2016), pages 47-56, Austin, Texas. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Kemal Oflazer, and Amna AlZeyara. 2020. Precision event coreference resolution using neural network classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Arun", |
|
"middle": [], |
|
"last": "Pandian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lamana", |
|
"middle": [], |
|
"last": "Mulaffer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kemal", |
|
"middle": [], |
|
"last": "Oflazer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amna", |
|
"middle": [], |
|
"last": "AlZeyara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Computaci\u00f3n y Sistemas", |
|
"volume": "", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arun Pandian, Lamana Mulaffer, Kemal Oflazer, and Amna AlZeyara. 2020. Precision event coreference resolution using neural network classifiers. Com- putaci\u00f3n y Sistemas, 24(1).", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Pytorch: An imperative style, high-performance deep learning library", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Paszke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Massa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lerer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bradbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Chanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Killeen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeming", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Gimelshein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Antiga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alban", |
|
"middle": [], |
|
"last": "Desmaison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Kopf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Devito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Raison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alykhan", |
|
"middle": [], |
|
"last": "Tejani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sasank", |
|
"middle": [], |
|
"last": "Chilamkurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benoit", |
|
"middle": [], |
|
"last": "Steiner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junjie", |
|
"middle": [], |
|
"last": "Bai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumith", |
|
"middle": [], |
|
"last": "Chintala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "8024--8035", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification", |
|
"authors": [ |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpendre", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "425--430", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-2070" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), pages 425-430, Beijing, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Event detection and co-reference with minimal supervision", |
|
"authors": [ |
|
{ |
|
"first": "Haoruo", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yangqiu", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "392--402", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1038" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haoruo Peng, Yangqiu Song, and Dan Roth. 2016. Event detection and co-reference with minimal su- pervision. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 392-402, Austin, Texas. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Blanc: Implementing the rand index for coreference evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Recasens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Natural Language Engineering", |
|
"volume": "17", |
|
"issue": "4", |
|
"pages": "485--510", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marta Recasens and Eduard Hovy. 2011. Blanc: Imple- menting the rand index for coreference evaluation. Natural Language Engineering, 17(4):485-510.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Automatically generating extraction patterns from untagged text", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1044--1049", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Riloff. 1996. Automatically generating extrac- tion patterns from untagged text. In Proceedings of the Thirteenth National Conference on Artificial In- telligence -Volume 2, AAAI'96, page 1044-1049. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Learning extraction patterns for subjective expressions", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Riloff and Janyce Wiebe. 2003. Learning extrac- tion patterns for subjective expressions. In Proceed- ings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 105-112.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Coreference resolution using semantic features and fully connected neural network in the persian language", |
|
"authors": [ |
|
{ |
|
"first": "Hossein", |
|
"middle": [], |
|
"last": "Sahlani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maryam", |
|
"middle": [], |
|
"last": "Hourali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Behrouz", |
|
"middle": [], |
|
"last": "Minaei-Bidgoli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Journal of Computational Intelligence Systems", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "1002--1013", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.2991/ijcis.d.200706.002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hossein Sahlani, Maryam Hourali, and Behrouz Minaei-Bidgoli. 2020. Coreference resolution us- ing semantic features and fully connected neural net- work in the persian language. International Jour- nal of Computational Intelligence Systems, 13:1002- 1013.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Event coreference resolution using mincut based graph clustering", |
|
"authors": [ |
|
{ |
|
"first": "Satyan", |
|
"middle": [], |
|
"last": "Sangeetha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Arock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Fourth International Workshop on Computer Networks & Communications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "253--260", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Satyan Sangeetha and Michael Arock. 2012. Event coreference resolution using mincut based graph clustering. In Proceedings of the Fourth Interna- tional Workshop on Computer Networks & Commu- nications, pages 253-260.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Dropout: a simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "News schemata. Studying writing: linguistic approaches", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Van Dijk Teun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "155--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Van Dijk Teun A. 1986. News schemata. Studying writing: linguistic approaches, 1:155-186.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "News analysis. Case Studies of International and National News in the Press", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Teun A Van Dijk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Teun A Van Dijk. 1988a. News analysis. Case Stud- ies of International and National News in the Press. New Jersey: Lawrence.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "News as discourse", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Teun A Van Dijk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Teun A Van Dijk. 1988b. News as discourse. Hillsdale, NJ, US: Lawrence Erlbaum Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "A modeltheoretic coreference scoring scheme", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Vilain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Burger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Aberdeen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dennis", |
|
"middle": [], |
|
"last": "Connolly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lynette", |
|
"middle": [], |
|
"last": "Hirschman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 6th conference on Message understanding", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceed- ings of the 6th conference on Message understand- ing, pages 45-52. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Discourse structure and computation: Past, present and future", |
|
"authors": [ |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "42--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bonnie Webber and Aravind Joshi. 2012. Discourse structure and computation: Past, present and future. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 42- 54, Jeju Island, Korea. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Huggingface's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R'emi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Self-training with noisy student improves imagenet classification", |
|
"authors": [ |
|
{ |
|
"first": "Qizhe", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. 2019. Self-training with noisy student improves imagenet classification.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "A hierarchical distance-dependent bayesian model for event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Bishan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Frazier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association of Computational Linguistics", |
|
"volume": "3", |
|
"issue": "1", |
|
"pages": "517--528", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bishan Yang, Claire Cardie, and Peter Frazier. 2015. A hierarchical distance-dependent bayesian model for event coreference resolution. Transactions of the Association of Computational Linguistics, 3(1):517- 528.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Rpi blender tac-kbp2016 system description", |
|
"authors": [ |
|
{ |
|
"first": "Dian", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoman", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boliang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifu", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Spencer", |
|
"middle": [], |
|
"last": "Whitehead", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dian Yu, Xiaoman Pan, Boliang Zhang, Lifu Huang, Di Lu, Spencer Whitehead, and Heng Ji. 2016. Rpi blender tac-kbp2016 system description. In TAC.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Event co-reference resolution via a multi-loss neural network without using argument information", |
|
"authors": [ |
|
{ |
|
"first": "Xinyu", |
|
"middle": [], |
|
"last": "Zuo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yubo", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Science China Information Sciences", |
|
"volume": "", |
|
"issue": "11", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s11432-018-9833-1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinyu Zuo, Yubo Chen, Kang Liu, and Jun Zhao. 2019. Event co-reference resolution via a multi-loss neural network without using argument information. Sci- ence China Information Sciences, 62(11).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "KBP 2015, Paraphrase-based pairs, Post-Filtering Paraphrase pairs and KBP 2015+Post-Filtering Paraphrase pairs: The mention pair model, proposed in \u00a7 4, trained on different combinations of acquired and human-annotated datasets. KBP 2015 is trained on event pairs from news docu-ments in the KBP 2015 corpus. Paraphrase-based pairs is trained on paraphrase event pairs without rules-based filtering ( \u00a73.1). Post-Filtering Paraphrase pairs is trained on paraphrase event pairs that are filtered using rules defined over news discourse structure ( \u00a73.2). KBP 2015+Post-Filtering Paraphrase pairs is trained on aggregation of KBP 2015 and Post-Filtering Paraphrase event pairs." |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">Row Data</td><td>Prec.</td><td>80% CI</td></tr><tr><td>1</td><td>Synonyms: Coref</td><td colspan=\"2\">49.0 45.3-52.6</td></tr><tr><td>2</td><td colspan=\"3\">Synonyms: Non-Coref 51.0 47.3-54.6</td></tr><tr><td>3</td><td>Phase I: Coref</td><td colspan=\"2\">83.0 80.3-85.6</td></tr><tr><td>4</td><td>Phase I: Non-Coref</td><td>99.3</td><td>98.6-100</td></tr><tr><td>5</td><td>Phase II: Non-Coref</td><td colspan=\"2\">93.0 90.0-96.0</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Number of coreferential and non-coreferential events pairs acquired through the proposed methodology and the human annotated KBP 2015 corpus." |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table><tr><td>: Precision (Prec.) and bootstrap 80% confi-</td></tr><tr><td>dence interval (80% CI) score of precision for acquired</td></tr><tr><td>event pairs based on human evaluation.</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Results for event coreference resolution systems on the KBP 2017 and RED corpora. Feature-based Classifier results are directly taken from Choubey and Huang (2018). The results are statistically significant using bootstrap and permutation test(Dror et al., 2018) with p<0.01 between Post-Filtering Paraphrase pairs and Paraphrase-based Pairs and p<0.002 between KBP 2015+Post-Filtering Paraphrase pairs+Masked Training and KBP 2015 models on both KBP 2017 and RED news articles test sets. Further, results for KBP 2015+Post-Filtering Paraphrase pairs+Masked Training are statistically significant compared to both Student Training and Student Training+Masked Training with p<0.002 on the RED news articles test set." |
|
} |
|
} |
|
} |
|
} |