{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:12:01.901387Z"
},
"title": "Neural Gibbs Sampling for Joint Event Argument Extraction",
"authors": [
{
"first": "Xiaozhi",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BNRist",
"location": {}
},
"email": ""
},
{
"first": "Shengyu",
"middle": [],
"last": "Jia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BNRist",
"location": {}
},
"email": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BNRist",
"location": {}
},
"email": ""
},
{
"first": "Juanzi",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BNRist",
"location": {}
},
"email": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Event Argument Extraction (EAE) aims at predicting event argument roles of entities in text, which is a crucial subtask and bottleneck of event extraction. Existing EAE methods either extract each event argument roles independently or sequentially, which cannot adequately model the joint probability distribution among event arguments and their roles. In this paper, we propose a Bayesian model named Neural Gibbs Sampling (NGS) to jointly extract event arguments. Specifically, we train two neural networks to model the prior distribution and conditional distribution over event arguments respectively and then use Gibbs sampling to approximate the joint distribution with the learned distributions. For overcoming the shortcoming of the high complexity of the original Gibbs sampling algorithm, we further apply simulated annealing to efficiently estimate the joint probability distribution over event arguments and make predictions. We conduct experiments on the two widely-used benchmark datasets ACE 2005 and TAC KBP 2016. The Experimental results show that our NGS model can achieve comparable results to existing state-of-the-art EAE methods. The source code can be obtained from https:// github.com/THU-KEG/NGS.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Event Argument Extraction (EAE) aims at predicting event argument roles of entities in text, which is a crucial subtask and bottleneck of event extraction. Existing EAE methods either extract each event argument roles independently or sequentially, which cannot adequately model the joint probability distribution among event arguments and their roles. In this paper, we propose a Bayesian model named Neural Gibbs Sampling (NGS) to jointly extract event arguments. Specifically, we train two neural networks to model the prior distribution and conditional distribution over event arguments respectively and then use Gibbs sampling to approximate the joint distribution with the learned distributions. For overcoming the shortcoming of the high complexity of the original Gibbs sampling algorithm, we further apply simulated annealing to efficiently estimate the joint probability distribution over event arguments and make predictions. We conduct experiments on the two widely-used benchmark datasets ACE 2005 and TAC KBP 2016. The Experimental results show that our NGS model can achieve comparable results to existing state-of-the-art EAE methods. The source code can be obtained from https:// github.com/THU-KEG/NGS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Event argument extraction (EAE) is a crucial subtask of Event Extraction, which aims at predicting entities and their event argument roles in event mentions. For instance, given the sentence \"Fox's stock price rises after the acquisition of its entertainment businesses by Disney\", the event detection (ED) model will first identify the trigger word \"acquisition\" triggering a Transfer-Ownership event. Then, with the trigger word and event type, the EAE model is required to identify that \"Fox\" and \"Disney\" are event arguments whose roles are \"Seller\" and \"Buyer\" respectively. As ED is well-studied in recent years (Liu et al., 2018a; Nguyen and Grishman, 2018; Zhao et al., 2018; Wang et al., 2019a) , EAE becomes the bottleneck and has drawn growing attention.",
"cite_spans": [
{
"start": 618,
"end": 637,
"text": "(Liu et al., 2018a;",
"ref_id": "BIBREF29"
},
{
"start": 638,
"end": 664,
"text": "Nguyen and Grishman, 2018;",
"ref_id": "BIBREF38"
},
{
"start": 665,
"end": 683,
"text": "Zhao et al., 2018;",
"ref_id": "BIBREF58"
},
{
"start": 684,
"end": 703,
"text": "Wang et al., 2019a)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As EAE is the bottleneck of event extraction, especially is also important for various NLP applications (Yang et al., 2003; Basile et al., 2014; Cheng and Erk, 2018) , intensive efforts have already been devoted to designing effective EAE systems. The early feature-based methods (Patwardhan and Riloff, 2009; Gupta and Ji, 2009) manually design sophisticated features and heuristic rules to extract event arguments. As the development of neural networks, various neural methods adopt convolutional (Chen et al., 2015) or recurrent (Nguyen et al., 2016) neural networks to automatically represent sentence semantics with lowdimensional vectors, and independently determine argument roles with the vectors. Recently, some advanced techniques have also been adopted to further enhance the performance of EAE models, such as zero-shot learning (Huang et al., 2018) , multimodal integration and weak supervision .",
"cite_spans": [
{
"start": 104,
"end": 123,
"text": "(Yang et al., 2003;",
"ref_id": "BIBREF55"
},
{
"start": 124,
"end": 144,
"text": "Basile et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 145,
"end": 165,
"text": "Cheng and Erk, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 280,
"end": 309,
"text": "(Patwardhan and Riloff, 2009;",
"ref_id": "BIBREF42"
},
{
"start": 310,
"end": 329,
"text": "Gupta and Ji, 2009)",
"ref_id": "BIBREF15"
},
{
"start": 499,
"end": 518,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 532,
"end": 553,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 841,
"end": 861,
"text": "(Huang et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, above-mentioned methods do not model the correlation among event arguments in event mentions. As shown in Figure 1 , all event arguments are correlated with each other. It is more likely to see a \"Seller\" when you have seen a \"Buyer\" and an \"Artifact\" in event mentions, and vice versa. Formally, with x i denoting the random variable of the i-th event argument candidate, the required probability distribution for EAE is P (x 1 , x 2 , . . . , x n |o), where o is the observation from sentence semantics of event mentions. The existing methods which independently extract event arguments solely model P (x i |o), totally ignoring the correlation among event arguments, which may lead models to trapping in a local optimum.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
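{
"text": "To make the distinction concrete, the following minimal, hypothetical Python sketch (illustrative only, not from the paper's released code) shows that the argmaxes of the marginals P (x i |o) need not form the argmax of the joint P (x 1 , x 2 |o):\n\nimport numpy as np\n\n# Hypothetical joint distribution over two binary role variables given a fixed o.\n# Rows index x1, columns index x2.\njoint = np.array([[0.00, 0.35],\n                  [0.40, 0.25]])\n\n# Independent prediction: argmax of each marginal separately.\np_x1 = joint.sum(axis=1)                      # [0.35, 0.65]\np_x2 = joint.sum(axis=0)                      # [0.40, 0.60]\nindependent = (p_x1.argmax(), p_x2.argmax())  # (1, 1)\n\n# Joint prediction: argmax over full assignments.\njoint_best = np.unravel_index(joint.argmax(), joint.shape)  # (1, 0)\n\nprint(independent, joint_best)  # the marginal argmaxes miss the most likely joint state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},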
{
"text": "Recently, some proactive works view EAE as a sequence labeling problem (Yang and Mitchell, 2016; Nguyen et al., 2016; Zeng et al., 2018) and adopt conditional random field (CRF) with the Viterbi algorithm (Rabiner, 1989) to solve the problem. These explorations consider the correlation of event arguments unintentionally. Yet limited by the Markov property, their linear-chain CRF only considers the correlation between two adjacent event arguments in the sequence and finds a maximum likelihood path to model the joint distribution, i.e, these sequence models cannot adequately handle the complex situation that each event argument is correlated with each other in event mentions, just like the example shown in Figure 1 .",
"cite_spans": [
{
"start": 71,
"end": 96,
"text": "(Yang and Mitchell, 2016;",
"ref_id": "BIBREF54"
},
{
"start": 97,
"end": 117,
"text": "Nguyen et al., 2016;",
"ref_id": "BIBREF39"
},
{
"start": 118,
"end": 136,
"text": "Zeng et al., 2018)",
"ref_id": "BIBREF56"
},
{
"start": 205,
"end": 220,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 714,
"end": 722,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To adequately model the genuine joint distribution P (x 1 , x 2 , . . . , x n |o) rather than n i P (x i |o) for EAE, we propose a Bayesian method named Neural Gibbs Sampling (NGS) inspired by previous work (Finkel et al., 2005; Sun et al., 2014) . Gibbs sampling (Geman and Geman, 1987 ) is a Markov Chain Monte Carlo (MCMC) algorithm, which defines a Markov chain in the space of possible variable assignments whose stationary distribution is the desired joint distribution. Then, a Monte Carlo method is adopted to sample a sequence of observations, and the sampled sequence can be used to approximate the joint distribution.",
"cite_spans": [
{
"start": 207,
"end": 228,
"text": "(Finkel et al., 2005;",
"ref_id": "BIBREF12"
},
{
"start": 229,
"end": 246,
"text": "Sun et al., 2014)",
"ref_id": "BIBREF48"
},
{
"start": 264,
"end": 286,
"text": "(Geman and Geman, 1987",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More specifically, for NGS, we first adopt a neural network to model the prior distribution P p (x i |o) and independently predict an argument role for each event argument candidate to get an initial state for the random variable sequence x 1 , x 2 , . . . , x n , which is similar to the previous methods. Then, we train a special neural network to model the conditional probability distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "P c (x i |x 1 , x 2 , . . . , x i\u22121 , x i+1 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ". . , x n , o) and iteratively change the sequence state by this conditional distribution. Intuitively, the network modeling the conditional probability distribution aims to predict unknown argument roles based on both sentence semantics and some known argument roles. After enough steps, the state of the sequence will accurately follow the posterior joint distribution P (x 1 , x 2 , . . . , x n |o), and the most frequent state in history will be the best result of EAE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Considering that it will take many steps to accurately estimate the shape of the joint distribution and each step uses neural networks for inference, it is time-consuming and impractical. Due to what we want for EAE is the max-likelihood state of the argument roles, we follow Geman and Geman (1987) and adopt simulated annealing (Kirkpatrick et al., 1983) to efficiently find the max-likelihood state based on the Gibbs sampling.",
"cite_spans": [
{
"start": 277,
"end": 299,
"text": "Geman and Geman (1987)",
"ref_id": "BIBREF13"
},
{
"start": 330,
"end": 356,
"text": "(Kirkpatrick et al., 1983)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To conclude, our main contributions can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Our NGS method combines both the advantages of neural networks and the Gibbs sampling method. The neural networks have shown their strong ability to fit a distribution from data. Gibbs sampling has remarkable advantages in performing Bayesian inference and modeling the complex correlation among event arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Considering the shortcoming of high complexity of the original Gibbs sampling algorithm, we further apply simulated annealing to efficiently estimate the joint probability distribution and find the max-likelihood state for NGS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) Experimental results on the widely-used benchmark datasets ACE 2005 and TAC KBP 2016 show that our NGS works well to consider the correlation among event arguments and achieves the state-of-the-art results. The experiments also show that the simulated annealing method can significantly improve the convergence speed and the stability of Gibbs sampling, which demonstrate that our NGS is both effective and efficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Event Extraction (EE) aims to extract structured information from plain text, which is a challenging task in the field of information extraction. EE consists of two subtasks, one is event detection (ED) to detect words triggering events and identify event types, the other is event argument extraction (EAE) to extract argument entities in event mentions and identify event argument roles. As EE is important and beneficial for various downstream",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Prior Neural Model Prior Distribution \u2026 P p (x i |o) Prior State Initialization x 2 x n \u2026 x 1 Conditional Neural Model Conditional Distribution \u2026 Sampling Process . x (t) 1 x (t) 2 x (t) n x (t) i . t step state t+1 step state x (t+1) i . x (t) 1 x (t) 2 x (t) n . P c (x (t+1) i |X (t) i , o) i \u21e0 max(P c (x (t+1) i |X (t) i , o)) 1/c P n j=1 max(P c (x (t+1) j |X (t) j , o)) 1/c Simulated Annealing decrease c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gibbs Sampling",
"sec_num": null
},
{
"text": "Buyer Seller Artifact Fox 's stock price rises after the acquisition of its entertainment businesses by Disney. Figure 2 : Overall framework of our Neural Gibbs Sampling model. NLP tasks, e.g., question answering (Yang et al., 2003) , information retrieval (Basile et al., 2014) , and reading comprehension (Cheng and Erk, 2018) , it has attracted wide attentions recently.",
"cite_spans": [
{
"start": 213,
"end": 232,
"text": "(Yang et al., 2003)",
"ref_id": "BIBREF55"
},
{
"start": 257,
"end": 278,
"text": "(Basile et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 307,
"end": 328,
"text": "(Cheng and Erk, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Disney",
"sec_num": null
},
{
"text": "ED has been well-studied by the previous works due to its simple and clear definition, including feature-based and rule-based methods (Ahn, 2006; Ji and Grishman, 2008; Gupta and Ji, 2009; Riedel et al., 2010; Hong et al., 2011; McClosky et al., 2011; Huang and Riloff, 2012a,b; Araki and Mitamura, 2015; Li et al., 2013; Yang and Mitchell, 2016; Liu et al., 2016b) , neural methods (Chen et al., 2015; Nguyen and Grishman, 2015; Nguyen et al., 2016; Duan et al., 2017; Nguyen et al., 2016; Ghaeini et al., 2016; Lin et al., 2018) , the methods with external heterogeneous knowledge (Liu et al., 2016a Duan et al., 2017; Zhao et al., 2018; Liu et al., 2018b) . Some advanced architectures, such as graph convolutional networks (Nguyen and Grishman, 2018) and adversarial training (Hong et al., 2018; Wang et al., 2019a) , have also been applied recently.",
"cite_spans": [
{
"start": 134,
"end": 145,
"text": "(Ahn, 2006;",
"ref_id": "BIBREF0"
},
{
"start": 146,
"end": 168,
"text": "Ji and Grishman, 2008;",
"ref_id": "BIBREF23"
},
{
"start": 169,
"end": 188,
"text": "Gupta and Ji, 2009;",
"ref_id": "BIBREF15"
},
{
"start": 189,
"end": 209,
"text": "Riedel et al., 2010;",
"ref_id": "BIBREF44"
},
{
"start": 210,
"end": 228,
"text": "Hong et al., 2011;",
"ref_id": "BIBREF16"
},
{
"start": 229,
"end": 251,
"text": "McClosky et al., 2011;",
"ref_id": "BIBREF35"
},
{
"start": 252,
"end": 278,
"text": "Huang and Riloff, 2012a,b;",
"ref_id": null
},
{
"start": 279,
"end": 304,
"text": "Araki and Mitamura, 2015;",
"ref_id": "BIBREF1"
},
{
"start": 305,
"end": 321,
"text": "Li et al., 2013;",
"ref_id": "BIBREF25"
},
{
"start": 322,
"end": 346,
"text": "Yang and Mitchell, 2016;",
"ref_id": "BIBREF54"
},
{
"start": 347,
"end": 365,
"text": "Liu et al., 2016b)",
"ref_id": "BIBREF33"
},
{
"start": 383,
"end": 402,
"text": "(Chen et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 403,
"end": 429,
"text": "Nguyen and Grishman, 2015;",
"ref_id": "BIBREF40"
},
{
"start": 430,
"end": 450,
"text": "Nguyen et al., 2016;",
"ref_id": "BIBREF39"
},
{
"start": 451,
"end": 469,
"text": "Duan et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 470,
"end": 490,
"text": "Nguyen et al., 2016;",
"ref_id": "BIBREF39"
},
{
"start": 491,
"end": 512,
"text": "Ghaeini et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 513,
"end": 530,
"text": "Lin et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 583,
"end": 601,
"text": "(Liu et al., 2016a",
"ref_id": "BIBREF31"
},
{
"start": 602,
"end": 620,
"text": "Duan et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 621,
"end": 639,
"text": "Zhao et al., 2018;",
"ref_id": "BIBREF58"
},
{
"start": 640,
"end": 658,
"text": "Liu et al., 2018b)",
"ref_id": "BIBREF30"
},
{
"start": 727,
"end": 754,
"text": "(Nguyen and Grishman, 2018)",
"ref_id": "BIBREF38"
},
{
"start": 780,
"end": 799,
"text": "(Hong et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 800,
"end": 819,
"text": "Wang et al., 2019a)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Converge Initialization",
"sec_num": null
},
{
"text": "As ED models has achieved relatively promising results, the more difficult EAE becomes the bottleneck of EE, and have drawn growing research interests. The early works (Patwardhan and Riloff, 2009; Gupta and Ji, 2009; Liao and Grishman, 2010b,a; Huang and Riloff, 2012b; Li et al., 2013) focus on designing hand-crafted features and heuristic rules to extract event arguments, which suffer from the problem of both implementation complexity and low recall. As the rapid develop-ment of neural networks, various neural methods have been proposed, such as utilizing convolutional models (Chen et al., 2015) , utilizing recurrent models (Nguyen et al., 2016; Sha et al., 2018) , and finetuning pre-trained language model BERT (Wang et al., 2019b) . As compared with the early featurebased and rule-based methods, neural methods automatically represent sentence semantics with lowdimensional vectors, and independently determine argument roles with the vectors, leading to getting rid of designing sophisticated features and rules. Recently, some works adopt some advanced techniques to further improve EAE models in different scenarios, including zero-shot learning (Huang et al., 2018) , multi-modal integration , cross-lingual (Subburathinam et al., 2019) , end-to-end (Wadden et al., 2019) , and weak supervision Zeng et al., 2018) .",
"cite_spans": [
{
"start": 168,
"end": 197,
"text": "(Patwardhan and Riloff, 2009;",
"ref_id": "BIBREF42"
},
{
"start": 198,
"end": 217,
"text": "Gupta and Ji, 2009;",
"ref_id": "BIBREF15"
},
{
"start": 218,
"end": 245,
"text": "Liao and Grishman, 2010b,a;",
"ref_id": null
},
{
"start": 246,
"end": 270,
"text": "Huang and Riloff, 2012b;",
"ref_id": "BIBREF21"
},
{
"start": 271,
"end": 287,
"text": "Li et al., 2013)",
"ref_id": "BIBREF25"
},
{
"start": 585,
"end": 604,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 634,
"end": 655,
"text": "(Nguyen et al., 2016;",
"ref_id": "BIBREF39"
},
{
"start": 656,
"end": 673,
"text": "Sha et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 723,
"end": 743,
"text": "(Wang et al., 2019b)",
"ref_id": "BIBREF53"
},
{
"start": 1163,
"end": 1183,
"text": "(Huang et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 1226,
"end": 1254,
"text": "(Subburathinam et al., 2019)",
"ref_id": "BIBREF47"
},
{
"start": 1268,
"end": 1289,
"text": "(Wadden et al., 2019)",
"ref_id": "BIBREF50"
},
{
"start": 1313,
"end": 1331,
"text": "Zeng et al., 2018)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Converge Initialization",
"sec_num": null
},
{
"text": "The current methods for EAE have achieved some promising results. However, they focus on independently handling each argument entity to predict its role. Because of ignoring to capture rich correlated knowledge among event arguments, the above-mentioned methods are easy to trap in a local optimum and make some inexplicable mistakes. Inspired by some methods in named entity recognition (Huang et al., 2015) and relation extraction (Miwa and Bansal, 2016) , some recent proactive works view EAE as a sequence labeling problem. Following the methods for sequence labeling problem (Ma and Hovy, 2016) , these sequential EAE models (Yang and Mitchell, 2016; Zeng et al., 2018 ) adopt conditional random field (CRF) with the Viterbi algorithm (Rabiner, 1989) , and unintentionally consider the correlation of event arguments. Limited by the Markov property, the linear-chain CRF sequentially considers the correlation between two adjacent event arguments, which cannot adequately handle the complex situation in EAE that each argument and any other arguments may be correlated. To this end and inspired by some proactive works (Finkel et al., 2005; Sun et al., 2014) , we adapt Gibbs sampling (Geman and Geman, 1987) for EAE to perform approximate inference from the joint distribution. Moreover, we incorporate simulated annealing (Kirkpatrick et al., 1983) to accelerate the sampling process, leading to an effective and efficient method.",
"cite_spans": [
{
"start": 388,
"end": 408,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 433,
"end": 456,
"text": "(Miwa and Bansal, 2016)",
"ref_id": "BIBREF37"
},
{
"start": 580,
"end": 599,
"text": "(Ma and Hovy, 2016)",
"ref_id": "BIBREF34"
},
{
"start": 630,
"end": 655,
"text": "(Yang and Mitchell, 2016;",
"ref_id": "BIBREF54"
},
{
"start": 656,
"end": 673,
"text": "Zeng et al., 2018",
"ref_id": "BIBREF56"
},
{
"start": 740,
"end": 755,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF43"
},
{
"start": 1124,
"end": 1145,
"text": "(Finkel et al., 2005;",
"ref_id": "BIBREF12"
},
{
"start": 1146,
"end": 1163,
"text": "Sun et al., 2014)",
"ref_id": "BIBREF48"
},
{
"start": 1190,
"end": 1213,
"text": "(Geman and Geman, 1987)",
"ref_id": "BIBREF13"
},
{
"start": 1329,
"end": 1355,
"text": "(Kirkpatrick et al., 1983)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Converge Initialization",
"sec_num": null
},
{
"text": "For convenience, we denote The neural models, including a prior neural model to model the prior distribution P p (x i |o), and a conditional neural model to model the conditional distribution P c (x i |X \u2212i , o). The prior neural model is similar with existing EAE methods, which takes the event mention text as input and outputs the labels of event argument candidates. The labels will serve as the prior state for the Gibbs sampling module. The conditional neural model takes the text and the results of the last step as input and outputs the probability distribution over labels for each event argument candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
{
"text": "The Gibbs sampling module to sample variable assignments X with P p (x i |o) and P c (x i |X \u2212i , o), which gradually match the implicit posterior joint distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
{
"text": "The simulated annealing method to efficiently find the optimal state in the Markov chain of Gibbs sampling. It uses a \"temperature\" parameter to control the sharpness of the transition distribution. With the \"temperature\" decreasing, the algorithm will more and more tend to choose the max-likelihood state as the next state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
{
"text": "The Prior Neural Model is to model the prior distribution P p (x i |o). In this paper, we use DM-CNN (Chen et al., 2015) and DMBERT as the prior neural models. Given a sentence consisting of several words {w 1 , . . . , t, . . . , w i , . . . , w n }, where t and w i denote the trigger word and the candidate argument entity respectively.",
"cite_spans": [
{
"start": 101,
"end": 120,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models",
"sec_num": "3.2"
},
{
"text": "DMCNN transfers each word in the word sequence into an input embedding e i , which consists of word embedding, event type embedding, and position embedding. Then, DMCNN feeds the input embeddings into a convolutional encoding layer to automatically learn the features and a dynamic multi-pooling layer to aggregate the features into a unified sentence observation embedding to predict an argument role",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models",
"sec_num": "3.2"
},
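{
"text": "A minimal PyTorch-style sketch of dynamic multi-pooling (the function name, shapes, and positions are illustrative assumptions, not the paper's released code): the trigger and candidate positions split the convolution outputs into three segments, each of which is max-pooled and concatenated:\n\nimport torch\n\ndef dynamic_multi_pooling(feature_map, trigger_pos, cand_pos):\n    # feature_map: (seq_len, hidden) convolution outputs for one sentence.\n    left, right = sorted((trigger_pos, cand_pos))\n    segments = (feature_map[: left + 1],\n                feature_map[left + 1: right + 1],\n                feature_map[right + 1:])\n    # Max-pool each non-empty segment, then concatenate into one vector.\n    pooled = [seg.max(dim=0).values for seg in segments if seg.numel() > 0]\n    return torch.cat(pooled)\n\nfeats = torch.randn(20, 64)\nrep = dynamic_multi_pooling(feats, trigger_pos=5, cand_pos=12)\nprint(rep.shape)  # torch.Size([192]) when all three segments are non-empty",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models",
"sec_num": "3.2"
},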
{
"text": "DMBERT is a variation of BERT (Devlin et al., 2019) proposed by Wang et al. (2019b) . It adopts a pre-trained BERT to represent the word sequence as feature vectors and also uses a dynamic multipooling mechanism like DMCNN to aggregate the features into an instance embedding for prediction. It inserts special tokens around the event argument candidates to indicate their positions.",
"cite_spans": [
{
"start": 30,
"end": 51,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 64,
"end": 83,
"text": "Wang et al. (2019b)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models",
"sec_num": "3.2"
},
{
"text": "We sample an argument role following P p (x i |o) for each argument candidate and finally predict an initial argument role state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models",
"sec_num": "3.2"
},
{
"text": "n } as the start point of Gibbs sampling. Note that, our NGS method does not have any special requirements for the prior neural model, any other neural networks can also be used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models",
"sec_num": "3.2"
},
{
"text": "Conditional Neural Model is to model the conditional distribution P c (x i |X \u2212i , o) for the state transition in Gibbs sampling. Considering that it requires to integrate the argument role information of X \u2212i to compute P c (x i |X \u2212i , o), we set an argument role embedding a i for each word w i to represent whether it is an event argument and which role it is of. Then, we modify the input layer of DMCNN and DMBERT to feed the argument role embeddings in. More specifically, DMCNN concatenates the original input embedding e i with the argument role embedding a i as new inputs. DMBERT utilizes the pre-trained parameters and adds a i into the input embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models",
"sec_num": "3.2"
},
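{
"text": "A minimal sketch of this input construction (the dimensions and the masking convention are assumptions for illustration, not the released code):\n\nimport torch\nimport torch.nn as nn\n\nvocab_size, n_roles = 10000, 36    # assumed sizes, e.g. 35 roles plus N/A\nword_dim, role_dim = 100, 50\n\nword_emb = nn.Embedding(vocab_size, word_dim)\nrole_emb = nn.Embedding(n_roles + 1, role_dim)  # extra index masks the role being predicted\n\ntoken_ids = torch.randint(0, vocab_size, (1, 20))  # one 20-token sentence\nrole_ids = torch.randint(0, n_roles, (1, 20))      # roles from the previous sampling state\nrole_ids[0, 7] = n_roles                           # hide the candidate x_i itself\n\ne = word_emb(token_ids)             # (1, 20, 100) original input embeddings e_i\na = role_emb(role_ids)              # (1, 20, 50) argument role embeddings a_i\ninputs = torch.cat([e, a], dim=-1)  # (1, 20, 150), fed to the encoder",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Models",
"sec_num": "3.2"
},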
{
"text": "The Gibbs sampling module aims at sampling from the implicit joint distribution P (X|o). As Algorithm 1 shows, we use the prior neural model to initialize an initial state X (0) . In step t, for each random variable x i , we input the other random variables' states X",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gibbs Sampling Module",
"sec_num": "3.3"
},
{
"text": "Algorithm 1 Neural Gibbs sampling. Input: initial state X (0) = {x (0) 1 , . . . , x (0) n } predicted by the prior neural network. Result: N samples matching the joint distribution P (X|o). Train the conditional neural model to fit P c (x i |X \u2212i , o); for t \u2190 1 to N do (iteratively change the state): for i \u2190 1 to n do: x (t) i \u2190 sample P c (x (t) i |X (t\u22121) \u2212i , o); end; X (t) \u2190 {x (t) 1 , . . . , x (t) n }; end; return X (1) , . . . , X (N ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gibbs Sampling Module",
"sec_num": "3.3"
},
{
"text": "i from the distribution, and finally get the new state X (t) . We can approximately sample N samples X (1) , . . . , X (N ) with the Gibbs sampling module. Our Appendix gives the proof that the samples will accurately follow the joint distribution after enough steps. Geman and Geman (1987) have shown that the samples from the beginning of the Markov chain (the burn-in period) may not accurately follow the desired distribution, hence we choose the most frequent state from X ( N 2 ) , . . . , X (N ) as the result.",
"cite_spans": [
{
"start": 268,
"end": 290,
"text": "Geman and Geman (1987)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gibbs Sampling Module",
"sec_num": "3.3"
},
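{
"text": "A compact Python sketch of this sampling loop; the cond_model interface is an assumption (given the observation, the current state, and an index i, it returns a categorical distribution over roles for x i):\n\nimport torch\nfrom collections import Counter\n\ndef neural_gibbs_sampling(cond_model, init_state, observation, n_steps=200):\n    state = list(init_state)  # X^(0) from the prior neural model\n    history = []\n    for _ in range(n_steps):\n        for i in range(len(state)):\n            probs = cond_model(observation, state, i)      # P_c(x_i | X_-i, o)\n            state[i] = torch.multinomial(probs, 1).item()  # resample x_i\n        history.append(tuple(state))\n    # Discard the burn-in half and return the most frequent remaining state.\n    return Counter(history[len(history) // 2:]).most_common(1)[0][0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gibbs Sampling Module",
"sec_num": "3.3"
},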
{
"text": "The Gibbs sampling module is to accurately estimate the shape of P (X|o), which will take many steps to reach the convergence. As what we want for EAE is only the max-likelihood state, we adopt a simulated annealing method to efficiently find the optimal state following Geman and Geman (1987) . As shown in Algorithm 2, in step t, the simulated annealing method randomly sample an i from the",
"cite_spans": [
{
"start": 271,
"end": 293,
"text": "Geman and Geman (1987)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated Annealing Method",
"sec_num": "3.4"
},
{
"text": "distribution max Pc x (t) i |X (t\u22121) \u2212i ,o 1/c n j=1 max Pc x (t) j |X (t\u22121) \u2212j ,o",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated Annealing Method",
"sec_num": "3.4"
},
{
"text": "1/c . The probability of i being chosen has positive correlation with the probability of the max-likelihood state in the conditional distribution of x i . Then we only need to update x i with its max-likelihood state in conditional distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated Annealing Method",
"sec_num": "3.4"
},
{
"text": "Algorithm 2 NGS + simulated annealing. Input: initial state X (0) = {x (0) 1 , . . . , x (0) n } predicted by the prior neural network. Result: the max-likelihood state X (N ) . Train the conditional neural model to fit P c (x i |X \u2212i , o); c \u2190 1; for t \u2190 1 to N do (randomly choose i to transit): i \u2190 sample max(P c (x (t) i |X (t\u22121) \u2212i , o)) 1/c / \u2211 n j=1 max(P c (x (t) j |X (t\u22121) \u2212j , o)) 1/c ; x (t) i \u2190 arg max P c (x (t) i |X (t\u22121) \u2212i , o); X (t) \u2190 X (t\u22121) \u2212i \u222a {x (t) i }; decrease c; end; return X (N ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated Annealing Method",
"sec_num": "3.4"
},
{
"text": "to control the sharpness of the distribution. With c gradually decreasing, the algorithm more and more tends to transit in the max-likelihood way and will quickly reach the max-likelihood state. When c is large, it performs like the original Gibbs sampling, so that can avoid falling into suboptimal results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated Annealing Method",
"sec_num": "3.4"
},
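{
"text": "A corresponding sketch of the annealed variant, under the same assumed cond_model interface; the linear schedule for c and the small floor keeping 1/c finite are our assumptions, consistent with the settings reported in Section 4.3:\n\nimport torch\n\ndef ngs_simulated_annealing(cond_model, init_state, observation, n_steps=200):\n    state = list(init_state)\n    n = len(state)\n    for t in range(n_steps):\n        c = max(1.0 - t / n_steps, 1e-3)  # linear schedule, kept positive\n        probs = [cond_model(observation, state, i) for i in range(n)]\n        # Site-selection weights proportional to max P_c(.)^(1/c), computed in log space.\n        log_max = torch.log(torch.stack([p.max() for p in probs]))\n        i = torch.multinomial(torch.softmax(log_max / c, dim=0), 1).item()\n        state[i] = probs[i].argmax().item()  # greedy update of the chosen x_i\n    return tuple(state)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated Annealing Method",
"sec_num": "3.4"
},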
{
"text": "We evaluate the proposed models on two real-world datasets: the most widely-used ACE 2005 (Walker et al., 2006) and the newly-developed TAC KBP 2016 (Ellis et al., 2015) . They are both often used as the benchmark in the previous works.",
"cite_spans": [
{
"start": 90,
"end": 111,
"text": "(Walker et al., 2006)",
"ref_id": "BIBREF51"
},
{
"start": 149,
"end": 169,
"text": "(Ellis et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "ACE 2005 1 is the most widely-used dataset in EE, consisting of 599 documents, 8 event types, 33 event subtypes, and 35 argument roles. We evaluate our models by the performance of argument classification. When testing models, an argument is correctly classified only if its event subtype, offsets and argument role match the annotation results. For fair comparison with the previous works (Liao and Grishman, 2010b; Chen et al., 2015) , we follow them to use the same test set containing 40 newswire documents, the similar development set with 30 randomly selected documents and training set with the remaining 529 documents.",
"cite_spans": [
{
"start": 390,
"end": 416,
"text": "(Liao and Grishman, 2010b;",
"ref_id": "BIBREF27"
},
{
"start": 417,
"end": 435,
"text": "Chen et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "TAC KBP 2016 2 indicates the data of the TAC KBP 2016 Event Argument Extraction track, which is the latest benchmark dataset in EE. Different from ACE 2005, this competition only annotates difficult test data but no training data. Accordingly, they encourage participants to construct training data from any other sources by themselves. Considering the argument roles of TAC KBP 2016 are almost the same with ACE 2005 expect TAC KBP 2016 merges all the time-related roles in ACE 2005. We use the ACE 2005 dataset as our training data, which is also provided to the participants of the competition. Hence we can have a fair comparison with the baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "For fair comparison with the baselines, we use the same evaluation metrics with previous works: (1) Precision (P), which is defined as the number of correct argument predictions divided by the number of all argument predictions returned by the model. (2) Recall (R), which defined as the number of correct argument predictions divided by the number of all correct golden results in the test set. (3) F1 score (F1), which is defined as the harmonic mean of the precision and recall. F1 score is the most important metric to evaluate EAE performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation Metrics",
"sec_num": "4.1"
},
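{
"text": "These definitions translate directly into code; a small self-contained helper (the counts in the usage line are made up):\n\ndef precision_recall_f1(n_correct, n_predicted, n_gold):\n    # P, R, and F1 exactly as defined above; the guards avoid division by zero.\n    p = n_correct / n_predicted if n_predicted else 0.0\n    r = n_correct / n_gold if n_gold else 0.0\n    f1 = 2 * p * r / (p + r) if (p + r) else 0.0\n    return p, r, f1\n\nprint(precision_recall_f1(40, 80, 100))  # (0.5, 0.4, 0.444...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation Metrics",
"sec_num": "4.1"
},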
{
"text": "To directly show the improvement of our method from the comparisons, we reproduce DMCNN and DMBERT as baselines on both of the two datasets. In addition, we also select some state-of-the-art baselines on the two datasets respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "On ACE 2005, we compare our models with various state-of-the-art baselines, including: (1) Feature-based methods. Li's joint (Li et al., 2013) adopts structure prediction to extract events, which is the best traditional feature-based method. RBPB (Sha et al., 2016) adopts a regularization-based method to balance the effect of features and patterns, and also consider the relationship between argument candidates. (2) Vanilla neural network methods. JRNN (Nguyen et al., 2016) jointly conducts event detection and event argument extraction with bidirectional recurrent neural networks.",
"cite_spans": [
{
"start": 125,
"end": 142,
"text": "(Li et al., 2013)",
"ref_id": "BIBREF25"
},
{
"start": 247,
"end": 265,
"text": "(Sha et al., 2016)",
"ref_id": "BIBREF45"
},
{
"start": 456,
"end": 477,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "(3) Advanced neural network method with external information. The dbRNN (Sha et al., 2018) utilizes a recurrent neural network with dependency bridges to carry syntactically related information between words, which considers not only sequence structures but also tree structures of the sentences. The HMEAE (Wang et al., 2019b) dependency but still classify each event argument independently.",
"cite_spans": [
{
"start": 72,
"end": 90,
"text": "(Sha et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 307,
"end": 327,
"text": "(Wang et al., 2019b)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "On TAC KBP 2016, we compare our models with the top systems of the competition, including: DISCERN-R (Dubbin et al., 2016) , CMU CS Event1 (Hsi et al., 2016) , Washington1 and Washington4 (Ferguson et al., 2016) .",
"cite_spans": [
{
"start": 101,
"end": 122,
"text": "(Dubbin et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 139,
"end": 157,
"text": "(Hsi et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 188,
"end": 211,
"text": "(Ferguson et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "Our methods with DMCNN and DMBERT as the prior and conditional neural networks are named as NGS (CNN) and NGS (BERT) respectively. They both transit for 200 steps and the c linearly decrease from 1 to 0. As our work focuses on extracting event arguments and their roles and our methods do not involve the event detection stage (to identify the trigger and determine the event type), we conduct EAE based on the event detection models in (Chen et al., 2015) and (Wang et al., 2019a) for the CNN and BERT models respectively.",
"cite_spans": [
{
"start": 437,
"end": 456,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 461,
"end": 481,
"text": "(Wang et al., 2019a)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameter Settings",
"sec_num": "4.3"
},
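{
"text": "The annealing schedule can be written in one line; the small floor that keeps 1/c finite at the last steps is our assumption:\n\nN = 200\nschedule = [max(1.0 - t / N, 1e-3) for t in range(N)]  # c at each transition step",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameter Settings",
"sec_num": "4.3"
},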
{
"text": "For NGS (CNN), the hyperparameters of the prior and conditional neural networks are set as the same as in the original DMCNN (Chen et al., 2015) . We also use the pre-trained word embeddings learned by Skip-Gram (Mikolov et al., 2013) as the initial word embeddings. The detailed hyperparameters are shown in Table 1 .",
"cite_spans": [
{
"start": 125,
"end": 144,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 212,
"end": 234,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Hyperparameter Settings",
"sec_num": "4.3"
},
{
"text": "For NGS (BERT), the two BERT models for the prior and conditional probability distributions are both based on the BERT BASE model in Devlin et al. (2019) . We apply the pre-trained model 3 to initialize the parameters. To utilize the event type information in our model, we append a special token into each input sequence for BERT to indicate DISCERN-R (Dubbin et al., 2016) 7.9 7.4 7.7 Washington4 (Ferguson et al., 2016) 32.1 5.0 8.7 CMU CS Event1 (Hsi et al., 2016) 31.2 4.9 8.4 Washington1 (Ferguson et al., 2016) 26.5 6.8 10.8 DMCNN (Chen et al., 2015) 17.9 16.0 16.9 HMEAE (CNN) (Wang et al., 2019b) 15.3 22.5 18.2 DMBERT (Wang et al., 2019b) 22.6 24.7 23.6 HMEAE (BERT) (Wang et al., 2019b) the event type. Additional hyperparameters used in our experiments are shown in Table 2 .",
"cite_spans": [
{
"start": 133,
"end": 153,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 353,
"end": 374,
"text": "(Dubbin et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 399,
"end": 422,
"text": "(Ferguson et al., 2016)",
"ref_id": null
},
{
"start": 450,
"end": 468,
"text": "(Hsi et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 494,
"end": 517,
"text": "(Ferguson et al., 2016)",
"ref_id": null
},
{
"start": 538,
"end": 557,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 585,
"end": 605,
"text": "(Wang et al., 2019b)",
"ref_id": "BIBREF53"
},
{
"start": 628,
"end": 648,
"text": "(Wang et al., 2019b)",
"ref_id": "BIBREF53"
},
{
"start": 677,
"end": 697,
"text": "(Wang et al., 2019b)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [
{
"start": 778,
"end": 785,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Hyperparameter Settings",
"sec_num": "4.3"
},
{
"text": "The overall results of various baseline methods and NGS on ACE 2005 are shown in Table 3 . And the results on TAC KBP 2016 are shown in Table 4 . From the results, we observe that:",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 136,
"end": 143,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Overall Evaluation Results",
"sec_num": "4.4"
},
{
"text": "(1) NGS (CNN) and NGS (BERT) achieve significant improvements as compared with DMCNN and DMBERT respectively. Meanwhile, our models still outperform other baseline methods, which are either the typical EAE models or the recent state-of-the-art models. It indicates that our Gibbs sampling with simulated annealing works well to improve EAE with the help of adequately model- ing the correlation between event arguments. This demonstrates that our method is effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Evaluation Results",
"sec_num": "4.4"
},
{
"text": "(2) As NGS enhances both CNN models and BERT models on different datasets, it shows that our Gibbs sampling with simulated annealing is independent of EAE models. In other words, our method can be easily adapted for other EAE models to enhance their extraction performances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Evaluation Results",
"sec_num": "4.4"
},
{
"text": "(3) From the experimental results on both ACE 2005 and TAC KBP 2016, we can find that the recall scores and F1 scores of our models are much better than the baseline models. The precision scores of our models do not achieve such obvious improvements. This is consistent with what we mention in the previous sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Evaluation Results",
"sec_num": "4.4"
},
{
"text": "We argue that the baseline models focusing on independently handling each event argument candidates may sever the constraints among argument roles, and may trap in a local optimum or over-fit the training set. The models without considering argument correlations may predict various argument roles with high confidence, even make some inexplicable mistakes. Hence the precision scores of these models may increase, but their recall scores and F1 scores may decrease.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Evaluation Results",
"sec_num": "4.4"
},
{
"text": "Our models adopt Gibbs sampling for EAE to perform approximate inference from the joint distribution, and make the most of the corrleation and constraints among argument roles. Accordingly, our models can avoid these issues and achieve the state-of-the-art results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Evaluation Results",
"sec_num": "4.4"
},
{
"text": "In order to verify the effectiveness of our method, especially for the simulated annealing method and the prior neural network, we conduct ablation studies on ACE 2005 and TAC KBP 2016.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.5"
},
{
"text": "To demonstrate the effectiveness of the simulated annealing method, we show the F1-step curves of Gibbs sampling with and without the simulated annealing in Figure 3 . We can observe that:",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 165,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Effectiveness of the Simulated Annealing",
"sec_num": null
},
{
"text": "(1) The simulated annealing method can significantly improve the convergence speed and the stability. Our methods just require quarter to half of the steps to reach the convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of the Simulated Annealing",
"sec_num": null
},
{
"text": "(2) The simulated annealing method does not weaken the performance of our models. Although the methods with the simulated annealing are much more efficient than those without the simulated annealing, their results are comparable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of the Simulated Annealing",
"sec_num": null
},
{
"text": "As the mathematical proof in the Appendix shows, a prior distribution is not necessary for Gibbs sampling. To demonstrate the effectiveness of the prior neural model, we show the F1-step curves of the prior neural model initialization and a random initialization for our NGS method (with simulated annealing) in Figure 4 . As it shows in figures, our NGS models with the prior neural network initialization take much fewer steps to reach the convergence than those models with random initialization, which is important and meaningful for the application. Combining the prior neural network initialization and the simulated annealing for our NGS will lead to a more efficient model. ",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 320,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Effectiveness of the Prior Neural Network",
"sec_num": null
},
{
"text": "To analyze whether NGS can successfully capture the event argument correlations and further improve EAE performance, we conduct a case study in Table 5 and a quantitative analysis in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 5",
"ref_id": "TABREF8"
},
{
"start": 183,
"end": 190,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Analysis on Modeling Event Argument Correlations",
"sec_num": "4.6"
},
{
"text": "The sentence in Table 5 is a real sentence containing an Appeal event, which is sampled from the test set of ACE 2005. From the EAE results, we can see that the vanilla DMCNN correctly classifies most of the event argument candidates. But because \"sodomy\" is a rare word, it misclassified \"sodomy\" into \"N/A\" (not an event argument). With the help of our NGS method's ability to model the joint distribution among event arguments, NGS (CNN) can infer that \"sodomy\" is a crime from the event argument correlations as it has known there are some crime-related arguments (adjudicator and plaintiff) in the sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Analysis on Modeling Event Argument Correlations",
"sec_num": "4.6"
},
{
"text": "On the other side, we show the comparisons between the basic model DMCNN and NGS (CNN) on data with different numbers of event arguments in Table 6 . With the increase of event argument number, our improvements significantly rise, which demonstrates our improvements come from modeling the correlations among event arguments. Note that the F1 scores are higher than the overall F1 scores, which is due to we filter out the negative instances without event arguments.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Analysis on Modeling Event Argument Correlations",
"sec_num": "4.6"
},
{
"text": "In this paper, we propose a novel Neural Gibbs Sampling (NGS) method to adequately model the correlation between event arguments and argument roles, which combines the advantages of the Gibbs sampling method to model the joint distribution among random variables and the neural network models to automatically learn the effective representations. Considering the shortcoming of high complexity of Gibbs sampling algorithm, we further apply simulated annealing to accelerate the whole estimation process, which lead our method to being both effective and efficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "The experimental results on two widely-used real-world datasets show that NGS can achieve comparable results to existing state-of-the-art EAE methods. The empirical analyses and ablation studies further verify the effectiveness and efficiency of our method. In the future: (1) We will try to extend NGS to other tasks and scenarios to evaluate its general effectiveness of modeling the latent correlations. (2) We will also explore more effective and simple methods to consider the correlations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "In this section, we will prove the convergence of Gibbs sampling, by which we implement sampling from the implicit joint distribution in this paper. Suppose that X = (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "is a Markov chain (abbr. MC). For a \u03bd-measurable set A, the transition kernel of A, K : E \u00d7 E \u2192 R n is defined via the following equation,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "K(X i , A) = P {X i+1 \u2208 A|X 0 , \u2022 \u2022 \u2022 , X i } (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "Assume that X satisfies that for any \u03c3-finite Borel measure \u03bd on R n , for any \u03bd-measurable set A, we have that,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (X i \u2208 A|X i\u22121 = x) = A K(x, y)d\u03bd(y) + \u03c7 A (x)r(x)",
"eq_num": "(2)"
}
],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "where r(x) := 1 \u2212 E K(x, y)d\u03bd(y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "A fundamental property of K is sub-stochastic. Assume that K is non-degenerate, hence r(x) < 1 for all x \u2208 E. Then, following the convention, we can define the iterative form as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "K (t) (x, y) = R n K (t\u22121) (x, z)K(z, y)d\u03bd(z) + K (t\u22121) (x, y)r(y) + [1 \u2212 r(x)] t\u22121 K(x, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "(3) Define the invariant distribution as \u03c0(X) for this MC and D = {x \u2208 E; \u03c0(x) > 0}. We know that \u03c0(X) must satisfy that, for any \u03bd-measurable set A, \u03c0(A) = P (X 1 \u2208 A|X 0 = x) \u03c0(x)d\u03bd(x) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "For \u03bd-measurable A, K is called \u03c0-irreducible when for all x \u2208 D, \u03c0(A) > 0, and is called aperiodic when there exists no partition E = (E 1 , \u2022 \u2022 \u2022 , E k\u22121 ) such that P(X i+1 \u2208 A j+1 |X i \u2208 A j ) = 1 for all j = 1, \u2022 \u2022 \u2022 , k \u2212 1 (mod k). Due to the work of Nummerlin (1984) and Tierney (1991) , we have the following theorem: If K is \u03c0-irreducible and aperiodic then, for all x \u2208 D.",
"cite_spans": [
{
"start": 258,
"end": 274,
"text": "Nummerlin (1984)",
"ref_id": "BIBREF41"
},
{
"start": 279,
"end": 293,
"text": "Tierney (1991)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Proof of the Convergence of Gibbs Sampling",
"sec_num": null
},
{
"text": "x \u2212 \u03c0 \u2192 0 as t \u2192 \u221e;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K (t)",
"sec_num": "1."
},
{
"text": "2. for real-valued, \u03c0-integrable function f ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K (t)",
"sec_num": "1."
},
{
"text": "t \u22121 {f (X 1 ) + \u2022 \u2022 \u2022 + f (X t )} \u2192 E f (x)\u03c0(x)d\u03bd(x) a.s. as t \u2192 \u221e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K (t)",
"sec_num": "1."
},
{
"text": "where following the conventional transformation between multi-variable functions and parameter families, K (t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K (t)",
"sec_num": "1."
},
{
"text": "x (y) := K (t) (x, y). Indeed, with respect to \u03bd, it is the density of X t provided that X 0 = x, excluding the realizations X j = x, j = 1, \u2022 \u2022 \u2022 , t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K (t)",
"sec_num": "1."
},
{
"text": "Let P(X) = P(X 1 , \u2022 \u2022 \u2022 , X n ) denote the target density in our case. What we shall prove is that this P(X) is the invariant distribution of the MC constructed by Gibbs sampling. Provided with the theorem above, the remaining key issue is to prove that the transition kernel K satisfies \u03c0-irreducibility and aperiodicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K (t)",
"sec_num": "1."
},
{
"text": "https://catalog.ldc.upenn.edu/LDC2006T06 2 https://tac.nist.gov//2016/KBP/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "github.com/google-research/bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Hedong (Ben) Hou for his help in the mathematical proof. This work is supported by the Key-Area Research and Development Program of Guangdong Province (2019B010153002), NSFC Key Projects (U1736204, 61533018), a grant from Institute for Guo Qiang, Tsinghua University (2019GQB0003) and THUNUS NExT Co-Lab. This work is also supported by the Pattern Recognition Center, WeChat AI, Tencent Inc. Xiaozhi Wang is supported by Tsinghua University Initiative Scientific Research Program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
},
{
"text": "Equipped with the product measure, for the blocking x = (x 1 , \u2022 \u2022 \u2022 , x n ), it is required that the conditionals of Gibbs sampler construction,are well-defined over the appropriate regions, where X \u2212i shares the same definition as Sec. 2. With D = {x \u2208 E; \u03c0(x) > 0}, we seek to construct the kernel as K :where \u03a5 denotes the condition thatIt is then straightforward to check that, when K(x, y) is well-defined, \u03c0 is an invariant distribution of the chain attained by K.Observe that since we have a discrete distribution, it is trivial that all the subjects here are welldefined. Also the aperiodicity of K is ensured by the fact that K(x, x) > 0 for all x \u2208 D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
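,
{
"text": "To make the construction concrete, the following minimal sketch (the joint distribution \u03c0 over two binary variables is a hypothetical example, not one of the distributions learned in this paper) builds the systematic-scan Gibbs kernel K(x, y) = \u03c0(y_1 | x_2)\u03c0(y_2 | y_1) and verifies both the invariance \u03c0K = \u03c0 and the aperiodicity condition K(x, x) > 0 for all x \u2208 D:\n\nimport itertools\nimport numpy as np\n\nstates = list(itertools.product([0, 1], repeat=2))\n# Hypothetical strictly positive joint distribution, so D is the whole space.\npi = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}\n\ndef cond(i, value, other):\n    # pi(x_i = value | x_{-i} = other), obtained by renormalizing the joint.\n    pair = lambda v: (v, other) if i == 0 else (other, v)\n    return pi[pair(value)] / (pi[pair(0)] + pi[pair(1)])\n\nK = np.zeros((4, 4))\nfor a, x in enumerate(states):\n    for b, y in enumerate(states):\n        # Update coordinate 1 given x_2, then coordinate 2 given the new y_1.\n        K[a, b] = cond(0, y[0], x[1]) * cond(1, y[1], y[0])\n\np = np.array([pi[s] for s in states])\nassert np.allclose(p @ K, p)   # pi is invariant for the Gibbs kernel K\nassert (np.diag(K) > 0).all()  # K(x, x) > 0, which ensures aperiodicity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}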
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The stages of event extraction",
"authors": [
{
"first": "David",
"middle": [],
"last": "Ahn",
"suffix": ""
}
],
"year": 2006,
"venue": "ARTE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Ahn. 2006. The stages of event extraction. In ARTE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Joint event trigger identification and event coreference resolution with structured perceptron",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Araki",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1247"
]
},
"num": null,
"urls": [],
"raw_text": "Jun Araki and Teruko Mitamura. 2015. Joint event trig- ger identification and event coreference resolution with structured perceptron. In EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Extending an information retrieval system through time event extraction",
"authors": [
{
"first": "P",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Caputo",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Semeraro",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Siciliani",
"suffix": ""
}
],
"year": 2014,
"venue": "DART",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P Basile, A Caputo, G Semeraro, and L Siciliani. 2014. Extending an information retrieval system through time event extraction. In DART.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatically labeled data generation for large scale event extraction",
"authors": [
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1038"
]
},
"num": null,
"urls": [],
"raw_text": "Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data genera- tion for large scale event extraction. In ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Event extraction via dynamic multipooling convolutional neural networks",
"authors": [
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1017"
]
},
"num": null,
"urls": [],
"raw_text": "Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi- pooling convolutional neural networks. In ACL- IJCNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Implicit argument prediction with event knowledge",
"authors": [
{
"first": "Pengxiang",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1076"
]
},
"num": null,
"urls": [],
"raw_text": "Pengxiang Cheng and Katrin Erk. 2018. Implicit argu- ment prediction with event knowledge. In ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Exploiting document level information to improve event detection via recurrent neural networks",
"authors": [
{
"first": "Shaoyang",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ruifang",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wenli",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaoyang Duan, Ruifang He, and Wenli Zhao. 2017. Exploiting document level information to improve event detection via recurrent neural networks. In IJCNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving discern with deep learning",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Dubbin",
"suffix": ""
},
{
"first": "Archna",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Dalton",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
},
{
"first": "Suriya",
"middle": [],
"last": "Kandaswamy",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Perera",
"suffix": ""
},
{
"first": "Jena",
"middle": [
"D"
],
"last": "Hwang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Dubbin, Archna Bhatia, Bonnie Dorr, Adam Dal- ton, Kristy Hollingshead, Suriya Kandaswamy, Ian Perera, and Jena D Hwang. 2016. Improving discern with deep learning. In TAC.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of linguistic resources for the tac kbp 2016 evaluations: Methodologies and results",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Getman",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Fore",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Kuster",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [
"M"
],
"last": "Strassel",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Ellis, Jeremy Getman, Dana Fore, Neil Kuster, Zhiyi Song, Ann Bies, and Stephanie M Strassel. 2015. Overview of linguistic resources for the tac kbp 2016 evaluations: Methodologies and results. In TAC.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "University of washington tackbp 2016 system description",
"authors": [
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel S Weld. 2016. University of washington tac- kbp 2016 system description. In TAC.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Incorporating non-local information into information extraction systems by Gibbs sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/1219840.1219885"
]
},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local informa- tion into information extraction systems by Gibbs sampling. In ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Stochastic relaxation, gibbs distributions, and the bayesian restoration of images",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Geman",
"suffix": ""
}
],
"year": 1987,
"venue": "Readings in computer vision",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Geman and Donald Geman. 1987. Stochas- tic relaxation, gibbs distributions, and the bayesian restoration of images. In Readings in computer vi- sion.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Event nugget detection with forward-backward recurrent neural networks",
"authors": [
{
"first": "Reza",
"middle": [],
"last": "Ghaeini",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [],
"last": "Fern",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Tadepalli",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2060"
]
},
"num": null,
"urls": [],
"raw_text": "Reza Ghaeini, Xiaoli Fern, Liang Huang, and Prasad Tadepalli. 2016. Event nugget detection with forward-backward recurrent neural networks. In ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Predicting unknown time arguments based on cross-event propagation",
"authors": [
{
"first": "Prashant",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prashant Gupta and Heng Ji. 2009. Predicting un- known time arguments based on cross-event propa- gation. In ACL-IJCNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using cross-entity inference to improve event extraction",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In ACL-HLT.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Self-regulation: Employing a generative adversarial network to improve event detection",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Wenxuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Hong, Wenxuan Zhou, Guodong Zhou, Qiaoming Zhu, et al. 2018. Self-regulation: Employing a gen- erative adversarial network to improve event detec- tion. In ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Cmu cs event tac-kbp2016 event argument extraction system",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Hsi",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Hsi, Jaime G Carbonell, and Yiming Yang. 2016. Cmu cs event tac-kbp2016 event argument extraction system. In TAC.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Zero-shot transfer learning for event extraction",
"authors": [
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Clare",
"middle": [],
"last": "Voss",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifu Huang, Heng Ji, Kyunghyun Cho, Ido Dagan, Se- bastian Riedel, and Clare Voss. 2018. Zero-shot transfer learning for event extraction. In ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bootstrapped training of event extraction classifiers",
"authors": [
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruihong Huang and Ellen Riloff. 2012a. Bootstrapped training of event extraction classifiers. In Proceed- ings of EACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Modeling textual cohesion for event extraction",
"authors": [
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2012,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruihong Huang and Ellen Riloff. 2012b. Modeling tex- tual cohesion for event extraction. In AAAI.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. ArXiv.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Refining event extraction through cross-document inference",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Ji and Ralph Grishman. 2008. Refining event ex- traction through cross-document inference. In ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Optimization by simulated annealing",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Kirkpatrick",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gelatt",
"suffix": ""
},
{
"first": "Mario",
"middle": [
"P"
],
"last": "Vecchi",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Kirkpatrick, C Daniel Gelatt, and Mario P Vecchi. 1983. Optimization by simulated annealing. Sci- ence.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Joint event extraction via structured prediction with global features",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global fea- tures. In ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Filtered ranking for bootstrapping in event extraction",
"authors": [
{
"first": "Shasha",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shasha Liao and Ralph Grishman. 2010a. Filtered ranking for bootstrapping in event extraction. In COLING.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Using document level cross-event inference to improve event extraction",
"authors": [
{
"first": "Shasha",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shasha Liao and Ralph Grishman. 2010b. Using doc- ument level cross-event inference to improve event extraction. In ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Nugget proposal networks for chinese event detection",
"authors": [
{
"first": "Hongyu",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yaojie",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Xianpei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2018. Nugget proposal networks for chinese event detection. In ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Event detection via gated multilingual attention mechanism",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2018a. Event detection via gated multilingual attention mechanism. In AAAI.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploiting contextual information via dynamic memory network for event detection",
"authors": [
{
"first": "Shaobo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaobo Liu, Rui Cheng, Xiaoming Yu, and Xueqi Cheng. 2018b. Exploiting contextual information via dynamic memory network for event detection. In EMNLP.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Leveraging framenet to improve automatic event detection",
"authors": [
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016a. Leveraging framenet to improve auto- matic event detection. In ACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Exploiting argument information to improve event detection via supervised attention mechanisms",
"authors": [
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1164"
]
},
"num": null,
"urls": [],
"raw_text": "Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A probabilistic soft logic based approach to exploiting latent and global information in event classification",
"authors": [
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shulin Liu, Kang Liu, Shizhu He, and Jun Zhao. 2016b. A probabilistic soft logic based approach to exploit- ing latent and global information in event classifica- tion. In AAAI.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. In ACL.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Event extraction as dependency parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Mihai Surdeanu, and Christopher D Manning. 2011. Event extraction as dependency parsing. In ACL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In ICLR.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "End-to-end relation extraction using lstms on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1105"
]
},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree structures. In ACL.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Graph convolutional networks with argument-aware pooling for event detection",
"authors": [
{
"first": "Thien",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Nguyen and Ralph Grishman. 2018. Graph con- volutional networks with argument-aware pooling for event detection. In AAAI.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Joint event extraction via recurrent neural networks",
"authors": [
{
"first": "Thien Huu",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1034"
]
},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen, Kyunghyun Cho, and Ralph Gr- ishman. 2016. Joint event extraction via recurrent neural networks. In NAACL-HLT.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Event detection and domain adaptation with convolutional neural networks",
"authors": [
{
"first": "Thien Huu",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2060"
]
},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In ACL-IJCNLP.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "General Irreducible Markov Chains and Non-negative Operators",
"authors": [
{
"first": "E",
"middle": [],
"last": "Nummerlin",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Nummerlin. 1984. General Irreducible Markov Chains and Non-negative Operators.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A unified model of phrasal and sentential evidence for information extraction",
"authors": [
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2009,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddharth Patwardhan and Ellen Riloff. 2009. A uni- fied model of phrasal and sentential evidence for in- formation extraction. In EMNLP.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A tutorial on hidden markov models and selected applications in speech recognition",
"authors": [
{
"first": "Lawrence",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence R Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech recognition. IEEE.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "ECML-PKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-3-642-15939-8_10"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In ECML-PKDD.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "RBPB: Regularization-based pattern balancing method for event extraction",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1116"
]
},
"num": null,
"urls": [],
"raw_text": "Lei Sha, Jing Liu, Chin-Yew Lin, Sujian Li, Baobao Chang, and Zhifang Sui. 2016. RBPB: Regularization-based pattern balancing method for event extraction. In ACL.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Jointly extracting event triggers and arguments by dependency-bridge rnn and tensor-based argument interaction",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Sha, Feng Qian, Baobao Chang, and Zhifang Sui. 2018. Jointly extracting event triggers and argu- ments by dependency-bridge rnn and tensor-based argument interaction. In AAAI.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Cross-lingual structure transfer for relation and event extraction",
"authors": [
{
"first": "Ananya",
"middle": [],
"last": "Subburathinam",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Shih-Fu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Avirup",
"middle": [],
"last": "Sil",
"suffix": ""
},
{
"first": "Clare",
"middle": [],
"last": "Voss",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "313--325",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1030"
]
},
"num": null,
"urls": [],
"raw_text": "Ananya Subburathinam, Di Lu, Heng Ji, Jonathan May, Shih-Fu Chang, Avirup Sil, and Clare Voss. 2019. Cross-lingual structure transfer for relation and event extraction. In Proceedings of EMNLP- IJCNLP, pages 313-325.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Parsing low-resource languages using Gibbs sampling for PCFGs with latent annotations",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Mielens",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1035"
]
},
"num": null,
"urls": [],
"raw_text": "Liang Sun, Jason Mielens, and Jason Baldridge. 2014. Parsing low-resource languages using Gibbs sam- pling for PCFGs with latent annotations. In EMNLP.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Ace 2005 multilingual training corpus",
"authors": [
{
"first": "L",
"middle": [],
"last": "Tierney",
"suffix": ""
}
],
"year": 1991,
"venue": "Tech. Rept., School of Statist., Univ. of Minnesota",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Tierney. 1991. Ace 2005 multilingual training cor- pus. Tech. Rept., School of Statist., Univ. of Min- nesota.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Entity, relation, and event extraction with contextualized span representations",
"authors": [
{
"first": "David",
"middle": [],
"last": "Wadden",
"suffix": ""
},
{
"first": "Ulme",
"middle": [],
"last": "Wennberg",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "5784--5789",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1585"
]
},
"num": null,
"urls": [],
"raw_text": "David Wadden, Ulme Wennberg, Yi Luan, and Han- naneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of EMNLP-IJCNLP, pages 5784- 5789.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Ace 2005 multilingual training corpus. Linguistic Data Consortium",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Medero",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilin- gual training corpus. Linguistic Data Consortium, Philadelphia.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Adversarial training forweakly supervised event detection",
"authors": [
{
"first": "Xiaozhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1105"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, and Peng Li. 2019a. Adversarial training forweakly supervised event detection. In NAACL-HLT.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "HMEAE: Hierarchical modular event argument extraction",
"authors": [
{
"first": "Xiaozhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ziqi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Juanzi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1584"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaozhi Wang, Ziqi Wang, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Maosong Sun, Jie Zhou, and Xiang Ren. 2019b. HMEAE: Hierarchical modular event argument extraction. In EMNLP-IJCNLP.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Joint extraction of events and entities within a document context",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1033"
]
},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Tom Mitchell. 2016. Joint extraction of events and entities within a document context. In NAACL-HLT.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Structured use of external knowledge for event-based open domain question answering",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
},
{
"first": "Shuguang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chun-Keat",
"middle": [],
"last": "Koh",
"suffix": ""
}
],
"year": 2003,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/860435.860444"
]
},
"num": null,
"urls": [],
"raw_text": "Hui Yang, Tat-Seng Chua, Shuguang Wang, and Chun- Keat Koh. 2003. Structured use of external knowl- edge for event-based open domain question answer- ing. In SIGIR.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Scale up event extraction learning via automatic training data generation",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Zeng, Yansong Feng, Rong Ma, and Zheng Wang. 2018. Scale up event extraction learning via auto- matic training data generation. In AAAI.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Improving event extraction via multimodal integration",
"authors": [
{
"first": "Tongtao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Spencer",
"middle": [],
"last": "Whitehead",
"suffix": ""
},
{
"first": "Hanwang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hongzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Shih-Fu",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2017,
"venue": "MM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3123266.3123294"
]
},
"num": null,
"urls": [],
"raw_text": "Tongtao Zhang, Spencer Whitehead, Hanwang Zhang, Hongzhi Li, Joseph Ellis, Lifu Huang, Wei Liu, Heng Ji, and Shih-Fu Chang. 2017. Improving event extraction via multimodal integration. In MM.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Document embedding enhanced event detection with hierarchical and supervised attention",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Yuanzhuo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhao, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. 2018. Document embedding enhanced event detection with hierarchical and supervised attention. In ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "An example of event extraction, including event detection and event argument extraction.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Figure 2shows the overall framework of our Neural Gibbs Sampling (NGS) method, consisting of the following modules:",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "F1-step curves of NGS (CNN) with the simulated annealing method and the original Gibbs sampling on ACE 2005 (left) and TAC KBP 2016 (right).",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "F1-step curves of NGS (CNN) with prior neural network initialization and random initialization on ACE 2005 (left) and TAC KBP 2016 (right).",
"num": null,
"uris": null
},
"TABREF1": {
"text": "Hyperparameter settings for CNN models.",
"content": "<table><tr><td>Learning Rate</td><td>6 \u00d7 10 \u22125</td></tr><tr><td>Batch Size</td><td>50</td></tr><tr><td>Warmup Rate for the Prior Neural Model</td><td>0.1</td></tr><tr><td colspan=\"2\">Warmup Rate for the Conditional Nueral Model 0.05</td></tr><tr><td>Argument Role Embedding Dimension</td><td>768</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "Hyperparameter settings for BERT models.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "",
"content": "<table><tr><td colspan=\"4\">: The overall EAE results (%) of various base-</td></tr><tr><td colspan=\"4\">lines and NGS on ACE 2005. EAE performances are</td></tr><tr><td colspan=\"4\">influenced by the trigger quality, hence we also provide</td></tr><tr><td colspan=\"4\">the trigger classification (event detection) results. Note</td></tr><tr><td colspan=\"4\">that as our work does not involve the event detection</td></tr><tr><td colspan=\"4\">stage, the NGS (CNN) and NGS (BERT) use the trig-</td></tr><tr><td colspan=\"4\">gers predicted by DMCNN and DMBERT respectively.</td></tr><tr><td/><td colspan=\"3\">Argument Role</td></tr><tr><td>Method</td><td colspan=\"3\">Classification</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "The overall EAE results (%) of various baseline methods and our NGS on TAC KBP 2016 Event Argument Task. All the models use golden triggers.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF7": {
"text": "Justice Subtype: Appeal Text: Malaysia's second highest court on Friday rejected an appeal by ... Anwar Ibrahim against his conviction and nine-year prison sentence for sodomy.",
"content": "<table><tr><td colspan=\"2\">Event Argument Candidate Malaysia</td><td>court</td><td>Friday</td><td colspan=\"2\">Anwar Ibrahim sodomy</td></tr><tr><td>DMCNN</td><td>Place</td><td>Adjudicator</td><td>Time-Within</td><td>Plaintiff</td><td>N/A\u00d7</td></tr><tr><td>NGS (CNN)</td><td>Place</td><td>Adjudicator</td><td>Time-Within</td><td>Plaintiff</td><td>Crime</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF8": {
"text": "Top: An example sentence highlighting the event argument candidates, which is sampled from ACE 2005. Bottom: EAE results of DMCNN and NGS (CNN). NGS (CNN) correctly classifies \"sodomy\" into Crime with the help of correlations among event arguments.",
"content": "<table><tr><td>)</td><td>)</td></tr><tr><td>3ULRU1HXUDO0RGHO,QLWLDOL]DWLRQ 5DQGRP,QLWLDOL]DWLRQ</td><td>3ULRU1HXUDO0RGHO,QLWLDOL]DWLRQ 5DQGRP,QLWLDOL]DWLRQ</td></tr><tr><td>6WHS</td><td>6WHS</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF10": {
"text": "F1 scores (%) of DMCNN and NGS (CNN) on different parts of ACE 2005 dev set with different event argument numbers per sentence.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}