|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:10:36.306926Z" |
|
}, |
|
"title": "Let's be explicit about that: Distant supervision for implicit discourse relation classification via connective prediction", |
|
"authors": [ |
|
{ |
|
"first": "Murathan", |
|
"middle": [], |
|
"last": "Kurfal\u0131", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Stockholm University Stockholm", |
|
"location": { |
|
"country": "Sweden" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In implicit discourse relation classification, we want to predict the relation between adjacent sentences in the absence of any overt discourse connectives. This is challenging even for humans, leading to shortage of annotated data, a fact that makes the task even more difficult for supervised machine learning approaches. In the current study, we perform implicit discourse relation classification without relying on any labeled implicit relation. We sidestep the lack of data through explicitation of implicit relations to reduce the task to two subproblems: language modeling and explicit discourse relation classification, a much easier problem. Our experimental results show that this method can even marginally outperform the state-of-the-art, in spite of being much simpler than alternative models of comparable performance. Moreover, we show that the achieved performance is robust across domains as suggested by the zero-shot experiments on a completely different domain. This indicates that recent advances in language modeling have made language models sufficiently good at capturing inter-sentence relations without the help of explicit discourse markers.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In implicit discourse relation classification, we want to predict the relation between adjacent sentences in the absence of any overt discourse connectives. This is challenging even for humans, leading to shortage of annotated data, a fact that makes the task even more difficult for supervised machine learning approaches. In the current study, we perform implicit discourse relation classification without relying on any labeled implicit relation. We sidestep the lack of data through explicitation of implicit relations to reduce the task to two subproblems: language modeling and explicit discourse relation classification, a much easier problem. Our experimental results show that this method can even marginally outperform the state-of-the-art, in spite of being much simpler than alternative models of comparable performance. Moreover, we show that the achieved performance is robust across domains as suggested by the zero-shot experiments on a completely different domain. This indicates that recent advances in language modeling have made language models sufficiently good at capturing inter-sentence relations without the help of explicit discourse markers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Discourse relations describe the relationship between discourse units, e.g. clauses or sentences. These relations are either signalled explicitly with a discourse connective (e.g. because, and) or expressed implicitly and are inferred by sequential reading (Example 1 below).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) A figure above 50 indicates the economy is likely to expand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "[While] One below 50 indicates a contraction may be ahead. (Comparison -wsj 0233)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The relations in the latter category are called implicit discourse relations and they are of special significance because their lack of an explicit signal makes them challenging to annotate for even humans, suggested by the lower inter-annotator agreements on implicit relations (Zeyrek and Kurfal\u0131, 2017; Zik\u00e1nov\u00e1 et al., 2019) , let alone classify automatically.", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 305, |
|
"text": "(Zeyrek and Kurfal\u0131, 2017;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 328, |
|
"text": "Zik\u00e1nov\u00e1 et al., 2019)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Resources for implicit discourse relations, therefore, are very limited. Even the Penn Discourse Tree Bank 2.0 (PDTB 2.0) (Prasad et al., 2008) , which is the most popular resource, includes merely 16K implicit discourse relations, all annotated on the same domain. Explicit discourse relations, on the other hand, are proven to be simple enough to be obtained both manually and automatically. Previous work shows that explicit relations in English have a low level of ambiguity, so the discourse relation can be classified with more than 94% accuracy from the discourse connective alone . This has inspired others to predict connectives for the implicit discourse relations and add them as additional features to existing supervised classifiers (Zhou et al., 2010; Xu et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 143, |
|
"text": "(Prasad et al., 2008)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 746, |
|
"end": 765, |
|
"text": "(Zhou et al., 2010;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 766, |
|
"end": 782, |
|
"text": "Xu et al., 2012)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our work takes this idea one step further by reducing the amount of supervision required. Instead of training a separate connective classifier, we generate a set of candidate explicit relations that are obtained by inserting explicit discourse markers between sentences and score the resulting segments using a large pre-trained language model. 1 The candidates are then classified with an accurate explicit discourse relation classifier, and the final implicit relation prediction can be obtained by either using the candidate with the highest-scoring connective, or marginalizing over the whole distribution of explicit connectives.", |
|
"cite_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 346, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main contributions of our papers are as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We show that this simple approach is very effective and even marginally outperforms the current state-of-the-art method that does not use labeled implicit discourse relation data, even though that method uses a significantly more complex adversarial domain adaptation model (Huang and Li, 2019).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 To the best of our knowledge, this is the first study to go beyond the default four-way classification under the low-resource scenario assumption where no labeled implicit discourse relation is available. We show that the proposed pipeline maintains its performance (relative to the baselines) in a more challenging 11-way classification as well as across domains (i.e., biomedical texts (Prasad et al., 2011) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 390, |
|
"end": 411, |
|
"text": "(Prasad et al., 2011)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We offer explicitation of implicit discourse relations as a probing task to evaluate language models. Despite their relevancy, discourse relations are mostly overlooked in the assessments of language models' understanding of context. As a secondary aim, we investigate a wide range of pre-trained language models' understanding of inter-sentential relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We hope that the proposed pipeline will be another step in overcoming the data-bottleneck problem in discourse studies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2.1 Implicit Discourse Relations PDTB 2.0 adopts a lexicalized approach where each relation consists of a discourse connective (e.g. \"but\", \"and\") which acts as a predicate taking two arguments. For each relation, annotators were asked to annotate the connective, the two text spans that hold the relation and the sense it conveys based on the PDTB sense hierarchy (Prasad et al., 2008) . The text span which is syntactically bound to the connective is called the second argument (arg2) whereas the other is the first argument (arg1). \"Additionally, implicit relations are annotated with that explicit connective which according to judgements best expresses the sense of the relation.\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 365, |
|
"end": 386, |
|
"text": "(Prasad et al., 2008)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, in certain cases, a relation holds between the adjacent sentences despite the lack of an overt connective (see Example 1). PDTB 2.0 recognizes such relations as implicit discourse relations. Additionally, implicit relations are annotated with an explicit connective which best expresses the sense of the relation is according to annotators. The connective inserted by the annotators is termed as \"implicit connective\" (e.g. \"while\" in Example 1). Unlike explicit relations where there is an explicit textual cue (the connective), implicit relations can only be inferred which makes them more challenging to spot and annotate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The research on implicit discourse relation classification is overwhelmingly supervised Rutherford and Xue, 2015; Lan et al., 2017; Nie et al., 2019; Kim et al., 2020) . Although unsupervised methods were present in the earliest attempts (Marcu and Echihabi, 2002) , they haven't received serious attention and much research concentrated on increasing the available supervision to deal with the data; most prominently, either by automatically generating artificial data (Sporleder and Lascarides, 2008; Braud and Denis, 2014; Rutherford and Xue, 2015; Wu et al., 2016; Shi et al., 2017) or through introducing auxiliary but similar tasks to the training routine to leverage additional information (Zhou et al., 2010; Xu et al., 2012; Liu et al., 2016; Lan et al., 2017; Qin et al., 2017; Shi and Demberg, 2019a; Nie et al., 2019) . Zhou et al. (2010) and Xu et al. (2012) constitute the earliest examples where the classification of implicit relations are assisted via connective prediction. Both studies employ language models to predict suitable connectives for implicit relations which are, then, either used as additional features or classified directly. Ji et al. (2015) is one of the few recent distantly supervised 2 studies which tackle implicit relation classification as a domain adaptation problem where the labeled explicit relations are regarded as the source domain and the unlabeled implicit relations as the target. Huang and Li (2019) improves upon Ji et al. (2015) by employing adversarial domain adaption with a novel reconstruction component.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 113, |
|
"text": "Rutherford and Xue, 2015;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 114, |
|
"end": 131, |
|
"text": "Lan et al., 2017;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 149, |
|
"text": "Nie et al., 2019;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 167, |
|
"text": "Kim et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 264, |
|
"text": "(Marcu and Echihabi, 2002)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 502, |
|
"text": "(Sporleder and Lascarides, 2008;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 503, |
|
"end": 525, |
|
"text": "Braud and Denis, 2014;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 526, |
|
"end": 551, |
|
"text": "Rutherford and Xue, 2015;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 568, |
|
"text": "Wu et al., 2016;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 586, |
|
"text": "Shi et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 697, |
|
"end": 716, |
|
"text": "(Zhou et al., 2010;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 717, |
|
"end": 733, |
|
"text": "Xu et al., 2012;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 734, |
|
"end": 751, |
|
"text": "Liu et al., 2016;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 769, |
|
"text": "Lan et al., 2017;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 770, |
|
"end": 787, |
|
"text": "Qin et al., 2017;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 788, |
|
"end": 811, |
|
"text": "Shi and Demberg, 2019a;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 812, |
|
"end": 829, |
|
"text": "Nie et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 832, |
|
"end": 850, |
|
"text": "Zhou et al. (2010)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 855, |
|
"end": 871, |
|
"text": "Xu et al. (2012)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1159, |
|
"end": 1175, |
|
"text": "Ji et al. (2015)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1466, |
|
"end": 1482, |
|
"text": "Ji et al. (2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "BERT Bidirectional Encoder Representations for Transformers (BERT) is a multi-layer Transformer encoder based language model (Devlin et al., 2019) . As opposed to directional models where the input is processed from one direction to another, the transformer encoder reads its input at once; hence, BERT learns word representations in full context (both from left and from right). BERT is trained with two pre-training objectives on largescale unlabeled text: (i) Masked Language Modelling and (ii) Next Sentence Prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 146, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Language Models", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "A number of BERT variants are available that differ in terms of (i) their architecture, e.g. BERTbase (12-layer, 110M parameters) and BERT-large (24-layer, 340M parameters); (ii) whether the letter casing in its input is preserved (-cased) or not (uncased) ; (iii) their masking strategy, e.g. word pieces (default) or whole words (-whole-wordmasking).", |
|
"cite_spans": [ |
|
{ |
|
"start": 247, |
|
"end": 256, |
|
"text": "(uncased)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Language Models", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "RoBERTa RoBERTa (Liu et al., 2019) shares the same architecture as BERT but improves upon it via introducing a number of refinements to the training procedure, such as using more data with larger batch sizes, adopting a larger vocabulary, removal of the NSP objective and dynamic masking.", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 34, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Language Models", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "DistilBERT DistilBERT was introduced by (Sanh et al., 2019) . It is created by applying knowledge distillation to BERT which is a compression technique in which a small model learns to mimic the full output distribution of the target model (in this case: BERT). DistilBERT is claimed to retain 97% of BERT performance despite being 40% smaller and 60% faster, as suggested by its performance on Question Answering task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 59, |
|
"text": "(Sanh et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Language Models", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "GPT-2 Generatively Pre-trained Transformer (GPT-2) is a unidirectional transformer based language model trained on a dataset of 40 GB of web crawling data (Radford et al., 2019) . Unlike BERT, GPT-2 works like a traditional language model where each token can only attend to its previous context. GPT-2 has four variants which differ from each other in the number of layers, ranging from 12 (small) to 48 (XL).", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 177, |
|
"text": "(Radford et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Language Models", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The proposed method consists of three main components: (i) a candidate generator that generates sentence pairs connected by each of a set of discourse connectives, (ii) a language model that estimates the likelihood of each candidate, and (iii) an explicit discourse relation classifier to be used on the candidates. Whole pipeline is shown in Figure 1. The proposed methodology does not require even a single implicit discourse relation annotation and is only distantly supervised where the supervision comes from the explicit discourse relations used in training the classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 350, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The main motivation behind the proposed pipeline is the finding that discourse relations are easily classifiable if they are explicitly marked . We further verify this finding via a preliminary experiment which showed that four-way classification could be performed with an F-score of 88.74 when the implicit discourse relations are \"explicitated\" with the gold implicit connectives they are annotated with (see Table 2 ). This finding is significant not only because it justifies our motivation but also shows the potential of the current approach. Secondarily, the task requires a high level understanding of the context which allows us to investigate the pretrained language models capabilities in detecting inter-sentential relations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 419, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Recall Example 1, which contains an implicit relation between argument 1 (\"A figure above . . . to expand.\") and argument 2 (\"one below . . . be ahead.\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Generation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Given a list of English connectives (and, because, but, etc.), we generate the following explicit relation candidates for a given implicit relation: (Prasad et al., 2008) . Of the listed 100 connectives, 3 we limit ourselves to 65 one-word connectives to generate the candidates due to masked language models' inability to predict multiple tokens simultaneously.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 170, |
|
"text": "(Prasad et al., 2008)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Generation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A 1 C A 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Generation", |
|
"sec_num": "3.1" |
|
}, |
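A minimal sketch of the candidate generation step described above, assuming a small illustrative subset of the 65 one-word PDTB connectives; the exact detokenization (casing, punctuation) used by the authors is not specified, so the handling below is an assumption.

```python
# Sketch of candidate generation: insert each one-word connective between the
# two arguments of an implicit relation. The connective list is only an
# illustrative subset of the 65 one-word PDTB connectives used in the paper.
CONNECTIVES = ["and", "but", "because", "while", "so", "instead", "then"]

def generate_candidates(arg1: str, arg2: str):
    """Return (connective, candidate) pairs of the form 'A_1 C A_2.'"""
    arg1 = arg1.rstrip(".")            # drop the sentence-final period of A_1 (assumption)
    arg2 = arg2[0].lower() + arg2[1:]  # A_2 continues the sentence after C (assumption)
    return [(c, f"{arg1} {c} {arg2}") for c in CONNECTIVES]

# Example 1 from the paper:
candidates = generate_candidates(
    "A figure above 50 indicates the economy is likely to expand.",
    "One below 50 indicates a contraction may be ahead.",
)
```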
|
{ |
|
"text": "Our next task is to produce a distribution over connectives C conditioned on the context (arguments A 1 and A 2 ). For unidirectional language models (in our case: GPT-2 variants), we estimate this by computing the language model likelihood of the entire candidates and normalizing over the connec- tives:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prediction of Implicit Connectives", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "P Conn (C|A 1 , A 2 ) \u221d P LM (A 1 C A 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prediction of Implicit Connectives", |
|
"sec_num": "3.2" |
|
}, |
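A minimal sketch of this scoring step for a unidirectional language model, using the Hugging Face gpt2 checkpoint: the sequence log-likelihood is recovered from the mean token cross-entropy returned by the library and then normalized over the connective list. This illustrates the computation described above; it is not the authors' code.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def connective_distribution(candidates):
    """candidates: list of (connective, 'A_1 C A_2') pairs; returns P_Conn over them."""
    log_likelihoods = []
    for _, text in candidates:
        ids = tokenizer(text, return_tensors="pt").input_ids
        # .loss is the mean cross-entropy over predicted tokens; multiplying by the
        # number of predicted tokens gives the sequence log-likelihood.
        loss = model(ids, labels=ids).loss
        log_likelihoods.append(-loss * (ids.size(1) - 1))
    # P_Conn(C | A_1, A_2) is proportional to P_LM(A_1 C A_2)
    return torch.softmax(torch.stack(log_likelihoods), dim=0)
```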
|
{ |
|
"text": "With bidirectional masked language models (in our case: DistilBERT, BERT and RoBERTa) we need to instead provide a candidate template by inserting the special sentence separation ([SEP]) and masking ([MASK]) tokens. Then it is simply a matter of normalizing over the model's estimated probability of the connective being inserted at the position of the masking token:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prediction of Implicit Connectives", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "P Conn (C|A 1 , A 2 ) \u221d P LM (C|A 1 [SEP] [MASK] A 2 [SEP])", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prediction of Implicit Connectives", |
|
"sec_num": "3.2" |
|
}, |
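A corresponding sketch for the masked-LM case with bert-base-uncased: the template A_1 [SEP] [MASK] A_2 is scored once, and the probability mass assigned to each single-token connective at the [MASK] position is renormalized over the connective list. Connectives that are not single tokens in the vocabulary would need extra handling, which is omitted here.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def connective_distribution_mlm(arg1, arg2, connectives):
    # The tokenizer adds [CLS] ... [SEP], yielding: [CLS] A_1 [SEP] [MASK] A_2 [SEP]
    enc = tokenizer(f"{arg1} [SEP] [MASK] {arg2}", return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tokenizer.mask_token_id).nonzero()[0, 0]
    logits = model(**enc).logits[0, mask_pos]       # vocabulary logits at [MASK]
    # Assumes each connective is a single token in the BERT vocabulary.
    conn_ids = tokenizer.convert_tokens_to_ids(list(connectives))
    # Renormalizing the selected logits gives P_Conn(C | A_1 [SEP] [MASK] A_2 [SEP]).
    return torch.softmax(logits[conn_ids], dim=0)
```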
|
{ |
|
"text": "We regard discourse relation classification as a sentence pair classification task and build a classifier on top of the pre-trained BERT model from Devlin et al. (2019) using the recommended fine-tuning strategy. Specifically, the first and second arguments are separated via the special separator token ([SEP]) with the connective on the second argument and the [CLS] token is used for classification through a fully connected layer with softmax activation. This classifier gives us a model for the distribution P Exp (l|C, A 1 , A 2 ) of relation labels l conditioned on the connective C and its arguments A 1 and A 2 . The annotation of explicit and implicit relations in the PDTB 2.0 differ in several aspects. In the case of implicit relations, PDTB 2.0 annotates arguments in the order they appear in the text, hence implicit relations can only manifest one configuration (i.e. arg1, [conn], arg2). On the other hand, the relative argument order of the explicit relations can vary to the extent that sometimes the arguments may interrupt each other (e.g. Of course, if the film contained dialogue, Mr. Lane's Artist would be called a homeless person. [from wsj-0039]). In order to remedy for this disparity to some extent, we only use the explicit relations which share the same relative argument order with implicit relations (i.e. arg1, conn, arg2) in training the classifier so that there is not any discrepancy in terms of the relation structure between training and inference phases. In total, 2558 (13.85%) explicit relations that do not follow the (arg1,conn,arg2) order are left out.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit Discourse Relation Classifier", |
|
"sec_num": "3.3" |
|
}, |
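A sketch of the explicit relation classifier setup described above, using the standard sequence-classification head from the Transformers library; the label set shown is the four first-level PDTB senses, and the model would still need to be fine-tuned on the explicit relations before use.

```python
from transformers import BertForSequenceClassification, BertTokenizerFast

LABELS = ["Comparison", "Contingency", "Expansion", "Temporal"]

tokenizer = BertTokenizerFast.from_pretrained("bert-large-uncased")
classifier = BertForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=len(LABELS)
)

def encode_relation(arg1, connective, arg2):
    # Sentence-pair input: [CLS] arg1 [SEP] connective arg2 [SEP]; the [CLS]
    # representation feeds the softmax classification head.
    return tokenizer(arg1, f"{connective} {arg2}", truncation=True,
                     return_tensors="pt")

# P_Exp(l | C, A_1, A_2) is the softmax over these logits (after fine-tuning).
logits = classifier(**encode_relation(
    "A figure above 50 indicates the economy is likely to expand",
    "while",
    "one below 50 indicates a contraction may be ahead.",
)).logits
```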
|
{ |
|
"text": "In our experiments we combine the models in two ways. The simplest way is a straightforward pipeline approach, where the single most likely implicit connective is predicted, and then fed to the explicit relation classifier:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Model", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "P (l|A 1 , A 2 ) = P Exp (l| arg max C P Conn (C|A 1 , A 2 ), A 1 , A 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Model", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Even though the level of ambiguity in English discourse connectives is relatively low, we also try to account for this ambiguity by marginalizing over all connectives:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Model", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "P (l|A 1 , A 2 ) = C P Exp (l|C, A 1 , A 2 ) \u00d7 P Conn (C|A 1 , A 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Model", |
|
"sec_num": "3.4" |
|
}, |
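The two inference strategies of Section 3.4 can be written as a short sketch, assuming p_conn is the connective distribution from Section 3.2 (a tensor aligned with the connective list) and p_exp(c) returns the classifier's label distribution for connective c; both names are placeholders for this illustration.

```python
import torch

def predict_pipeline(p_conn, p_exp, connectives):
    # Strategy 1: feed only the single most likely connective to the classifier.
    best = connectives[int(torch.argmax(p_conn))]
    return int(torch.argmax(p_exp(best)))

def predict_marginalized(p_conn, p_exp, connectives):
    # Strategy 2: marginalize the label distribution over all connectives,
    # weighting each P_Exp(l | C, A_1, A_2) by P_Conn(C | A_1, A_2).
    label_probs = sum(p_conn[i] * p_exp(c) for i, c in enumerate(connectives))
    return int(torch.argmax(label_probs))
```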
|
{ |
|
"text": "We follow the experimental setting of Huang and Li (2019) which is originally adopted by (Ji et al., 2015) . The implicit relations in the PDTB 2.0 sections 21-22 are allocated as the test set whereas the explicit relations in sections 2-20;23-24 are used as the training and 0-1 as the development set of the explicit relation classifier. The evaluation is performed for both the four first-level and the most common 11 second-level senses. For the former, we report both per-class and the macro-average F1-scores similar to Huang and Li (2019) whereas the accuracy is also reported on the second level Table 2 : The results of the proposed methodology with various pre-trained language models. The average performance over four runs is reported (numbers within parentheses indicate the standard deviation). L stands for 'large' and wwm stands for 'whole-word-masking'. \"+ Margin\" refers to the second inference strategy explained in Section 3.4. Best scores are presented in bold, second bests are in italics (excluding the baselines).", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 106, |
|
"text": "(Ji et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 604, |
|
"end": 611, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "senses following the standard in the literature. The statistics of the used datasets are provided in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 109, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The classifiers are implemented using the Transformers library by Huggingface (Wolf et al., 2020) . We use the uncased BERT large model for the explicit relation classifier (Section 3.3). The model is fine-tuned for ten epochs with a batch size of 16, learning rate of 5 \u00d7 10 \u22126 . To optimize the loss function, we use Adam with fixed weight decay (Loshchilov and Hutter, 2018) and warm-up linearly for the first 1K steps. The model is evaluated with the step size of 500 and the one with the best development performance is used as the final model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 97, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 377, |
|
"text": "(Loshchilov and Hutter, 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
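A sketch of this fine-tuning configuration with the Transformers Trainer API (the library's default optimizer is AdamW with a linear schedule, matching the description above); classifier, train_set, and dev_set are assumed to be defined elsewhere, and the argument names follow the library rather than the authors' exact script.

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="explicit-classifier",
    num_train_epochs=10,              # ten epochs
    per_device_train_batch_size=16,   # batch size 16
    learning_rate=5e-6,               # 5 x 10^-6
    warmup_steps=1000,                # linear warm-up for the first 1K steps
    evaluation_strategy="steps",
    eval_steps=500,                   # evaluate every 500 steps
    save_steps=500,
    load_best_model_at_end=True,      # keep the best development checkpoint
)
trainer = Trainer(model=classifier, args=args,
                  train_dataset=train_set, eval_dataset=dev_set)
trainer.train()
```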
|
{ |
|
"text": "We mainly compare our results against the recent unsupervised studies we are aware of (Huang and Li, 2019; Ji et al., 2015) . Additionally, we report the performance of a number baselines and upper bounds to put the results into a perspective:", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 106, |
|
"text": "(Huang and Li, 2019;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 123, |
|
"text": "Ji et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Most Common Sense: The performance when the most common sense of each evaluation level is predicted for every relation in the test set (Expansion for the first level; Contingency.cause for the second).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Most Common Connective: The performance when the candidate with the most common explicit connective (but) is selected for every relation in the test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Gold Connective: The performance when the candidate with the gold implicit connective is selected. This baseline also shows the upper bound of the proposed pipeline (see Section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Supervised baseline: This is the results of the BERT classifier fine-tuned on the implicit discourse relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The results are provided in Table 2 . Overall, the 4-way classification F-score ranges between 33.86 (DistilBERT) to 41.10 (GPT2-large) where three models outperform the previous state-of-the-art (RoBERTa-large, GPT2-large, GPT2-XL). Moreover, the performance is robust across different sense levels as suggested by its relative performance to the baselines in the more challenging 11-way classification.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 35, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation on PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In addition to the increase in the overall performance, the most substantial gain is observed in Comparison relations where the unsupervised stateof-the-art is improved by almost 25% points to 49.52%, bringing it closer to the supervised baseline (58.35%). The relatively successful performance in Comparison relations hold for all language models, suggesting that language models are good at detecting the cues for these relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation on PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Marginalizing over all connectives leads constant improvements with all language models. Marginalization yields average gain of 2.12% when with BERT-variants and 2.04% with GPT2 models. This step alters only a small portion of predictions, on average 10.1% of the predictions change after marginalization. Relation-wise Contingency benefits from this step most with the average increase of 4.20%. In order to have a better insight, we closely inspect the label shifts in RoBERTa-large's predictions which reveals that the most frequent label shift is from Expansion to Contingency relations (41.1%). These changes mostly occur when there is a clear mismatch between the top connective and others following it in terms of their sense. To illustrate, Example 2 presents a relation, label of which was changed from Expansion to Contingency where the top five selected connectives were: \"and\",\"as\",\"because\",\"since\",\"for\". Of these connectives, only \"and\" dominantly conveys Table 3 : The agreement in percent of the language models for connective and sense prediction (see text for details). The first two rows show the results when only the respective connectives are predicted for all relations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 971, |
|
"end": 978, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation on PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Expansion whereas others commonly convey Contingency. Marginalization acts as a corrective step in such cases and saves the model from depending on the top-rank connective by allowing it to consider the connective predictions with lower ranks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation on PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "(2) Experts are predicting a big influx of new shows in 1990, when a service called \"automatic number information\" will become widely available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation on PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "[IMP=because] This service identifies each caller's phone number, and it can be used to generate instant mailing lists.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation on PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Finally, as for 11-way classification, the same pattern also holds where marginalization leads to the average of 1.07% and 2.27% improvement in F-score and accuracy, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation on PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In order to investigate how well the language models perform their task, we present in Table 3 the agreement between the human-annotated implicit connective and each model's top-ranked connective 4 (column Conn) as well as the agreement between the most frequent sense of that top-ranked connective and the gold sense label (column Sense).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 94, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Language Models via Selected Candidates", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "From the low connective agreement figures, we see that the models generally fail to prioritize the connective favored by the annotators; yet, as evidenced by the high sense agreement, they are able to select a connective which suits the given context and thereby helps the explicit relation classifier. We further illustrate the connective predictions of the top language models from each family (RoBERTalarge and GPT2-large) via confusion matrices in Figure 2 . As can be seen, the connective predictions are very scattered showing that language models struggle to predict annotators' decisions. However, we would like to note that matching human annotators' performance in connective insertion does not yield informative insights due to ambiguity; that is, for many implicit relations, there are multiple connectives that work as fine. Therefore, we suggest the evaluation focusing on the sense conveyed by the implicit relation and the connective (column Sense) as a more reliable way to assess the language models' performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 452, |
|
"end": 460, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Language Models via Selected Candidates", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "too harsh a criteria to assess the language models since in many cases, there are more than one possible connectives that work as fine. Therefore, we would like to note that the second evaluation, matching the sense Table 3 also suggests that BERT-based models perform better when it comes to selecting a suitable connective than the GPT2 family. We hypothesize that this is because bidirectional gap-filling language models have a training objective that is very close to the type of candidates we use. Finally, despite yielding the worst results, DistilBERT can retain most of BERT-base's performance (\u223c 97%), proving that even the smaller models can be utilized for the current task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 223, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Language Models via Selected Candidates", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The limited number of the manual annotations does not account for the whole data bottleneck problem in discourse parsing, as the available corpora lack textual variety as much as numbers. Inarguably, PDTB is used as both the training and validation data in the bulk of studies; hence, most research on discourse parsing is confined to one domain. Unfortunately, initial attempts show that sub-tasks of discourse parsing generalize poorly across-domains (Stepanov and Riccardi, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 453, |
|
"end": 482, |
|
"text": "(Stepanov and Riccardi, 2014)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cross-domain Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In order to test how our pipeline generalizes to another domain, we run a set of experiments on the Biomedical Discourse Relation Bank (BioDRB) (Prasad et al., 2011) . BioDRB closely follows the PDTB 2.0 annotation framework 5 and is annotated over 24 full-text articles in the biomedical domain which is quite different from that of PDTB. Probably due to this difference and its relatively smaller size, BioDRB is mostly overlooked in computational studies. Consequently, there are only few Table 4 : The results of the cross-domain experiments on BioDRB set. Test set refers to the results on the designated test set of BioDRB whereas Full data is the whole corpus. All baselines are supervised and their results are taken from (Shi and Demberg, 2019b) . results on BioDRB and unsurprisingly they are all from supervised methods. We compare our results with (Shi and Demberg, 2019b) which reports the state-of-the-art cross-domain results, along with the results from a number of baselines. For the sake of comparability, we follow their experimental settings and report both 4-and 11-way classification results on the BioDRB test set 6 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 165, |
|
"text": "(Prasad et al., 2011)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 730, |
|
"end": 754, |
|
"text": "(Shi and Demberg, 2019b)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 860, |
|
"end": 884, |
|
"text": "(Shi and Demberg, 2019b)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 492, |
|
"end": 499, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cross-domain Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Additionally, as a more rigorous evaluation, we also report results on the whole BioDRB corpus. That way, we aim to free the evaluation of the generalization abilities of our pipeline from any bias that may rise from using a certain sub-part of the corpus. Finally, it must be noted that the LMs are 6 which is originally suggested by (Xu et al., 2012) and consists of the files GENIA 1421503 and GENIA 1513057 not fine-tuned in any way on the target corpus (Bio-DRB) in either setting. The results are provided in Table 4 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 352, |
|
"text": "(Xu et al., 2012)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 515, |
|
"end": 522, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cross-domain Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The results suggest that our pipeline has strong cross-domain performance despite explicit relation classifier's being trained on only PDTB. In both 4-way and 11-way classification, we are able to outperform the zero-shot performance of even the supervised approaches, including the recent neural approaches (Bai and Zhao, 2018) . We hypothesize that our two-step pipeline plays the key role in mitigating the domain-specific problems. Since we are using the \"raw\" (unfinetuned) language models to rank candidates, we are able to directly leverage the knowledge of these models that they learn from numerous domains thanks to their diverse training data. Once the suitable connectives are highlighted by the language model, the explicit relation classifier can mainly rely on them to make the prediction; hence, less affected by the domain change.", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 328, |
|
"text": "(Bai and Zhao, 2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cross-domain Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In addition to its inherent difficulty, implicit discourse relation classification becomes even more challenging with the lack of sufficient data. In the current study, we focus on the latter problem by assuming the extreme low-resource scenario where there are no labeled implicit discourse relations. The data shortage is mitigated by leveraging the contextual information of the available pre-trained language models through explicitation of the implicit relations. We show that the proposed pipeline, despite its simplicity, is able to outperform the previous attempts. Furthermore, by taking another step, we tested the proposed architecture in the more challenging 11-way setting as well as on a completely different domain. The experimental results confirm that our model is robust and generalizes well, even compared to recent supervised approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In the reminder of the paper, these candidate explicit relations are simply referred as candidates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Previous work uses the term unsupervised (domain adaptation). Although we use the same amount of supervision with earlier work (no labeled implicit relation are utilized), we believe distant supervision describes the method better.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The modified connectives such as \"partly because\" are not counted as distinct types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We limit this analysis only to the relations annotated with an one-word gold implicit connective due to our design criteria (see Section 3.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Yet, BioDRB uses slightly different sense hierarchy. We follow the instructions on(Prasad et al., 2011) to map the senses back to PDTB 2.0 hierarchy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Dmitry Nikolaev, Johan Sjons, Bernhard W\u00e4lchli and Faruk B\u00fcy\u00fcktekin for their useful comments. The three outstanding reviews from the workshop also helped us greatly. We thank NVIDIA for their GPU grant, and the Swedish National Infrastructure For Computing (SNIC) for providing computational resources under Project 2020/33-26.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Deep enhanced representation for implicit discourse relation recognition", |
|
"authors": [ |
|
{ |
|
"first": "Hongxiao", |
|
"middle": [], |
|
"last": "Bai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "571--583", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongxiao Bai and Hai Zhao. 2018. Deep enhanced representation for implicit discourse relation recog- nition. In Proceedings of the 27th International Con- ference on Computational Linguistics, pages 571- 583, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Combining natural and artificial examples to improve implicit discourse relation identification", |
|
"authors": [ |
|
{ |
|
"first": "Chlo\u00e9", |
|
"middle": [], |
|
"last": "Braud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1694--1705", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chlo\u00e9 Braud and Pascal Denis. 2014. Combining natu- ral and artificial examples to improve implicit dis- course relation identification. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1694-1705, Dublin, Ireland. Dublin City Uni- versity and Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT (1).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised adversarial domain adaptation for implicit discourse relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Ping", |
|
"middle": [], |
|
"last": "Hsin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junyi Jessy", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "686--695", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K19-1064" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hsin-Ping Huang and Junyi Jessy Li. 2019. Unsu- pervised adversarial domain adaptation for implicit discourse relation classification. In Proceedings of the 23rd Conference on Computational Natural Lan- guage Learning (CoNLL), pages 686-695, Hong Kong, China. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Closing the gap: Domain adaptation from explicit to implicit discourse relations", |
|
"authors": [ |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gongbo", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2219--2224", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yangfeng Ji, Gongbo Zhang, and Jacob Eisenstein. 2015. Closing the gap: Domain adaptation from explicit to implicit discourse relations. In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2219-2224.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Implicit discourse relation classification: We need to talk about evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Najoung", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Song", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chulaka", |
|
"middle": [], |
|
"last": "Gunasekara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Lastras", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5404--5414", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.480" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Najoung Kim, Song Feng, Chulaka Gunasekara, and Luis Lastras. 2020. Implicit discourse relation clas- sification: We need to talk about evaluation. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5404- 5414, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Multi-task attentionbased neural networks for implicit discourse relationship representation and identification", |
|
"authors": [ |
|
{ |
|
"first": "Man", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianxiang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuanbin", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng-Yu", |
|
"middle": [], |
|
"last": "Niu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1299--1308", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1134" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Man Lan, Jianxiang Wang, Yuanbin Wu, Zheng-Yu Niu, and Haifeng Wang. 2017. Multi-task attention- based neural networks for implicit discourse rela- tionship representation and identification. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 1299- 1308, Copenhagen, Denmark. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Implicit discourse relation classification via multi-task neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifang", |
|
"middle": [], |
|
"last": "Sui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2750--2756", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Liu, Sujian Li, Xiaodong Zhang, and Zhifang Sui. 2016. Implicit discourse relation classification via multi-task neural networks. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2750-2756.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Roberta: A robustly optimized bert pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Fixing weight decay regularization in adam", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Loshchilov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Hutter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam. arXiv preprint ArXiv:1711.05101.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "An unsupervised approach to recognizing discourse relations", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdessamad", |
|
"middle": [], |
|
"last": "Echihabi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting of the association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "368--375", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Marcu and Abdessamad Echihabi. 2002. An un- supervised approach to recognizing discourse rela- tions. In Proceedings of the 40th annual meeting of the association for computational linguistics, pages 368-375.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "DisSent: Learning sentence representations from explicit discourse relations", |
|
"authors": [ |
|
{ |
|
"first": "Allen", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Bennett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4497--4510", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1442" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allen Nie, Erin Bennett, and Noah Goodman. 2019. DisSent: Learning sentence representations from ex- plicit discourse relations. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4497-4510, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Automatic sense prediction for implicit discourse relations in text", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Annie", |
|
"middle": [], |
|
"last": "Louis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "683--691", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse re- lations in text. In Proceedings of the Joint Confer- ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP, pages 683-691.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Using syntax to disambiguate explicit discourse connectives in text", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "13--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Pitler and Ani Nenkova. 2009. Using syntax to disambiguate explicit discourse connectives in text. In Proceedings of the ACL-IJCNLP 2009 Confer- ence Short Papers, pages 13-16, Suntec, Singapore. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The penn discourse treebank 2.0", |
|
"authors": [ |
|
{ |
|
"first": "Rashmi", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Dinesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eleni", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Livio", |
|
"middle": [], |
|
"last": "Robaldo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind K Joshi, and Bon- nie L Webber. 2008. The penn discourse treebank 2.0. In In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The biomedical discourse relation bank", |
|
"authors": [ |
|
{ |
|
"first": "Rashmi", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Mcroy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadya", |
|
"middle": [], |
|
"last": "Frid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "BMC bioinformatics", |
|
"volume": "12", |
|
"issue": "1", |
|
"pages": "1--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rashmi Prasad, Susan McRoy, Nadya Frid, Aravind Joshi, and Hong Yu. 2011. The biomedical dis- course relation bank. BMC bioinformatics, 12(1):1- 18.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Adversarial connectiveexploiting networks for implicit discourse relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Lianhui", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhisong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiting", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1006--1017", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1093" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, and Eric Xing. 2017. Adversarial connective- exploiting networks for implicit discourse relation classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1006-1017, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Language models are unsupervised multitask learners", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rewon", |
|
"middle": [], |
|
"last": "Child", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "OpenAI blog", |
|
"volume": "1", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Improving the inference of implicit discourse relations via classifying explicit discourse connectives", |
|
"authors": [ |
|
{ |
|
"first": "Attapol", |
|
"middle": [], |
|
"last": "Rutherford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "799--808", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/N15-1081" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Attapol Rutherford and Nianwen Xue. 2015. Improv- ing the inference of implicit discourse relations via classifying explicit discourse connectives. In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 799-808, Denver, Colorado. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.01108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Learning to explicitate connectives with Seq2Seq network for implicit discourse relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 13th International Conference on Computational Semantics -Long Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--199", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-0416" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Shi and Vera Demberg. 2019a. Learning to explic- itate connectives with Seq2Seq network for implicit discourse relation classification. In Proceedings of the 13th International Conference on Computational Semantics -Long Papers, pages 188-199, Gothen- burg, Sweden. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Next sentence prediction helps implicit discourse relation classification within and across domains", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5790--5796", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1586" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Shi and Vera Demberg. 2019b. Next sentence pre- diction helps implicit discourse relation classifica- tion within and across domains. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5790-5796, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Using explicit discourse connectives in translation for implicit discourse relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frances", |
|
"middle": [], |
|
"last": "Yung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Rubino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "484--495", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Shi, Frances Yung, Raphael Rubino, and Vera Demberg. 2017. Using explicit discourse connec- tives in translation for implicit discourse relation classification. In Proceedings of the Eighth Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 484-495.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Using automatically labelled examples to classify rhetorical relations: An assessment", |
|
"authors": [ |
|
{ |
|
"first": "Caroline", |
|
"middle": [], |
|
"last": "Sporleder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Natural Language Engineering", |
|
"volume": "14", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caroline Sporleder and Alex Lascarides. 2008. Using automatically labelled examples to classify rhetori- cal relations: An assessment. Natural Language En- gineering, 14(3):369.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Towards cross-domain pdtb-style discourse parsing", |
|
"authors": [ |
|
{ |
|
"first": "Evgeny", |
|
"middle": [], |
|
"last": "Stepanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Riccardi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evgeny Stepanov and Giuseppe Riccardi. 2014. To- wards cross-domain pdtb-style discourse parsing. In Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi), pages 30-37.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Remi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "von Platen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Plu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Canwen", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teven", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Scao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Gugger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariama", |
|
"middle": [], |
|
"last": "Drame", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quentin", |
|
"middle": [], |
|
"last": "Lhoest", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-demos.6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Bilinguallyconstrained synthetic data for implicit discourse relation recognition", |
|
"authors": [ |
|
{ |
|
"first": "Changxing", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yidong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanzhou", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinsong", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2306--2312", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1253" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Changxing Wu, Xiaodong Shi, Yidong Chen, Yanzhou Huang, and Jinsong Su. 2016. Bilingually- constrained synthetic data for implicit discourse re- lation recognition. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 2306-2312, Austin, Texas. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Connective prediction using machine learning for implicit discourse relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Man", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [ |
|
"Yu" |
|
], |
|
"last": "Niu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chew Lim", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "The 2012 International Joint Conference on Neural Networks (IJCNN)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Xu, Man Lan, Yue Lu, Zheng Yu Niu, and Chew Lim Tan. 2012. Connective prediction us- ing machine learning for implicit discourse relation classification. In The 2012 International Joint Con- ference on Neural Networks (IJCNN), pages 1-8. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "TDB 1.1: Extensions on Turkish discourse bank", |
|
"authors": [ |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Zeyrek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Murathan", |
|
"middle": [], |
|
"last": "Kurfal\u0131", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 11th Linguistic Annotation Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "76--81", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-0809" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deniz Zeyrek and Murathan Kurfal\u0131. 2017. TDB 1.1: Extensions on Turkish discourse bank. In Proceed- ings of the 11th Linguistic Annotation Workshop, pages 76-81, Valencia, Spain. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Predicting discourse connectives for implicit discourse relation recognition", |
|
"authors": [ |
|
{ |
|
"first": "Zhi-Min", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng-Yu", |
|
"middle": [], |
|
"last": "Niu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Man", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chew Lim", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1507--1514", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhi-Min Zhou, Yu Xu, Zheng-Yu Niu, Man Lan, Jian Su, and Chew Lim Tan. 2010. Predicting discourse connectives for implicit discourse relation recog- nition. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1507-1514. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Explicit and implicit discourse relations in the prague discourse treebank", |
|
"authors": [ |
|
{ |
|
"first": "S\u00e1rka", |
|
"middle": [], |
|
"last": "Zik\u00e1nov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "M\u00edrovsk\u1ef3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavl\u00edna", |
|
"middle": [], |
|
"last": "Synkov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Text, Speech, and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "236--248", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S\u00e1rka Zik\u00e1nov\u00e1, Ji\u0159\u00ed M\u00edrovsk\u1ef3, and Pavl\u00edna Synkov\u00e1. 2019. Explicit and implicit discourse relations in the prague discourse treebank. In International Confer- ence on Text, Speech, and Dialogue, pages 236-248. Springer.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "A high level visualization of the proposed pipeline.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "The (truncated) confusion matrices between the predicted and gold connectives of the implicit relations in PDTB 2.0 test set. The matrices are confined to relations with one of the most frequent 10 implicit connectives for readability purposes. The x-axis presents the gold connectives whereas the y-axis shows the predictions.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": ". . to expand and [o]ne below . . . . . . to expand because [o]ne below . . . . . . to expand but [o]ne below . . . The list of connectives are chosen among the lexical items PDTB 2.0 annotation guideline recognizes as discourse connectives", |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "Number of instances in the respective datasets. For the BioDRB test and full distinctions, please refer to Section 5.3.", |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "Margin 58.86 24.53 30.36 12.15 62.12 33.69 30.66 11.62 GPT2-large 62.85 29.70 36.69 19.17 62.47 36.59 33.08 12.97 + Margin 60.81 29.48 34.82 15.18 61.91 38.33 33.86 13.61 GPT2-XL 58.86 33.54 35.11 16.23 59.19 39.86 34.22 14.75 + Margin 56.75 33.17 34.53 12.54 59.19 41.28 35.33 15.25 RoBERTa-base 78.70 29.70 37.84 12.92 74.73 33.52 33.45 10.22 + Margin 78.05 28.83 37.41 13.55 74.67 34.31 34.13 10.98 RoBERTa-large 71.38 28.44 37.84 13.21 71.26 35.77 32.42 11.25 + Margin 70.98 28.46 38.13 13.49 71.42 37.71 33.70 12.93", |
|
"content": "<table><tr><td/><td/><td colspan=\"2\">Test set</td><td/><td/><td colspan=\"2\">Full Data</td><td/></tr><tr><td/><td colspan=\"2\">4-way</td><td colspan=\"2\">11-way</td><td colspan=\"2\">4-way</td><td colspan=\"2\">11-way</td></tr><tr><td/><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td></tr><tr><td>Bi-LSTM baseline</td><td>-</td><td>-</td><td>32.97</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>(Bai and Zhao, 2018)</td><td>-</td><td>-</td><td>29.52</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MaxEnt baseline</td><td colspan=\"2\">58.44 26.64</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>(Shi and Demberg, 2019b)</td><td colspan=\"3\">77.34 43.03 45.19</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BERT-base-uncased</td><td colspan=\"8\">54.15 30.29 36.98 14.59 54.90 36.30 33.80 13.99</td></tr><tr><td colspan=\"9\">+ Margin 52.11 30.15 36.69 15.41 55.15 37.46 35.75 14.59</td></tr><tr><td>BERT-large-cased</td><td colspan=\"8\">75.37 26.51 37.12 10.29 72.28 30.11 32.19 8.28</td></tr><tr><td colspan=\"9\">+ Margin 70.57 25.62 34.53 10.74 68.69 31.21 31.82 10.07</td></tr><tr><td>BERT-large-cased-wwm</td><td colspan=\"8\">62.36 24.59 32.95 10.87 65.36 33.59 31.81 11.39</td></tr><tr><td colspan=\"9\">+ Margin 56.99 24.79 31.37 11.18 59.83 33.10 31.20 11.90</td></tr><tr><td>BERT-large-uncased</td><td colspan=\"8\">58.05 30.43 35.25 12.82 57.32 36.21 34.23 13.85</td></tr><tr><td colspan=\"9\">+ Margin 57.24 31.84 37.99 15.58 57.01 37.73 35.23 14.54</td></tr><tr><td>BERT-large-uncased-wwm</td><td colspan=\"8\">61.22 32.24 38.27 15.29 60.05 37.49 34.58 14.09</td></tr><tr><td colspan=\"9\">+ Margin 51.95 30.39 36.83 15.62 53.98 37.03 34.40 14.55</td></tr><tr><td>DistilBERT-base-cased</td><td colspan=\"8\">39.51 23.62 21.44 11.77 41.21 28.93 21.78 10.47</td></tr><tr><td colspan=\"9\">+ Margin 40.00 27.78 25.32 14.97 38.35 30.43 23.89 11.56</td></tr><tr><td>GPT2</td><td colspan=\"8\">59.11 24.36 30.94 11.29 62.85 32.37 29.82 10.44</td></tr><tr><td>+</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |