|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:58:58.377211Z" |
|
}, |
|
"title": "Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Kamil", |
|
"middle": [], |
|
"last": "Bujel", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Yannakoudakis", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Rei", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We investigate how sentence-level transformers can be modified into effective sequence labelers at the token level without any direct supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied on transformer-based architectures. As transformers contain multiple layers of multihead self-attention, information in the sentence gets distributed between many tokens, negatively affecting zero-shot token-level performance. We find that a soft attention module which explicitly encourages sharpness of attention weights can significantly outperform existing methods.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We investigate how sentence-level transformers can be modified into effective sequence labelers at the token level without any direct supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied on transformer-based architectures. As transformers contain multiple layers of multihead self-attention, information in the sentence gets distributed between many tokens, negatively affecting zero-shot token-level performance. We find that a soft attention module which explicitly encourages sharpness of attention weights can significantly outperform existing methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sequence labeling and sentence classification can represent facets of the same task at different granularities; for example, detecting grammar errors and predicting the grammaticality of sentences. Transformer-based architectures such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have been shown to achieve state-of-the-art results on both sequence labeling (Bell et al., 2019) and sentence classification (Sun et al., 2019) problems. However, such tasks are typically treated in isolation rather than within a unified approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 264, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 295, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 393, |
|
"text": "(Bell et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 440, |
|
"text": "(Sun et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we investigate methods for inferring token-level predictions from transformer models trained only on sentence-level annotations. The ability to classify individual tokens without direct supervision opens possibilities for training sequence labeling models on tasks and datasets where only sentence-level or document-level annotation is available. In addition, attention-based architectures allow us to directly investigate what the model is learning and to quantitatively measure whether its rationales (supporting evidence) for particular input sentences match human expectations. While evaluating the faithfulness (Herman, 2017) of a model's rationale is still an open research question and up for debate (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; DeYoung et al., 2020; Jacovi and Goldberg, 2020; Atanasova et al., 2020) , the methods explored here allow for measuring the plausibility (agreeability to human annotators; DeYoung et al. (2020)) of transformer-based models using existing sequence labeling datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 631, |
|
"end": 645, |
|
"text": "(Herman, 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 722, |
|
"end": 746, |
|
"text": "(Jain and Wallace, 2019;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 774, |
|
"text": "Wiegreffe and Pinter, 2019;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 775, |
|
"end": 796, |
|
"text": "DeYoung et al., 2020;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 797, |
|
"end": 823, |
|
"text": "Jacovi and Goldberg, 2020;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 824, |
|
"end": 847, |
|
"text": "Atanasova et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluate and compare different methods for adapting pre-trained transformer models into zeroshot sequence labelers, trained using only gold sentence-level signal. Our experiments show that applying existing approaches to transformer architectures is not straightforward -transformers already contain several layers of multi-head attention, distributing sentencelevel information across many tokens, whereas the existing methods rely on all the information going through one central attention module. Approaches such as LIME (Ribeiro et al., 2016) for scoring word importance also struggle to infer correct token-level annotations in a zero-shot manner (e.g., it achieves only 2% F-score on one of our datasets). We find that a modified attention function is needed to allow transformers to better focus on individual important tokens and achieve a new state-of-the-art on zero-shot sequence labeling.", |
|
"cite_spans": [ |
|
{ |
|
"start": 527, |
|
"end": 549, |
|
"text": "(Ribeiro et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The contributions of this paper are fourfold:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We present the first experiments utilizing (pretrained) sentence-level transformers as zeroshot sequence labelers;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We perform a systematic comparison of alternative methods for zero-shot sequence labeling on different datasets;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose a novel modification of the attention function that significantly improves zero-shot sequence-labeling performance of transformers over the previous state of the art, while achieving on-par or better results on sentence classification;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We make our source code and models publicly available to facilitate further research in the field. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluate four different methods for turning sentence-level transformer models into zero-shot sequence labelers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "LIME (Ribeiro et al., 2016) generates local wordlevel importance scores through a meta-model that is trained on perturbed data generated by randomly masking out words in the input sentence. It was originally investigated in the context of Support Vector Machine (Hearst et al., 1998) text classifiers with unigram features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 27, |
|
"text": "(Ribeiro et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 283, |
|
"text": "(Hearst et al., 1998)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LIME", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We apply LIME to a RoBERTa model supervised as a sentence classifier and investigate whether its scores can be used for sequence labeling. We use RoBERTa's MASK token to mask out individual words and allow LIME to generate 5000 masked samples per sentence. The resulting explanation weights are then used as classification scores for each word, with the decision threshold fine-tuned based on the development set performance. Thorne et al. (2019) found LIME to outperform attention-based approaches on the task of explaining NLI models. LIME was used to probe a LSTMbased sentence-pair classifier (Lan and Xu, 2018) by removing tokens from the premise and hypothesis sentences separately. The generated scores were used to perform binary classification of tokens, with the threshold based on F 1 performance on the development set. The token-level predictions were evaluated against human explanations of the entailment relation using the e-SNLI dataset (Camburu et al., 2018) . LIME was found to outperform other methods, however, it was also 1000\u00d7 slower than attention-based methods at generating these explanations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 426, |
|
"end": 446, |
|
"text": "Thorne et al. (2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 597, |
|
"end": 615, |
|
"text": "(Lan and Xu, 2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 976, |
|
"text": "(Camburu et al., 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LIME", |
|
"sec_num": "2.1" |
|
}, |
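The following sketch shows how the LIME setup described above could look in code, assuming the `lime` and `transformers` packages and a RoBERTa model already fine-tuned as a binary sentence classifier; the model path, class names, and helper names are illustrative, not the released implementation.

```python
import torch
from lime.lime_text import LimeTextExplainer
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed: a RoBERTa model already fine-tuned as a binary sentence classifier.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("path/to/finetuned-roberta")
model.eval()

def predict_proba(texts):
    """Classifier function required by LIME: list of strings -> class probabilities."""
    enc = tokenizer(list(texts), return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

# Perturbations replace removed words with RoBERTa's mask string.
explainer = LimeTextExplainer(class_names=["correct", "incorrect"],
                              bow=False, mask_string=tokenizer.mask_token)

def lime_word_scores(sentence, num_samples=5000):
    """Explanation weight per word position, usable as a token-level score."""
    words = sentence.split()
    exp = explainer.explain_instance(sentence, predict_proba,
                                     num_features=len(words),
                                     num_samples=num_samples)
    return dict(exp.as_map()[1])  # {word position: weight for the positive class}
```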
|
{ |
|
"text": "The attention heads in a trained transformer model are designed to identify and combine useful information for a particular task. Clark et al. (2019) found that specific heads can specialize on different linguistic properties such as syntax and coreference. However, transformer models contain many layers with multiple attention heads, distributing the text representation and making it more difficult to identify token importance for the overall task. Given a particular head, we can obtain an importance score for each token by averaging the attention scores from all the tokens that attend to it. In order to investigate the best possible setting, we report results for the attention head that achieves the highest token-level Mean Average Precision score on the development set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 149, |
|
"text": "Clark et al. (2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attention heads", |
|
"sec_num": "2.2" |
|
}, |
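A rough illustration of this head-scoring baseline, assuming HuggingFace attention outputs; the layer and head indices would be selected by sweeping all pairs and keeping the one with the best development-set MAP, and the function names here are ours.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "path/to/finetuned-roberta", output_attentions=True)
model.eval()

def head_token_scores(sentence, layer, head):
    """Token importance = average attention the token receives in one head."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: one (batch, num_heads, seq_len, seq_len) tensor per layer
    attn = out.attentions[layer][0, head]   # (seq_len, seq_len), rows attend to columns
    return attn.mean(dim=0).tolist()        # average over all attending tokens

# In practice every (layer, head) pair is scored on the development set and the
# head with the highest token-level Mean Average Precision is kept.
```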
|
{ |
|
"text": "Rei and S\u00f8gaard (2018) described a method for predicting token-level labels based on a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) architecture supervised at the sentence-level only. A dedicated attention module was integrated for building sentence representations, with its attention weights also acting as token-level importance scores. The architecture was found to outperform a gradient-based approach on the tasks of zero-shot sequence labeling for error detection, uncertainty detection, and sentiment analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 140, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "In order to obtain a single raw attention value e i for each token, biLSTM output vectors were passed through a feedforward layer:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "e i = tanh(W e h i + b e ) e i = W e e i + b e (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where e i is the attention vector for token t i ; h i is the biLSTM output for t i ; and e i is the single raw attention value. W e , b e , W e , b e are trainable parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Instead of softmax or sparsemax (Martins and Astudillo, 2016) , which would restrict the distribution of the scores, a soft attention based on sigmoid activation was used to obtain importance scores:", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 61, |
|
"text": "(Martins and Astudillo, 2016)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "a i = \u03c3( e i ) a i = a i N k=1 a k (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where N is the number of tokens and \u03c3 is the logistic function. a i shows the importance of a particular token and is in the range 0 \u2264 a i \u2264 1, independent of any other scores in the sentence; therefore, it can be directly used for sequence labeling with a natural threshold of 0.5. a i contains the same information but is normalized to sum up to 1 over the whole sentence, making it suitable for attention weights when building the sentence representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "As a i and a i are directly tied, training the former through the sentence classification objective will also train the latter for the sequence labeling task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The attention values were then used to obtain the sentence representation c by acting as weights for the biLSTM token outputs:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "c = N i=0 a i h i (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Finally, the sentence representation c was passed through the final feedforward layer, followed by a sigmoid to obtain the predicted score y for the sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "d = tanh(W d c + b d ) y = \u03c3(W y d + b y ) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where d is the sentence vector, c is the sentence representation, and y is the sentence prediction score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "W d , b d , W y , b", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "y are all trainable parameters. We adapt this approach to the transformer models by attaching a separate soft attention module on top of the token-level output representations. This effectively ignores the CLS token, which is commonly used for sentence classification, and instead builds a new sentence representation from the token representations, which replace the previously used biLSTM outputs:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "e i = tanh(W e T i + b e ) c = N i=0 a i T i (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where T i is the contextualized embedding for token t i . A diagram of the model architecture is included in Appendix F. Commonly used tokenizers for transformer models split words into subwords, while sequence labeling datasets are annotated at the word level. We find that taking the maximum attention value over all the subwords as the word-level importance score produces good results on the development sets. For a word w i split into tokens [t j , ..., t m ], where j, m \u2208 [1, N ], the resulting final word importance score r i is then given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
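A minimal PyTorch reading of the soft attention head of Eqs. 1-5, attached on top of the transformer's token outputs; the hidden/attention sizes and the padding-mask handling are illustrative choices rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SoftAttentionHead(nn.Module):
    """Soft attention over transformer token outputs T_i (Eqs. 1-5)."""

    def __init__(self, hidden_size, attn_size=300):
        super().__init__()
        self.e_proj = nn.Linear(hidden_size, attn_size)    # W_e~, b_e~ (Eqs. 1/5)
        self.e_score = nn.Linear(attn_size, 1)             # W_e, b_e   (Eq. 1)
        self.d_proj = nn.Linear(hidden_size, hidden_size)  # W_d, b_d   (Eq. 4)
        self.out = nn.Linear(hidden_size, 1)               # W_y, b_y   (Eq. 4)

    def forward(self, token_states, mask):
        # token_states: (batch, seq_len, hidden); mask: (batch, seq_len), 1 for real tokens
        e = self.e_score(torch.tanh(self.e_proj(token_states))).squeeze(-1)
        a_tilde = torch.sigmoid(e) * mask                   # unnormalized scores in [0, 1] (Eq. 2)
        a = a_tilde / a_tilde.sum(dim=1, keepdim=True)      # normalized attention weights (Eq. 2)
        c = (a.unsqueeze(-1) * token_states).sum(dim=1)     # sentence representation (Eq. 5)
        d = torch.tanh(self.d_proj(c))                      # Eq. 4
        y = torch.sigmoid(self.out(d)).squeeze(-1)          # sentence score (Eq. 4)
        return y, a_tilde                                   # a_tilde doubles as token-level scores
```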
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "r i = max({ a j , a j+1 , ..., a m })", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
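A small sketch of the subword-to-word aggregation in Eq. 6, assuming the word-to-subword alignment comes from a HuggingFace fast tokenizer's `word_ids()`; helper names are ours.

```python
def word_scores_from_subwords(a_tilde, word_ids):
    """Eq. 6: the importance of word w_i is the max over its subword scores.

    a_tilde:  per-subword attention scores for one sentence.
    word_ids: word index of each subword (None for special tokens), e.g. from
              a HuggingFace fast tokenizer's BatchEncoding.word_ids().
    """
    scores = {}
    for score, w in zip(a_tilde, word_ids):
        if w is None:                       # skip <s>, </s>, padding
            continue
        scores[w] = max(score, scores.get(w, float("-inf")))
    return [scores[i] for i in sorted(scores)]
```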
|
{ |
|
"text": "During training, we optimize sentence-level binary cross-entropy as the main objective function:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L 1 = j CrossEntropy(y (j) , y (j) ) |y|", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where y (j) and\u1ef9 (j) are the predicted sentence classification logits and the gold label for the j th sentence respectively. We also adopt the additional loss functions from , which encourage the attention weights to behave more like token-level classifiers:", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 20, |
|
"text": "(j)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L 2 = j (min j ( a i ) \u2212 0) 2 |y| (8) L 3 = j (max j ( a i ) \u2212 y (j) ) 2 |y|", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Eq. 8 optimizes the minimum unnormalized attention to be 0 and therefore incentivizes the model to only focus on some, but not all words; Eq. 9 ensures that some attention weights are close to 1 if the overall sentence is classified as positive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We then jointly optimize these three loss functions using a hyperparameter \u03b3:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "L = L 1 + \u03b3(L 2 + L 3 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft attention", |
|
"sec_num": "2.3" |
|
}, |
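A sketch of the combined objective L = L1 + γ(L2 + L3) for a padded batch, reusing the `a_tilde` and `mask` conventions from the module sketch above; the masking trick and the default `gamma` value are our own placeholder choices.

```python
import torch
import torch.nn.functional as F

def joint_loss(y_pred, y_gold, a_tilde, mask, gamma=0.1):
    """L = L1 + gamma * (L2 + L3), Eqs. 7-9; gamma here is a placeholder value."""
    # L1 (Eq. 7): sentence-level binary cross-entropy on the sigmoid outputs
    l1 = F.binary_cross_entropy(y_pred, y_gold.float())

    # L2 (Eq. 8): push the smallest unnormalized attention weight towards 0
    min_attn = (a_tilde + (1.0 - mask) * 1e6).min(dim=1).values  # ignore padding
    l2 = (min_attn ** 2).mean()

    # L3 (Eq. 9): push the largest attention weight towards the sentence label
    max_attn = (a_tilde * mask).max(dim=1).values
    l3 = ((max_attn - y_gold.float()) ** 2).mean()

    return l1 + gamma * (l2 + l3)
```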
|
{ |
|
"text": "Our experiments show that, when combined with transformer-based models, the soft attention method tends to spread out the attention too widely. Instead of focusing on specific important words, the model broadly attends to the whole sentence. Figures 3 and 4 in Appendix A present examples demonstrating such behaviour. As transformers contain several layers of attention, with multiple heads in each layer, the information in the sentence gets distributed across all tokens before it reaches the soft attention module at the top. To improve this behaviour and incentivize the model to direct information through a smaller and more focused set of tokens, we experiment with a weighted soft attention:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 257, |
|
"text": "Figures 3 and 4", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Weighted soft attention", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "a i =\u00e3 \u03b2 i N k=1\u00e3 \u03b2 k (10)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weighted soft attention", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "where \u03b2 is a hyperparamete and where values \u03b2 > 1 make the weight distribution sharper, allowing the model to focus on a smaller number of tokens. We experiment with values of \u03b2 \u2208 {1, 2, 3, 4} on the development sets and find \u03b2 = 2 to significantly improve token labeling performance without negatively affecting sentence classification results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weighted soft attention", |
|
"sec_num": "2.4" |
|
}, |
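The weighted soft attention of Eq. 10 only changes the normalization step; the snippet below (our naming) is the drop-in replacement for the normalization line in the module sketch above.

```python
def weighted_normalize(a_tilde, mask, beta=2.0):
    """Eq. 10: sharpen the distribution by raising the weights to the power beta."""
    powered = (a_tilde * mask) ** beta
    return powered / powered.sum(dim=1, keepdim=True)

# beta = 1 recovers the plain soft attention of Eq. 2; beta = 2 gave the best
# development-set results in the experiments described above.
```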
|
{ |
|
"text": "We investigate the performance of these methods as zero-shot sequence labelers using three different datasets. Gold token-level annotation in these datasets is used for evaluation; however, the models are trained using sentence-level labels only. The CoNLL 2010 shared task (Farkas et al., 2010) 2 focuses on the detection of uncertainty cues in natural language text. The dataset contains 19, 542 examples with both sentence-level uncertainty labels and annotated keywords indicating uncertainty. We use the train/test data from the task and randomly choose 10% of the training set for development.", |
|
"cite_spans": [ |
|
{ |
|
"start": 274, |
|
"end": 295, |
|
"text": "(Farkas et al., 2010)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We also evaluate on the task of grammatical error detection (GED) -identifying which sentences are grammatically incorrect (i.e., contain at least one grammatical error). The First Certificate in English dataset FCE (Yannakoudakis et al., 2011) consists of essays written by non-native learners of English, annotated for grammatical errors. We use the train/dev/test splits released by Rei and Yannakoudakis (2016) for sequence labeling, with a total of 33, 673 sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 244, |
|
"text": "(Yannakoudakis et al., 2011)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3" |
|
}, |
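Sentence-level training labels for the GED setting are derived from the token annotations exactly as described (a sentence is positive if it contains at least one annotated error); a trivial sketch with made-up example labels:

```python
def sentence_label(token_labels):
    """A sentence is labeled positive (1) if any of its tokens is annotated as an error."""
    return int(any(label == 1 for label in token_labels))

# Made-up example: token-level annotation for a five-word sentence
print(sentence_label([0, 1, 0, 1, 0]))  # -> 1, used as the sentence-level training label
```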
|
{ |
|
"text": "In addition, we evaluate on the Write & Improve (Yannakoudakis et al., 2018) and LOCNESS (Granger, 1998) GED dataset 3 (38, 692 sentences) released as part of the BEA 2019 shared task (Bryant et al., 2019) . It contains English essays written in response to varied topics and by English learners from different proficiency levels, as well as native English speakers. As the gold test set labels are not publicly available, we evaluate on the released development set and use 10% of the training data for tuning 4 . For both GED datasets, we train the model to detect grammatically incorrect sentences and evaluate how well the methods can identify individual tokens that have been annotated as errors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 76, |
|
"text": "(Yannakoudakis et al., 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 89, |
|
"end": 104, |
|
"text": "(Granger, 1998)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 205, |
|
"text": "(Bryant et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We use the pre-trained RoBERTa-base (Liu et al., 2019) model, made available by HuggingFace (Wolf et al., 2020) , as our transformer architecture. Following Mosbach et al. (2021) , transformer models are fine-tuned for 20 epochs, and the best performing checkpoint is then chosen based on sentence-level performance on the development set. Each experiment is repeated with 5 different random seeds and the averaged results are reported. The average duration of training on Nvidia GeForce RTX 2080Ti was 1 hour. Significance testing is performed with a two-tailed paired t-test and a = 0.05. Hyperparameteres are tuned on the development set and presented in Appendices B and C.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 54, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 111, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 157, |
|
"end": 178, |
|
"text": "Mosbach et al. (2021)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental setup", |
|
"sec_num": "4" |
|
}, |
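A schematic of the training protocol above (20 epochs, checkpoint selection by sentence-level development performance, 5 random seeds); the seed values and the `build_model` / `train_one_epoch` / `dev_sentence_f1` callables are placeholders, not part of the released code.

```python
import random
import numpy as np
import torch

SEEDS = [11, 22, 33, 44, 55]   # placeholder values; the paper uses 5 random seeds
EPOCHS = 20

def run_one_seed(seed, build_model, train_one_epoch, dev_sentence_f1):
    """Fine-tune for 20 epochs and keep the checkpoint with the best dev sentence F1."""
    random.seed(seed); np.random.seed(seed); torch.manual_seed(seed)
    model = build_model()
    best_f1, best_state = -1.0, None
    for _ in range(EPOCHS):
        train_one_epoch(model)
        f1 = dev_sentence_f1(model)
        if f1 > best_f1:
            best_f1 = f1
            best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return model

# Reported numbers are averaged over the models obtained from the 5 seeds.
```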
|
{ |
|
"text": "The LIME and attention head methods provide only a score without a natural decision boundary for classification. Therefore, we choose their thresholds based on the token-level F 1 -score on the development set. In contrast, the soft attention and weighted soft attention methods do not require such additional tuning that uses token-level labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental setup", |
|
"sec_num": "4" |
|
}, |
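For the LIME and attention-head baselines, the threshold search described above can be implemented as a simple sweep over candidate values on the development set; the scikit-learn-based sketch below is our own illustration.

```python
import numpy as np
from sklearn.metrics import f1_score

def tune_threshold(dev_scores, dev_labels):
    """Pick the token-level decision threshold that maximizes F1 on the dev set."""
    scores = np.asarray(dev_scores, dtype=float)
    best_t, best_f1 = 0.0, -1.0
    for t in np.unique(scores):                       # candidate thresholds
        preds = (scores >= t).astype(int)
        f1 = f1_score(dev_labels, preds, zero_division=0)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# The (weighted) soft attention scores lie in [0, 1] by construction, so they
# can instead be thresholded at 0.5 without any token-level labels.
```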
|
{ |
|
"text": "The results are presented in Table 1 . Each model is trained as a sentence classifier and then evaluated as a token labeler. The challenge of the zero-shot sequence-labeling setting lies in the fact that the models are trained without utilizing any gold tokenlevel signal; nevertheless, some methods perform considerably better than others. For reference, we also include a random baseline, which samples token-level scores from the standard uniform distribution; a RoBERTa model supervised as a sentence classifier only; and the model from Rei and S\u00f8gaard (2018) based on BiLSTMs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 36, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We report the F 1 -measure on the token level along with Mean Average Precision (MAP) for returning positive tokens. The MAP metric views the task as a ranking problem and therefore removes Figure 1 : Example word-level importance scores r i (Eq. 6) of different methods applied to an excerpt from the CoNLL10 dataset. HEAD corresponds to attention heads; SA to soft attention; and W-SA to weighted soft attention. We can observe how W-SA is the only method that correctly assigns substantially higher weights to the 'may' and 'seems' uncertainty cues.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 198, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "the dependence on specific classification thresholds. In addition, we report the F 1 -measure on the main sentence-level task to ensure the proposed methods do not have adverse effects on sentence classification performance. Precision and recall values are included in Appendix E. LIME has relatively low performance on FCE and BEA 2019, while it achieves somewhat higher results on CoNLL 2010. Comparing the MAP scores, the attention head method performs substantially better, especially considering that it is much more lightweight and requires no additional computation. Nevertheless, both of these methods rely on using some annotated examples to tune their classification threshold, which precludes their application in a truly zero-shot setting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
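Token-level MAP can be computed by treating each sentence's token scores as a ranking and averaging the per-sentence average precision; the sketch below uses scikit-learn's `average_precision_score` under that interpretation, which may differ in detail from the authors' evaluation script.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def token_level_map(all_labels, all_scores):
    """Mean Average Precision over sentences containing at least one positive token.

    all_labels / all_scores: per-sentence lists of gold token labels and
    predicted token importance scores.
    """
    aps = []
    for labels, scores in zip(all_labels, all_scores):
        if sum(labels) == 0:            # average precision is undefined otherwise
            continue
        aps.append(average_precision_score(labels, scores))
    return float(np.mean(aps))
```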
|
{ |
|
"text": "Combining the soft attention mechanism with the transformer architecture provides some improvements over the previous methods, while also improving over . A notable exception is the CoNLL 2010 dataset where this method achieves only 8% F 1 and 20% MAP. Error analysis revealed that this is due to the transformer representations spreading attention scores evenly between a large number of tokens, as observed in Figure 1 . Uncertainty cues in CoNLL 2010 can span across whole sentences (e.g., 'Either ... or ...'), with such examples encouraging the model to distribute information even further.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 420, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The weighted soft attention modification addresses this issue and considerably improves performance across all metrics on all datasets. Compared to the non-weighted version of the soft attention method, applying the extra weights leads to a significant improvement in terms of MAP, with a minimum of 5.01% absolute gain on FCE. The improvements are also statistically significant compared to the current state of the art : 5.35% absolute improvement on FCE; 9.38% on BEA 2019; and 3.36% on CoNLL 2010. While the F 1 on CoNLL 2010 is slightly lower, the MAP score is higher, indicating that the model has difficulty finding an optimal decision boundary, but nevertheless provides a better ranking. In future work, the weighted soft attention method for transformers could potentially be combined with token supervision in order to train robust multi-level models (Barrett et al., 2018; Rei and S\u00f8gaard, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 862, |
|
"end": 884, |
|
"text": "(Barrett et al., 2018;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 885, |
|
"end": 907, |
|
"text": "Rei and S\u00f8gaard, 2019)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We investigated methods for inferring tokenlevel predictions from transformer models trained only on sentence-level annotations. Experiments showed that previous approaches designed for LSTM architectures do not perform as well when applied to transformers. As transformer models already contain multiple layers of multi-head attention, the input representations get distributed between many tokens, making it more difficult to identify the importance of each individual token. LIME was not able to accurately identify target tokens, while the soft attention method primarily assigned equal attention scores across most words in a sentence. Directly using the scores from the existing attention heads performed better than expected, but required some annotated data for tuning the decision threshold. Modifying the soft attention module with an explicit sharpness constraint on the weights was found to encourage more distinct predictions, significantly improving token-level results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We present samples of word-level predictions (word-level importance scores r i , Eq. 6) to illustrate differences between methods. In the figures that follow, HEAD refers to attention heads, SA to soft attention, and W-SA to weighted soft attention. Figure 2 : CoNLL 2010 negative sentence (without uncertainty cues). We can clearly see that most methods correctly put weights close to 0 for all words, except from HEAD, which focuses on 'shown' and '.'. We surmise this is due to the fact that, for HEAD, weights over the whole sentence have to sum up to 1. . We can observe that HEAD correctly identifies both of the uncertainty cues: 'may' and 'seems'; however the weight for 'may' is quite low. Similarly, LIME identifies both tokens, but the weight for 'seems' is particularly low (lower than for 'to'). SA simply assigns high weights to all words. W-SA focuses primarily on the two uncertainty cue words; however, it also incorrectly focuses on 'not'. . We can see that both LIME and HEAD struggle to assign informative and/or useful weights to the words. All SA weights are relatively high, with small variations in value. We can see that squaring (W-SA) leads to more well-defined weights over the whole sentence, with high weights mainly observed in the second part of the sentence, which is the one that contains incorrect words. However, on this dataset, even W-SA struggles to correctly identify which words precisely are incorrect. Table 4 : Mean sentence-level F 1 score on the development set, averaged over 5 runs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 258, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1445, |
|
"end": 1452, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Example word-level predictions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/bujol12/ bert-seq-interpretability", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://rgai.sed.hu/node/118 3 https://www.cl.cam.ac.uk/research/nl/ bea2019st/ 4 https://github.com/bujol12/ bert-seq-interpretability/blob/master/ dev_indices_train_ABC.txt", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank James Thorne for his assistance in setting up the LIME experiments. Kamil Bujel was funded by the Undergraduate Research Opportunities Programme Bursary from the Department of Computing at Imperial College London.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": " Table 10 : Token-level results: P, R and F 1 refer to Precision, Recall and F-measure respectively on the positive class. MAP is the Mean Average Precision at the token-level.F Weighted soft attention architecture ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 9, |
|
"text": "Table 10", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A diagnostic study of explainability techniques for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Pepa", |
|
"middle": [], |
|
"last": "Atanasova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [ |
|
"Grue" |
|
], |
|
"last": "Simonsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christina", |
|
"middle": [], |
|
"last": "Lioma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isabelle", |
|
"middle": [], |
|
"last": "Augenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3256--3274", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.263" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pepa Atanasova, Jakob Grue Simonsen, Christina Li- oma, and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classifi- cation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 3256-3274, Online. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Sequence classification with human attention", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joachim", |
|
"middle": [], |
|
"last": "Bingel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nora", |
|
"middle": [], |
|
"last": "Hollenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Rei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "302--312", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K18-1030" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, and Anders S\u00f8gaard. 2018. Sequence classification with human attention. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 302-312, Brussels, Bel- gium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Context is key: Grammatical error detection with contextual word representations", |
|
"authors": [ |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Yannakoudakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Rei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--115", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4410" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel Bell, Helen Yannakoudakis, and Marek Rei. 2019. Context is key: Grammatical error detec- tion with contextual word representations. In Pro- ceedings of the Fourteenth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 103-115, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The BEA-2019 shared task on grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bryant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariano", |
|
"middle": [], |
|
"last": "Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "\u00d8istein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--75", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4406" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Bryant, Mariano Felice, \u00d8istein E. An- dersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Pro- ceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "e-snli: Natural language inference with natural language explanations", |
|
"authors": [ |
|
{ |
|
"first": "Oana-Maria", |
|
"middle": [], |
|
"last": "Camburu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rockt\u00e4schel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Lukasiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "NeurIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9560--9572", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oana-Maria Camburu, Tim Rockt\u00e4schel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Nat- ural language inference with natural language expla- nations. In NeurIPS, pages 9560-9572.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "What does BERT look at? an analysis of BERT's attention", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Urvashi", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--286", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4828" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Pro- ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "ERASER: A benchmark to evaluate rationalized NLP models", |
|
"authors": [ |
|
{ |
|
"first": "Jay", |
|
"middle": [], |
|
"last": "Deyoung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarthak", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nazneen", |
|
"middle": [], |
|
"last": "Fatema Rajani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Lehman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Byron", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Wallace", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4443--4458", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.408" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443-4458, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The CoNLL-2010 shared task: Learning to detect hedges and their scope in natural language text", |
|
"authors": [ |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "M\u00f3ra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00e1nos", |
|
"middle": [], |
|
"last": "Csirik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "Szarvas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning -Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich\u00e1rd Farkas, Veronika Vincze, Gy\u00f6rgy M\u00f3ra, J\u00e1nos Csirik, and Gy\u00f6rgy Szarvas. 2010. The CoNLL- 2010 shared task: Learning to detect hedges and their scope in natural language text. In Proceed- ings of the Fourteenth Conference on Computational Natural Language Learning -Shared Task, pages 1-12, Uppsala, Sweden. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The computer learner corpus: A versatile new source of data for SLA research", |
|
"authors": [ |
|
{ |
|
"first": "Sylviane", |
|
"middle": [], |
|
"last": "Granger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sylviane Granger. 1998. The computer learner cor- pus: A versatile new source of data for SLA research. Longman.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Support vector machines", |
|
"authors": [ |
|
{ |
|
"first": "Marti", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hearst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Susan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edgar", |
|
"middle": [], |
|
"last": "Dumais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Osuna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Platt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Scholkopf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "IEEE Intelligent Systems and their applications", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "18--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marti A. Hearst, Susan T Dumais, Edgar Osuna, John Platt, and Bernhard Scholkopf. 1998. Support vec- tor machines. IEEE Intelligent Systems and their ap- plications, 13(4):18-28.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The promise and peril of human evaluation for model interpretability", |
|
"authors": [ |
|
{ |
|
"first": "Bernease", |
|
"middle": [], |
|
"last": "Herman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.07414" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernease Herman. 2017. The promise and peril of human evaluation for model interpretability. arXiv preprint arXiv:1711.07414.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness?", |
|
"authors": [ |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Jacovi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4198--4205", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.386" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alon Jacovi and Yoav Goldberg. 2020. Towards faith- fully interpretable NLP systems: How should we de- fine and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 4198-4205, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Attention is not Explanation", |
|
"authors": [ |
|
{ |
|
"first": "Sarthak", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Byron", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Wallace", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "3543--3556", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1357" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Neural network models for paraphrase identification, semantic textual similarity, natural language inference, and question answering", |
|
"authors": [ |
|
{ |
|
"first": "Wuwei", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3890--3902", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wuwei Lan and Wei Xu. 2018. Neural network models for paraphrase identification, semantic textual simi- larity, natural language inference, and question an- swering. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3890-3902, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Roberta: A robustly optimized bert pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", |
|
"authors": [ |
|
{ |
|
"first": "Andre", |
|
"middle": [], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramon", |
|
"middle": [], |
|
"last": "Astudillo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1614--1623", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of atten- tion and multi-label classification. In International Conference on Machine Learning, pages 1614-1623. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "On the stability of fine-tuning {bert}: Misconceptions, explanations, and strong baselines", |
|
"authors": [ |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Mosbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maksym", |
|
"middle": [], |
|
"last": "Andriushchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marius Mosbach, Maksym Andriushchenko, and Diet- rich Klakow. 2021. On the stability of fine-tuning {bert}: Misconceptions, explanations, and strong baselines. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Zero-shot sequence labeling: Transferring knowledge from sentences to tokens", |
|
"authors": [ |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Rei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "293--302", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1027" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marek Rei and Anders S\u00f8gaard. 2018. Zero-shot se- quence labeling: Transferring knowledge from sen- tences to tokens. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 293-302, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Jointly learning to label sentences and tokens", |
|
"authors": [ |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Rei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "6916--6923", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marek Rei and Anders S\u00f8gaard. 2019. Jointly learn- ing to label sentences and tokens. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 33, pages 6916-6923.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Compositional sequence labeling models for error detection in learner writing", |
|
"authors": [ |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Rei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Yannakoudakis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1181--1191", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1112" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marek Rei and Helen Yannakoudakis. 2016. Composi- tional sequence labeling models for error detection in learner writing. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1181- 1191, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Why Should I Trust You?\": Explaining the Predictions of Any Classifier", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "97--101", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-3020" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why Should I Trust You?\": Explaining the Predictions of Any Classifier. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Demonstrations, pages 97-101, San Diego, Califor- nia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "How to fine-tune bert for text classification?", |
|
"authors": [ |
|
{ |
|
"first": "Chi", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yige", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "China National Conference on Chinese Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "194--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In China National Conference on Chinese Computa- tional Linguistics, pages 194-206. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Generating token-level explanations for natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Thorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodoulopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arpit", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "963--969", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2019. Generating token-level explanations for natural language inference. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 963-969, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Attention is not not explanation", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Wiegreffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuval", |
|
"middle": [], |
|
"last": "Pinter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--20", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 11-20, Hong Kong, China. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Remi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "von Platen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Plu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Canwen", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teven", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Scao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Gugger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariama", |
|
"middle": [], |
|
"last": "Drame", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quentin", |
|
"middle": [], |
|
"last": "Lhoest", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-demos.6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Developing an automated writing placement system for esl learners", |
|
"authors": [ |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Yannakoudakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00d8istein", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Andersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ardeshir", |
|
"middle": [], |
|
"last": "Geranpayeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diane", |
|
"middle": [], |
|
"last": "Nicholls", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Applied Measurement in Education", |
|
"volume": "31", |
|
"issue": "3", |
|
"pages": "251--267", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helen Yannakoudakis, \u00d8istein E Andersen, Ardeshir Geranpayeh, Ted Briscoe, and Diane Nicholls. 2018. Developing an automated writing placement system for esl learners. Applied Measurement in Education, 31(3):251-267.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "A new dataset and method for automatically grading ESOL texts", |
|
"authors": [ |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Yannakoudakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Medlock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "180--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180-189, Portland, Oregon, USA. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "CoNLL 2010 positive sentence (with uncertainty cues)", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "FCE positive sentence (contains grammatical errors)", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "Results on FCE, BEA 2019 and CoNLL 2010. Sent F 1 refers to F-measure on the sentence classification task; F 1 refers to token-level classification performance; MAP is the token-level Mean Average Precision.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Model hyperparameters.", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"3\">C Word-level prediction thresholds</td></tr><tr><td>Dataset</td><td>Method</td><td>Threshold</td></tr><tr><td>CoNLL 2010</td><td>LIME</td><td>0.200</td></tr><tr><td/><td>Random baseline</td><td>0.500</td></tr><tr><td/><td>Attention heads</td><td>0.320</td></tr><tr><td/><td>Rei and S\u00f8gaard (2018)</td><td>0.500</td></tr><tr><td/><td>Soft attention</td><td>0.500</td></tr><tr><td/><td>Weighted soft attention</td><td>0.500</td></tr><tr><td>FCE</td><td>LIME</td><td>0.001</td></tr><tr><td/><td>Random baseline</td><td>0.500</td></tr><tr><td/><td>Attention heads</td><td>0.080</td></tr><tr><td/><td>Rei and S\u00f8gaard (2018)</td><td>0.500</td></tr><tr><td/><td>Soft attention</td><td>0.500</td></tr><tr><td/><td>Weighted soft attention</td><td>0.500</td></tr><tr><td>BEA 2019</td><td>LIME</td><td>0.010</td></tr><tr><td/><td>Random baseline</td><td>0.500</td></tr><tr><td/><td>Attention heads</td><td>0.080</td></tr><tr><td/><td>Rei and S\u00f8gaard (2018)</td><td>0.500</td></tr><tr><td/><td>Soft attention</td><td>0.500</td></tr><tr><td/><td>Weighted soft attention</td><td>0.500</td></tr></table>", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "Word-level thresholds above which a word is classified as positive.", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">D Validation set results</td><td/></tr><tr><td>Dataset</td><td>Method</td><td>Sent F1</td></tr><tr><td>CoNLL 2010</td><td>LIME</td><td>91.77</td></tr><tr><td/><td>RoBERTa</td><td>91.77</td></tr><tr><td/><td>Attention heads</td><td>91.77</td></tr><tr><td/><td>Soft attention</td><td>92.12</td></tr><tr><td/><td>Weighted soft attention</td><td>91.82</td></tr><tr><td>FCE</td><td>LIME</td><td>84.49</td></tr><tr><td/><td>RoBERTa</td><td>84.49</td></tr><tr><td/><td>Attention heads</td><td>84.49</td></tr><tr><td/><td>Soft attention</td><td>84.82</td></tr><tr><td/><td>Weighted soft attention</td><td>85.56</td></tr><tr><td>BEA 2019</td><td>LIME</td><td>83.65</td></tr><tr><td/><td>RoBERTa</td><td>83.65</td></tr><tr><td/><td>Attention heads</td><td>83.65</td></tr><tr><td/><td>Soft attention</td><td>83.47</td></tr><tr><td/><td>Weighted soft attention</td><td>83.64</td></tr></table>", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |